I want to write a program that will prompt the user
for the type of conversion he wants: “binary to decimal” or “decimal to binary”.
- If he chooses “binary to decimal”, the program should prompt the user for 8 bits
(values only 0 or 1) and then display the number in decimal.
- Otherwise, prompt the user for any whole number between 0 and 255 and display the 8 bits
of that number in binary representation.

It should use

#include <stdio.h>
#include <conio.h>

The program should be written in C. I need it for Borland C, and the code should be written using #include <conio.h> and #include <stdio.h>.

Hint: for instance, binary 10010 is calculated as 1*16 + 0*8 + 0*4 + 1*2 + 0*1 = decimal 18.


FYI, nobody here is going to write this for you. We'll help you write it yourself, but that's it.

Thank you very much. I know nobody will write the code for me; I was just looking for a way to make the code I have shorter.

In such a case it's best to post the code and ask how to make it shorter.

This is the code I've written:

#include <stdio.h>
#include "conio.h"

int main()
{
    printf("Enter an 8 bit binary number one at a time and press enter after each one:  ");
    scanf("%d" "%d" "%d" "%d" "%d" "%d" "%d" "%d", &x, &v, &z, &d, &e, &f, &g, &h);
    // above, the user has to enter the binary digits one by one

    decimal = x*128 + v*64 + z*32 + d*16 + e*8 + f*4 + g*2 + h*1;
    printf("The decimal value is %d\n", decimal); // shows the decimal value

    _getch();
    return 0;
}

Well, aside from the fact that you don't declare the eight variables you use to get the bit values, it should work - if that's what you need it to do.

However, assignments like these more often involve converting a binary number in a string, which works a bit differently. Fortunately, there is a fairly easy algorithm for converting strings of any base to a standard integer, and vice versa. The general algorithms (in Python, though they should be understandable well enough if you treat it as pseudo-code) can be found here; you'll want to use the second algorithm, the one titled 'str2int()'.
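For readers who cannot follow the link, a rough C sketch of that kind of general string-to-integer routine might look like this (my own illustration, not the linked Python code; the str2int name and the digit set are assumptions):

#include <string.h>

/* Illustrative sketch: interpret s as a number in the given base,
   accumulating value = value * base + digit for each character.
   Stops at the first character that is not a valid digit for the base. */
int str2int(const char *s, int base)
{
    const char digits[] = "0123456789abcdefghijklmnopqrstuvwxyz";
    int value = 0;

    for (; *s != '\0'; s++)
    {
        const char *p = strchr(digits, *s);
        if (p == NULL || (int)(p - digits) >= base)
            break;
        value = value * base + (int)(p - digits);
    }
    return value;
}

The range check on (p - digits) is the part that rejects characters that are too large for the given base.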


That code of Schoil-R-LEA's has a bug in its ranges (try str2int('98789', 10) for example), but if you take the right interpretation of it as pseudocode, it will work for you.

Thank you for pointing those errors out, Tonyjv; you are completely correct, and I had overlooked them this whole time.

This is the code I've written:

#include <stdio.h>
#include "conio.h"

int main()
{
    printf("Enter an 8 bit binary number one at a time and press enter after each one:  ");
    scanf("%d" "%d" "%d" "%d" "%d" "%d" "%d" "%d", &x, &v, &z, &d, &e, &f, &g, &h);
    // above, the user has to enter the binary digits one by one

    decimal = x*128 + v*64 + z*32 + d*16 + e*8 + f*4 + g*2 + h*1;
    printf("The decimal value is %d\n", decimal); // shows the decimal value

    _getch();
    return 0;
}

Well, you applied my initial hint, so here is a little follow up ...

// binary (base 2) to denary (base 10) action

#include <stdio.h>
#include <string.h>  /* needed for strlen() below */

int main()
{
  char bin[80] = "10010";  // this is the test binary string
  int  b, k, m, n;
  int  len, sum = 0;

  len = strlen(bin) - 1;
  for(k = 0; k <= len; k++)
  {
    // spell out bin and get numeric value
    n = (bin[k] - '0');
    for(b = 1, m = len; m > k; m--)
    {
      // 2 4 8 16 32 64 ... place values in reverse
      b *= 2;
    }
    // sum it up
    sum = sum + n * b;
    // this is a test to show the action
    printf("%d*%d + ", n,b);
  }
  printf("\nbinary %s --> denary %d\n", bin, sum);

  getchar();  // wait
  return 0;
}

/* result ...
1*16 + 0*8 + 0*4 + 1*2 + 0*1 +
binary 10010 --> denary 18
*/

It isn't easy to write C code after being spoiled by Python. However, I do enjoy the crispness.

Thank you very much, you guys have been most helpful.

The conversion methods used in this thread are inefficient.

Just now I created and tested two functions for you to use that perform Binary String to Integer conversion and Integer to Binary String conversion. They are the most efficient I could possibly make them. Any more efficient and I'd have to write it in assembly.

#include <string.h>  /* for strlen and memmove */

int BinaryToInt(char *in_BinaryString) {
   int t, Result = 0, len = strlen(in_BinaryString);
   for (t=0;t<len;t++) 
       Result |= ((in_BinaryString[t]=='1') << (len-t-1));
   return Result;
}

#define MaxLen 33  // Total number of bits supported (32 bit + null char)
char *IntToBinary(int in_Integer) {
    static char Result[MaxLen]; 
    int t;
    for (t=MaxLen-1;in_Integer>0;in_Integer>>=1,t--)
        Result[t] = (in_Integer&1)?'1':'0';
    memmove(Result,Result+t+1,MaxLen-t-1);  /* move the digits to the front */
    Result[MaxLen-t-1] = '\0';
    return Result;
}
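
If it helps anyone trying these out, here is a minimal usage sketch (assuming both functions above are defined earlier in the same file):

#include <stdio.h>

int main(void)
{
    /* BinaryToInt and IntToBinary as defined above */
    printf("%d\n", BinaryToInt("10010"));   /* prints 18 */
    printf("%s\n", IntToBinary(18));        /* prints 10010 */
    return 0;
}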

Unlike the original request for 8 bit conversion, these puppies can convert up to 64 bits if your computer can handle it. With a little tweaking they can convert enormous arrays of numbers to enormous arrays of numbers. The magic is in the algorithm.

In the Binary String to Integer converter, I loop through the given characters from start to end with the goal of detecting any '1' characters. For every one that is found, the Result, which was set initially to zero, gets overlapped by a single bit that corresponds to the position in which it was found (len-t-1). So basically it's just a '1' detector that overlays real 1 bits where it finds a character '1'. The result is the real binary number as an integer.

In the Integer to Binary String converter, the main loop right-shifts the given integer until it becomes zero. That's how it knows when it is done with the loop. Before every right shift, it checks the value of the least significant bit (in_Integer&1), converts it to a character, and appends it, in the correct bit order, toward the end of the string array. It uses memmove to safely move the completed string to the front of the string buffer, and finally terminates it with a null character as required.

The reason I don't just copy it straight to the front is that the method I use to convert the integer to a binary string does not know the number of bits in the given number in advance, and I wanted to preserve the bit order without having two loops in my function.

Anyways, I'm going to use these functions from now on in my projects, and feel free to use and tweak them to your liking. Also feel free to ask questions about my methodology or if you can think of a way to make it even more efficient.

@NIGHTS: Good for you for taking on this simple beginner exercise in a way that is interesting to you, even though you are already more advanced in your skills.

However, there is hidden content in beginner exercises, and therefore we other posters tried to adapt to the OP's situation and give advice that would benefit him in his life as a programmer and guide him to the right priorities.

Unfortunately, in my view (maybe unduly influenced by my main language, Python), optimizing for speed in the OP's case is what Python circles frown upon as premature optimization. That does not, however, mean that your solution is wrong for you.

There are a few inconsistencies in your variable naming, which is maybe more important to the OP than optimization of the code. You have three kinds of names:

Result[MaxLen]: capitalized
t: a cryptic "type it fast" letter
in_Integer: a combination of camelCase and words_with_underscores

This is, for me, something you could consider and learn from your own code. I recommend making one policy for names and being very consistent with it. You'll be happy for that in the future, even if it now looks less interesting than super-optimizing code.

Finally, with my limited knowledge of C, does declaring Result as static make sense? I would think that a register hint for t could be used, but I do not understand the motivation for the static.

A second point about your optimization: you use the function strlen in the first function. You must know how it is implemented with null-terminated strings, i.e. it walks through the whole string to find the null.
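
For anyone unsure what that means in practice, a typical strlen looks something like this (an illustration, not the actual library source):

#include <stddef.h>  /* for size_t */

size_t my_strlen(const char *s)
{
    const char *p = s;
    while (*p != '\0')   /* must inspect every character... */
        p++;             /* ...until the terminating null is found */
    return (size_t)(p - s);
}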

@NIGHTS: Good for you for taking on this simple beginner exercise in a way that is interesting to you, even though you are already more advanced in your skills.

However, there is hidden content in beginner exercises, and therefore we other posters tried to adapt to the OP's situation and give advice that would benefit him in his life as a programmer and guide him to the right priorities.

That's why I explained the algorithm in detail.

Unfortunately, in my view (maybe unduly influenced by my main language, Python), optimizing for speed in the OP's case is what Python circles frown upon as premature optimization. That does not, however, mean that your solution is wrong for you.

For those that don't know what tonyjv is talking about, read these links:

http://c2.com/cgi/wiki?PrematureOptimization
http://en.wikipedia.org/wiki/Program_optimization

As for your concern about premature optimization: I agree that in the ladder of programming logic, the highest-level logic should always be programmed for readability over speed. However, this procedure is purely mathematical in nature and, as such, should be programmed consistently with the efficiency and philosophies that guide such fundamental mathematical logic.

There are a few inconsistencies in your variable naming, which is maybe more important to the OP than optimization of the code. You have three kinds of names:

Result[MaxLen]: capitalized

I prefer to title case my variables, such that all words are connected together InThisManner.

t: a cryptic "type it fast" letter

The use of single-letter loop iterators is debatable. In a language like Python, this would be discouraged because the logic Python works with is much higher level than what I am showcasing. In a language like C, where symbol context is just as important as logic and structure, single-letter loop iterators can become helpful in allowing the reader to view the logic more clearly, since the loop iterator is a required and basic component in any low-level loop.

in_Integer: a combination of camelCase and words_with_underscores

This is intentional. All my parameters describe the direction of the variable as part of the parameter name. If a parameter gets modified by the function, I would call it out_ParameterName or inout_ParameterName, depending on its use. The two parts of the variable name are separated by an underscore as a means to separate their functions. The "in", "out", or "inout" prefix is lowercase but consistent, and the parameter names follow the same rules as all my variables, using connected title case.

This method of naming the direction of the parameter has many obvious uses, and it also helps to differentiate which variables are local to the function and which are from the outside world.
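
As a made-up illustration of that convention (none of these names are from the posted code):

/* in_   : read only by the function
   out_  : written by the function
   inout_: read and modified by the function */
void AppendBits(int in_Value, char *out_Buffer, int *inout_Length);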

This is, for me, something you could consider and learn from your own code. I recommend making one policy for names and being very consistent with it. You'll be happy for that in the future, even if it now looks less interesting than super-optimizing code.

As you can probably see from my comments, my choice of variable names was very carefully considered, and I try to keep consistent.

Finally, with my limited knowledge of C, does declaring Result as static make sense? I would think that a register hint for t could be used, but I do not understand the motivation for the static.

It's a thread-safe approach I try to keep consistent with, so that if the program were to be used in a multi-threaded context, the calling function can read the string without the threat of another thread declaring the memory and writing to it.

This does not mean it is completely thread safe. To make it completely thread safe you need to use mutex locking on that static variable. I only made it static here to promote good programming habits.

A second point about your optimization: you use the function strlen in the first function. You must know how it is implemented with null-terminated strings, i.e. it walks through the whole string to find the null.

I see no other way to find out how many characters the user submitted to the program. These functions were designed to work directly with a human. The programmer need only pass the human's parameters to these functions -- it is a feature, not a lapse of efficiency. If I had designed it to be used by a program that knew the exact length of its submitted bits (or perhaps sent padded zeros), I would have provided an in_StringLength parameter.

Thank you for all your comments!

Finally, with my limited knowledge of C, does declaring Result as static make sense?

It makes sense, but I'd argue that it's not the best choice. If you look at all of the problematic functions throughout the history of C, the ones that use static local storage are consistently on the list. In my experience, a better solution is to have the caller pass in a buffer of a suitable size:

char *IntToBinary(int in_Integer, char *out_Result, size_t in_MaxLen);
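
A rough sketch of what that caller-supplied-buffer version could look like (my own illustration of the prototype above; returning NULL when the buffer is too small is an assumption):

#include <stddef.h>

char *IntToBinary(int in_Integer, char *out_Result, size_t in_MaxLen)
{
    size_t pos = in_MaxLen;
    unsigned int value = (unsigned int)in_Integer;

    if (in_MaxLen == 0)
        return NULL;
    out_Result[--pos] = '\0';            /* build the string backwards */
    do
    {
        if (pos == 0)
            return NULL;                 /* buffer too small */
        out_Result[--pos] = (char)('0' + (value & 1));
        value >>= 1;
    } while (value > 0);
    return out_Result + pos;             /* points at the first digit */
}

The returned pointer is into the caller's buffer, so there is no static state to worry about.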

I would think that a register hint for t could be used

First I'd challenge you to find a modern compiler that doesn't completely ignore the register hint. ;)

char *IntToBinary(int in_Integer, char *out_Result, size_t in_MaxLen);

Passing MaxLen in as a parameter instead of using a global constant... very, very nice! I didn't think about that.

Thanks for sharing your interesting programming style. There is, however, one optimization technique which beats yours any time: a lookup table (consider the input range 0..255; it's in the requirements).

Your answers and those of our dear Python master Vegaseats are giving a little too much ready-made answer to the OP. We can hope he has put much effort in from his side (for you and others this is an under-one-hour thing). I, for one, can be thankful to the OP for a lesson in testing: random inputs are not enough to test a function; you must find and include the corner cases as well. I have also included plenty of scepticism about the quality of the input in my own value-to-any-base conversion (any number, any base, as long as you give enough symbols).

I have a strong bias towards an approach to optimization which I call "fast enough": if user input is received and a response is given, then simple, understandable code is preferred, be it slightly non-optimal. That said, I cannot always avoid the temptation to try to find the neatest coding for the job.

Also, I would use a technique of mine called "think while the teacher is talking" (I once prepared an answer to a question in biology during the teacher's presentation and gave the answer before he finished the question):

1) take one char of input
2) update the current value
3) if there are no more characters, give the ready answer; otherwise goto point 1

Oh my God, I wrote goto, now my soul will go to hell... argggh!
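
For what it is worth, a minimal C sketch of those three steps (reading with getchar and accumulating as you go; it assumes the user types only 0s and 1s followed by Enter):

#include <stdio.h>

int main(void)
{
    int c, value = 0;

    printf("Enter a binary number: ");
    while ((c = getchar()) == '0' || c == '1')   /* 1) take one char of input */
        value = value * 2 + (c - '0');           /* 2) update the current value */
    printf("Decimal value: %d\n", value);        /* 3) no more digits: answer is ready */
    return 0;
}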

P.S. Narue, I have read about the uselessness of register somewhere; I do not know if it was in your Holy Scripts or elsewhere in the Apocrypha of the Internet.

There is, however, one optimization technique which beats yours any time: a lookup table (consider the input range 0..255; it's in the requirements).

I did not make the functions exclusively for the original poster. It was made to be useful for many applications.

As for the lookup table method, if you mean your method of multiplying indexes by powers of 2, the multiplication performed in that process alone takes many more clock cycles than the left shifting my method employs.

Your answers and those of our dear Python master Vegaseats are giving a little too much ready-made answer to the OP. We can hope he has put much effort in from his side (for you and others this is an under-one-hour thing). I, for one, can be thankful to the OP for a lesson in testing: random inputs are not enough to test a function; you must find and include the corner cases as well. I have also included plenty of scepticism about the quality of the input in my own value-to-any-base conversion (any number, any base, as long as you give enough symbols).

In the BinaryToInt function, it only reads '1' characters. If the user typed "hello world" and passed that to the function, the return value would be 0. Bad data in, bad data out. It was designed to accept bad data without crashing.

I have a strong bias towards an approach to optimization which I call "fast enough": if user input is received and a response is given, then simple, understandable code is preferred, be it slightly non-optimal. That said, I cannot always avoid the temptation to try to find the neatest coding for the job.

As a professional engineer and as a software business owner who has worked with professional software written in many languages by companies as big as Microsoft, I can say with certainty that a programmer who is taught that code does not need to be optimized as long as it is clearly written tends to write cryptic "well written" code that falls under the "Uniformly Slow Code" category. There was one program I reviewed two years ago for a company that had written about 600 lines of code for each front end they designed (about 10 in total, so approximately 6000 lines of code in total), and after spending DAYS trying to interpret their logic, I converted the entire program into a centralized object-oriented system consisting of literally 158 lines of code TOTAL. The complexity they introduced in their attempt to "simplify" the logic was absurd. The customer had originally paid about $100,000 for this slow "well written" garbage. I laughed all day.

But now I've seen this too many times to find it funny anymore. I am not saying that the philosophy is wrong, but emphasizing this point to a beginner does not always translate correctly in practice.

Also, I would use a technique of mine called "think while the teacher is talking" (I once prepared an answer to a question in biology during the teacher's presentation and gave the answer before he finished the question):

1) take one char of input
2) update the current value
3) if there are no more characters, give the ready answer; otherwise goto point 1

This is correct for user input or event management, but should not be used in every situation, such as with the mathematical functions. To me it is simply good practice that "utility functions" like a binary-string-to-integer converter can be used anywhere without fear of having to come back and "optimize later" because someone wants to use them in a more time-critical part of their application.

The complexity they introduced in their attempt to "simplify" the logic was absurd.

Sounds like a straw man to me. Just because they failed miserably at simplifying the logic doesn't mean simplifying the logic is a bad idea. I too have seen such attempts fail, and it's nearly always due to code cowboys who spent more time hacking than designing.

emphasizing this point to a beginner does not always translate correctly in practice

It's quite a bit harder to design clear code with acceptable performance than unclear code with excellent performance. I'd honestly prefer to see beginners fail at the former than not even try.

Sorry for giving Python code, but I mean this in Python (value to binary string, valid range 0..255):

>>> valtobin = ["{0:08b}".format(val)
		       for val in range(256)]
>>> int(valtobin[111],2)
111
>>> for test in range(256):
	assert int(valtobin[test], 2) == test
>>>

This is giving out quite a lot of the solution, but it is given in the 'Python as pseudocode' spirit.
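
In C the same lookup-table idea might look roughly like this (the table and BuildTable names are mine, not from any post in the thread):

#include <stdio.h>

static char table[256][9];   /* one 8-character string per value, plus '\0' */

static void BuildTable(void)
{
    int value, bit;
    for (value = 0; value < 256; value++)
        for (bit = 0; bit < 8; bit++)
            table[value][7 - bit] = (char)('0' + ((value >> bit) & 1));
    /* the array is static, so the ninth byte of each entry is already '\0' */
}

int main(void)
{
    BuildTable();
    printf("111 -> %s\n", table[111]);   /* prints 111 -> 01101111 */
    return 0;
}

Once the table is built, converting any value in 0..255 to its binary string is a single array index.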

Thanks for sharing your rich experience with us.

I have some experience, but not so much authority, so I want to quote Dijkstra, with whom you probably agree:

Elegance is not a dispensable luxury but a quality that decides between success and failure.
(Source: EWD1284)

If you want more effective programmers, you will discover that they should not waste their time debugging, they should not introduce the bugs to start with.
(1972 Turing Award Lecture)

I would like to see
* basic students gain the capability of understanding well-written code and be able to update it for changing circumstances, and
* better ones write code clear enough for everybody to understand and fulfilling the requirements (leaving enough flexibility for changing circumstances).

And if I seem dense, he gave me an excuse:

It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.

(I did start with Pascal later in university, though, after TI58C keycodes, BASIC and Z80 assembly (numbers, no assembler available) )

Sounds like a straw man to me. Just because they failed miserably at simplifying the logic doesn't mean simplifying the logic is a bad idea. I too have seen such attempts fail, and it's nearly always due to code cowboys who spent more time hacking than designing.

It's quite a bit harder to design clear code with acceptable performance than unclear code with excellent performance. I'd honestly prefer to see beginners fail at the former than not even try.

This is essentially a debate on learning styles rather than design styles. You are right on both of those points, and it's basically a teaching preference. I started learning programming by learning logic circuitry and digital electronics engineering, then 80286 assembly language, and worked my way into high-level languages like BASIC and C. This perspective helped me design simplified, to-the-point code before it spirals into undue complexity. Learning object-oriented programming concepts after all this taught me the value of readability in code.

I don't expect everyone learning programming to be an engineer first, but students that start their programming experience from the top down are not given enough emphasis on design efficiency, something that is required at lower levels of electronics, firmware, and system programming.

I have seen some horrible firmware code recently, obviously made by someone who only writes at high levels, and the code was trying very hard to avoid using recursive functions and ended up taking up nearly the entire ROM space given to it for such a simple function.

They are the most efficient I could possibly make them. Any more efficient and i'd have to write it in assembly.

To begin with, both functions make library calls, which are totally uncalled for. A standard technique here is

while(*in_BinaryString)
        Result = (Result << 1) | (*in_BinaryString++ == '1');

The IntToBinary (which BTW fails for a negative argument), instead of calling memmove, should just return Result + t. Since Result is static, its last byte is initialized to 0 and shall never be touched:

char * IntToBinary(unsigned int in_Integer)
{
    static char Result[MaxLen];
    static char digits[] = { '0', '1' };
    char * dst;
    for(dst = Result + MaxLen - 1; in_Integer > 0; in_Integer >>= 1)
        *--dst = digits[in_Integer & 1];
    return dst;
}

I do prefer a digits array to a ternary for a reason of branch elimination. Assuming an ASCII code set, one may even do

*--dst = '0' + (in_Integer & 1);

Thank you for pointing those errors out, Tonyjv; you are completely correct, and I had overlooked them this whole time.

You might be interested to know that your function also gives results similar to my originally posted code:

print(str2int('98789h', 16))

result 9992353
and

>>> str2int('98789', 8)
41481

(after correcting the ranges only)

To begin with, both functions make library calls, which are totally uncalled for. A standard technique here is

while(*in_BinaryString)
        Result = (Result << 1) | (*in_BinaryString++ == '1');

The IntToBinary (which BTW fails for a negative argument), instead of calling memmove, should just return Result + t. Since Result is static, its last byte is initialized to 0 and shall never be touched:

char * IntToBinary(unsigned int in_Integer)
{
    static char Result[MaxLen];
    static char digits[] = { '0', '1' };
    char * dst;
    for(dst = Result + MaxLen - 1; in_Integer > 0; in_Integer >>= 1)
        *--dst = digits[in_Integer & 1];
    return dst;
}

I do prefer a digits array to a ternary for a reason of branch elimination. Assuming an ASCII code set, one may even do

*--dst = '0' + (in_Integer & 1);

Aha! Delicious suggestions! I especially like returning the end segment of the string rather than moving it to the beginning.

Sorry for giving Python code, but I mean this in Python (value to binary string, valid range 0..255):

>>> valtobin = ["{0:08b}".format(val)
		       for val in range(256)]
>>> int(valtobin[111],2)
111
>>> for test in range(256):
	assert int(valtobin[test], 2) == test
>>>

Posted snippet to Python forum: Lookup between binary string and binary value byte

While I am ordinarily very much in the 'high level first' camp, I have to admit that today's Daily WTF gives a strong argument in favor of making sure new programmers get a strong grasp of low-level techniques.

Then again, the Daily WTF taken as a whole is a strong argument in favor of shotgun mouthwash (especially the notorious 'Tossing Your Cookies' article), so I suppose that taking it too seriously will only be excessively depressing.

Code to convert decimal to binary:


#include <stdio.h>
#include <conio.h>

void main()
{
    int l, i = 0, a, b[9];
    printf("\nEnter a decimal number:");
    scanf("%d", &a);
    while (a > 0)
    {
        b[i] = a % 2;   /* store the bits, least significant first */
        a = a / 2;
        i++;
    }
    l = i;
    printf("\nBinary number is:");
    for (i = l - 1; i >= 0; i--)
        printf("%d", b[i]);   /* print the bits back in the right order */
    getch();
}

while (a > 0)
{
    b[i] = a % 2;
    a = a / 2;
    i++;
}

Division and modulo take much more processing time than a simple shift operation, and this is especially true when dividing/multiplying by 2 or 2^n.

But the advantage with your code is that it makes more sense from a decimal mathematics perspective, making it easier for typical programmers to understand it.
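
For comparison, here is a sketch of the same loop written with shift and mask instead of division and modulo (the behaviour is the same for non-negative a):

while (a > 0)
{
    b[i] = a & 1;   /* same result as a % 2 for non-negative a */
    a = a >> 1;     /* same result as a / 2 */
    i++;
}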
