I want to know how software is converted into electrical pulses, or 0s and 1s, inside a computer. How does the computer do it? I know that 0 and 1 in the binary system represent the off and on states respectively, and that the computer stores data in its memory in the same way. I also want to know how the computer (more specifically, the video controller) converts the binary values, i.e., 0 and 1, or low voltage and high voltage, into a form that can be displayed on the monitor, or whether that is the monitor's job.
And lastly, what do we mean by instructions to the processor? I mean the opcodes. If the processor only manipulates the on and off states of its circuits, then how do the opcodes trigger the processor to do a specific task?
So, please help me and increase my knowledge.


How is software converted into electrical pulses, or 0s and 1s, in a computer? How does the computer do it?

The computer itself doesn't do it; the source is converted from human-readable text into computer-usable instructions by a compiler or interpreter.

As for your other questions, someone else will have to answer because I don't know. Exactly how all of that is done is most likely covered by trade secrets and patents held by Intel and other companies like that.

When you have a compiled program, i.e., software, it is just a chunk of memory containing machine code (instructions). So, as AD said, the thing that converts software (source code) into binary form is the compiler. Once you have a binary executable program, the way it gets executed on the processor is actually very simple (it has to be; it's just "dumb" electric circuits, after all).
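
To make that concrete, here is a tiny C function together with the kind of machine code a compiler typically produces for it. The exact bytes vary with the compiler, flags, and architecture, so treat the listed output as a representative sample, not a guarantee:

```c
/* add.c -- compile with: gcc -O2 -c add.c, then inspect with: objdump -d add.o */
int add(int a, int b) {
    return a + b;
}

/* On x86-64 with gcc -O2, the generated machine code is typically:
 *   8d 04 37    lea eax, [rdi+rsi]   ; compute a + b into eax
 *   c3          ret                  ; return to the caller
 * Those hex bytes ARE the software: just numbers sitting in memory,
 * which the hardware reads as high/low voltage patterns.
 */
```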

Each instruction is a sequence of 0s and 1s (bits) that is usually 32 bits long (more or less, depending on the architecture). Imagine that sequence of bits as a sequence of valve positions for a series of pipe bifurcations carrying water. If 0 means left and 1 means right, then an instruction like 11001 would mean: set valve 1 = right, valve 2 = right, valve 3 = left, valve 4 = left, and valve 5 = right. That unique sequence of valve positions carries the water along a unique route through the pipes, to a unique destination. This is basically the way processors execute instructions, except that the "water" is made of electrons and the "valves" are transistors. That is probably the simplest way to picture it.
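
Here is a minimal C sketch of that analogy: a made-up 5-bit "instruction" whose bits each set one valve. The encoding is purely illustrative:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical 5-valve instruction: 11001 in binary (0x19).
       Each bit sets one valve: 0 = left, 1 = right. */
    unsigned instruction = 0x19;

    for (int valve = 0; valve < 5; ++valve) {
        /* Bit 4 is valve 1, bit 3 is valve 2, ..., bit 0 is valve 5. */
        unsigned bit = (instruction >> (4 - valve)) & 1u;
        printf("valve %d = %s\n", valve + 1, bit ? "right" : "left");
    }
    return 0;
}
```

A real decoder does the same thing in hardware: each bit of the instruction word steers a set of transistors instead of printing a line.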

Complete instructions to the processor are usually composed of an operation (e.g., "add", "subtract", "increment", etc.) and one or two operands that sit in registers, which are small storage units directly at the "entrance" of the processor. So, if you execute an instruction like "add R1, R2" (which means add R2 to R1 and store the result in R1), the "add" part positions all the transistors (like valves) into the configuration that directs all the bits of R1 and R2 into the addition circuit (or module), and the result comes back to overwrite the bits in R1.
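
A toy software model of that step might look like this in C. The opcode values and the two-register format are invented for illustration; real instruction sets are far richer:

```c
#include <stdio.h>

/* Made-up opcodes: in a real CPU these bit patterns select the circuit. */
enum { OP_ADD = 0, OP_SUB = 1 };

int main(void) {
    unsigned r1 = 7, r2 = 5;    /* the registers */
    int opcode = OP_ADD;        /* the instruction: "add R1, R2" */

    switch (opcode) {           /* the decoder: one route per opcode */
    case OP_ADD: r1 = r1 + r2; break;   /* result overwrites R1 */
    case OP_SUB: r1 = r1 - r2; break;
    }

    printf("R1 = %u\n", r1);    /* prints 12 */
    return 0;
}
```

The switch statement plays the role of the valve settings: the opcode chooses which circuit (here, which arithmetic expression) the register bits flow through.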

Of course, modern computers have a lot more going on than this, but at the most basic level that's how it's done. The reason we call them "integrated circuits" is that they are nothing more than very large circuits, with billions of transistors integrated into one piece of silicon. But at the end of the day, they are still just hard-wired "dumb" circuits: one big, elaborate arrangement of valves and pipes.

There is a lot of other stuff around this, like the instruction pipeline, cache memory, various system buses, pre-fetching, branch prediction, and then, of course, everything outside the CPU (motherboard, BIOS, peripherals, etc.). But it's all made of the same basic building blocks I just explained, built up to a higher and higher degree of complexity.

Also I want to know how the computer (more specifically, the video controller) converts the binary values, i.e., 0 and 1, or low voltage and high voltage, into a form that can be displayed on the monitor, or whether that is the monitor's job.

Well, on the one hand, the graphics card converts "what you want to draw" into an image (typically 60 times per second). The image is just a sequence of pixels, and each pixel usually has 8 bits for each color channel (red, green, blue). That whole sequence of bits is blasted through the cable to your monitor (the exact protocol depends on the hardware). The monitor then displays those pixels by making a very rapid, electrically switched sweep through all the individual pixels of the screen, setting each one to the appropriate color. This happens within a fraction of a second (usually 1/60 of a second), and because your eyes perceive motion as smooth at roughly 24 images per second and above, you don't see it happen at all; you just see the complete image, as if it were drawn instantaneously.
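
The numbers involved are easy to work out. Here is a back-of-envelope calculation in C for a hypothetical 1920x1080 display at 60 frames per second with 3 bytes per pixel (the resolution and refresh rate are just common example values):

```c
#include <stdio.h>

int main(void) {
    long width = 1920, height = 1080;   /* pixels */
    long bytes_per_pixel = 3;           /* 8 bits each of R, G, B */
    long fps = 60;                      /* frames per second */

    long frame  = width * height * bytes_per_pixel;  /* one complete image */
    long stream = frame * fps;                       /* the continuous stream */

    printf("one frame:  %ld bytes (~%.1f MB)\n", frame, frame / 1e6);
    printf("per second: %ld bytes (~%.1f MB/s)\n", stream, stream / 1e6);
    return 0;
}
```

That works out to roughly 6.2 MB per frame and about 373 MB/s down the cable, which is why display links need so much bandwidth.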

So, to recap: it's the graphics card's job to produce a continuous stream of images, and it's the monitor's job to turn that stream into the picture you see.

And the way the graphics card generates images is just the same as the way the CPU executes instructions... in fact, the graphics card is just a special processor (a GPU) that is specialized in producing images and blasting them through the display connection.
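
As a final sketch, here is what "producing an image" boils down to: filling a buffer of pixel bytes. This tiny C loop paints a color gradient into a made-up 4x3 framebuffer; a GPU does essentially the same work for millions of pixels, in parallel, every frame:

```c
#include <stdio.h>
#include <stdint.h>

#define W 4   /* toy framebuffer width, for illustration only */
#define H 3   /* toy framebuffer height */

int main(void) {
    uint8_t fb[H][W][3];    /* [row][column][R, G, B] */

    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            fb[y][x][0] = (uint8_t)(x * 255 / (W - 1)); /* red ramps left to right */
            fb[y][x][1] = (uint8_t)(y * 255 / (H - 1)); /* green ramps top to bottom */
            fb[y][x][2] = 0;                            /* no blue */
        }

    /* This buffer of bytes is exactly what gets sent down the display cable. */
    printf("top-right pixel: R=%d G=%d B=%d\n",
           fb[0][W - 1][0], fb[0][W - 1][1], fb[0][W - 1][2]);
    return 0;
}
```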
