2^64 / (1024^2) = 17,592,186,044,416 MB = 16 exabytes
2^32 / (1024^2) = 4,096 MB = 4 gigabytes
2^16 / (1024^2) = 0.0625 MB = 64 kilobytes
2^8 / (1024^2) = 0.000244140625 MB = 0.25 kilobytes = 256 bytes
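(For anyone who wants to double-check those figures, here is a minimal C++ sketch of my own, not something from the original post, that reproduces the arithmetic for 8-, 16-, 32- and 64-bit address widths.)

#include <cmath>
#include <cstdio>

int main() {
    const int widths[] = {8, 16, 32, 64};
    for (int w : widths) {
        // Number of distinct addresses = 2^w. For w == 64 this does not fit
        // in a 64-bit unsigned integer, so use long double for the printout
        // (powers of two are represented exactly anyway).
        long double bytes = std::pow(2.0L, w);
        std::printf("2^%-2d = %.0Lf bytes = %.12Lg MB\n",
                    w, bytes, bytes / (1024.0L * 1024.0L));
    }
}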

How the heck did you guys manage to do anything on "computers" that had a maximum of 256 bytes?

(2^64) / 2 = 9,223,372,036,854,775,808 highest decimal = around the year 292,277,026,596
(2^32) / 2 = 2,147,483,648 highest decimal = January 2038
(2^16) / 2 = 32,768 highest decimal = 1 Jan 1970, 09:06:08
(2^8) / 2 = 128 highest decimal = 1 Jan 1970, 00:02:08

Assuming midnight is the epoch, 128 seconds was the limit on systems of "those" days. Did you use a computer timer back then, or did the system work another way?
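(As an aside, here is a minimal C++ sketch of my own, not part of the original question, that computes where a signed N-bit seconds-since-1970 counter tops out, using the standard library time functions for the cases they can actually represent.)

#include <cstdio>
#include <ctime>

int main() {
    const int bits[] = {8, 16, 32, 64};
    for (int n : bits) {
        // Largest positive value of an n-bit two's-complement integer: 2^(n-1) - 1.
        unsigned long long max_positive = (1ULL << (n - 1)) - 1ULL;

        std::printf("%2d-bit signed: max = %llu seconds", n, max_positive);

        if (n <= 32) {
            // Small enough for the standard library to turn into a calendar date.
            std::time_t t = static_cast<std::time_t>(max_positive);
            char buf[64];
            std::strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", std::gmtime(&t));
            std::printf("  ->  %s", buf);
        } else {
            // ~2^63 seconds is roughly 292 billion years, far beyond what
            // gmtime() can handle, so just report it in (approximate) years.
            std::printf("  ->  about %.0f years after 1970",
                        max_positive / (365.2425 * 24 * 3600));
        }
        std::printf("\n");
    }
}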


All 9 Replies

Stupid me, I just realized that the first question was pointless, since it's 256 bytes of RAM, not 256 bytes of actual memory. The second question still stands.

I'm not sure I understand your question (#2). I have been programming Linux/Unix systems since 1982, and led the Y2K analysis/remediation effort for a major software company in 1998 (10 million lines of Unix and Windows code). Also, I have written C++ date/time classes that can handle date/time values up to the heat death of the universe, to the millisecond.

So, please be more explicit (and clear) as to what your concerns are.

P.S. The "heat death" of the universe is hypothetical. Let's just say that my date/time classes handle this up to a REALLY big number. :-)

If you want that code, it is now owned by Applied Materials - the 800lb gorilla of the semiconductor manufacturing equipment industry.

Really short and simplified.

How did you manage to store time on your computer when the greatest decimal is 128 (10000000 in binary), which is only 128 seconds? That would mean your computer had to reset roughly 675 times a day (86,400 / 128).

First of all, in an unsigned byte, the maximum decimal value is 255 (0xFF). And second, the fact that the word size is only 8 bits (1 byte) does not mean that you cannot create numbers larger than that; it just means that if the numbers are larger than the word size, you have to handle them word by word. For example, if you add two multi-word numbers, you just add the least-significant words from each number, keep the carry for the next word addition, and so on.

Think of it this way. As a human in elementary school, you could only represent a number between 0 and 9 by writing down a single digit (that is your native "word size"). But you could still represent very large numbers and do many complicated operations with them, right? Well, it was the same for those computers. And it's still the same today for very large numbers that exceed the 32-bit / 64-bit word sizes.
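To make that concrete, here is a minimal sketch (my own illustration, not code from the post above) of adding two 16-bit numbers on a machine that can only add 8 bits at a time: add the low bytes, keep the carry, then add the high bytes plus the carry. The example values are arbitrary.

#include <cstdint>
#include <cstdio>

int main() {
    std::uint16_t a = 0x01FF;   // 511
    std::uint16_t b = 0x0101;   // 257

    // Split each number into its low and high bytes.
    std::uint8_t a_lo = a & 0xFF, a_hi = a >> 8;
    std::uint8_t b_lo = b & 0xFF, b_hi = b >> 8;

    // Add the low bytes first; anything above 0xFF becomes the carry.
    unsigned sum_lo = a_lo + b_lo;
    std::uint8_t carry = sum_lo > 0xFF ? 1 : 0;

    // Add the high bytes plus the carry from the low bytes.
    unsigned sum_hi = a_hi + b_hi + carry;

    std::uint16_t result =
        static_cast<std::uint16_t>(((sum_hi & 0xFF) << 8) | (sum_lo & 0xFF));
    std::printf("%u + %u = %u (expected %u)\n",
                unsigned(a), unsigned(b), unsigned(result), unsigned(a + b));
}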

@mike_2000_17 I still don't get it. If I were 9, you were 9, and rubberman was 9, we are all small numbers; together we can represent only 999. But if you have 10 humans, you can get a maximum of 9,999,999,999 (~10 billion).

There is a huge difference between 999 and 9,999,999,999. And the whole situation is still scaled down, since in general we're talking about the difference between a few quintillion and only 128.

If you, rubberman, and I were limited to 128, at some point we would restart, leaving no sign behind that we had already counted to 128. There is no 4th "number". Now, I may be wrong, but that's what I am here for. You have to correct me, so I won't pass these wrong teachings on to other people who think of taking me as an example.

Take me for an idiot. Your logic just overflows my brain. Could you give an explanation at beginner level?

The word size of a computer architecture is just the number of bits that the computer can digest at one time (for each instruction). So, for an 8-bit processor, it just means that it can only perform operations on 8-bit numbers at a time. This does not mean that it cannot digest bigger numbers; it just means that if it needs to digest a bigger number, it must break that number up into 8-bit chunks and digest them one at a time.

If we take the analogy of eating food, then the limit on the amount of food you can put in your mouth at one time does not limit the total amount of food you can consume; it just means that it will take longer to eat a big plate of food if your bites are smaller.

The standard Unix representation of time (date) has always (AFAIK) been a 32-bit signed integer (and lately, a 64-bit integer) holding the number of seconds since the epoch (1970). On 8-bit platforms, this means that in order to manipulate a date (e.g., adding a year to it), the computer has to add two 32-bit numbers by individually adding their 8-bit chunks, i.e., four additions plus the carries (seven additions in total). But the point is, it can still deal with numbers larger than 8 bits; it just needs more work to do so.

If we take the analogy of doing additions like we did in elementary school, then I could say the following. Let's say I want to add the numbers 1762 and 4589, but I can only deal with one digit (0-9) at a time; then I would have to do this:

1762 + 4589:
ones:      add 2 + 9 = 11                          (digit 1, carry 1)
tens:      add 1 (carry) + 8 = 9, then 9 + 6 = 15  (digit 5, carry 1)
hundreds:  add 1 (carry) + 5 = 6, then 6 + 7 = 13  (digit 3, carry 1)
thousands: add 1 (carry) + 4 = 5, then 5 + 1 = 6   (digit 6)

result: 1762 + 4589 = 6351

That's it. This is basically how an 8-bit computer deals with a 32-bit number, except that instead of the digits 0-9, it works with 0-255 "digits" (its 8-bit words).
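Here is that same long addition written as a minimal C++ sketch (my own, not Mike's code): two 32-bit Unix timestamps stored as four bytes each, added one byte at a time with a carry, exactly like the decimal example above. The timestamp values are just illustrative.

#include <cstdint>
#include <cstdio>

// Add two 32-bit numbers stored little-endian as four bytes each,
// one byte at a time, propagating the carry like in the decimal example.
void add32_bytewise(const std::uint8_t a[4], const std::uint8_t b[4], std::uint8_t out[4]) {
    unsigned carry = 0;
    for (int i = 0; i < 4; ++i) {           // least-significant byte first
        unsigned sum = a[i] + b[i] + carry; // 8-bit add with carry in
        out[i] = sum & 0xFF;                // keep the low 8 bits as this "digit"
        carry = sum >> 8;                   // carry out for the next byte
    }
}

int main() {
    // 1,000,000,000 seconds after the epoch (September 2001) plus one
    // 365-day year of seconds, both broken into little-endian byte arrays.
    std::uint32_t x = 1000000000u, y = 31536000u;
    std::uint8_t a[4], b[4], r[4];
    for (int i = 0; i < 4; ++i) {
        a[i] = (x >> (8 * i)) & 0xFF;
        b[i] = (y >> (8 * i)) & 0xFF;
    }

    add32_bytewise(a, b, r);

    std::uint32_t result = 0;
    for (int i = 0; i < 4; ++i)
        result |= std::uint32_t(r[i]) << (8 * i);
    std::printf("%u + %u = %u (expected %u)\n", x, y, result, x + y);
}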

commented: Mike saves world again :D. +2

Oh, now it's clearer. Thank you for the explanation.

Mike does do that (explain stuff) well, doesn't he? :-)

Indeed. You too. But unfortunately you didn't understand the question.
