Hi everyone,

I was wondering if anyone could help -

At the minute I have a program that records how many bytes are received over a set interval (for example, 500 milliseconds). How would you calculate how many megabits have been received per second if the sampling interval will always be different? So far I have it multiplying the number of bytes by 8 to get bits, but how would you get a good estimate of how many bits per second?

Many thanks in advance :)

## All 6 Replies

It sounds like you've built the hard part already, so how can the calculation beat you down?

Did you give it a try? Can you give an example? Then we'll show you the spots we think you can improve...

Basic questions to look at for this type of calculation:

1. Number of milliseconds in a second
2. Number of bytes in a megabyte
3. Build a calculation to convert your bytes -> megabytes and your milliseconds -> seconds, and run the same calculations vs your input... (should all be multiplication and division)
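The steps above could be sketched like this in Python, for instance (assuming the decimal definition of a megabyte, 1,000,000 bytes):

```python
# Conversion helpers for the steps above.
MS_PER_SECOND = 1000            # step 1: milliseconds in a second
BYTES_PER_MEGABYTE = 1_000_000  # step 2: bytes in a (decimal) megabyte

def to_seconds(milliseconds):
    """Convert a millisecond interval to seconds."""
    return milliseconds / MS_PER_SECOND

def to_megabytes(num_bytes):
    """Convert a byte count to megabytes."""
    return num_bytes / BYTES_PER_MEGABYTE

print(to_seconds(500))      # 0.5
print(to_megabytes(4000))   # 0.004
```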

Yeah, the part that has me beat is how to calculate per second when the sampling interval is less than or greater than a second :(

Say, for example, you want to check how many bytes are received within 500 milliseconds; I then need to work out the average megabits per second from that sample.

Alright... so if I say there are 1000 milliseconds in a second... then would you agree you can fit 500 milliseconds into 1 second twice? So it would look like the following:

x = 500 (ms)
y = 1000 (ms in a second)
z = y/x
z = 2...

So if you have, for example, 4000 bytes each 500 ms... then you would get 8000 bytes in a second... do you agree?
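As a quick sanity check, that scaling can be written out in a few lines (numbers taken from the example above):

```python
# Scale a 500 ms sample up to a full second, as described above.
interval_ms = 500
bytes_in_interval = 4000

scale = 1000 / interval_ms    # 500 ms fits into 1000 ms twice -> 2.0
bytes_per_second = bytes_in_interval * scale
print(bytes_per_second)       # 8000.0
```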

Yeah I was thinking something like that but I didn't think it would be that easy lol :( Think I'll need to go over things again and see how it goes but thanks for the help anyways! :)

Now just apply the same logic to the bytes-to-megabytes conversion and you should be set :D Best of luck to ya mate :)

For these kinds of problems, keep a toolbox of ways of thinking about the numbers in your mind. The trick that applies here is remembering that dividing by a smaller number gives a larger result: 5/1 = 5, 5/0.5 = 10, 5/0.25 = 20, etc...

So if you have the number of bytes transferred in one variable, and the amount of time that has elapsed in another, then transferred/elapsed = the number of bytes transferred per unit of elapsed time. If transferred is in bytes, and elapsed is in seconds, then the result would be bytes per second. Since there are 8 bits in a byte, transferred * 8 / elapsed would give bits per second. Since we want MEGA bits per second, we divide the result by 1000000. From there it's basic algebra to perform further transformations.
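Put together as a small floating-point sketch (the function name is mine, just for illustration):

```python
def megabits_per_second(transferred, elapsed):
    """Convert bytes seen over `elapsed` seconds to megabits per second.

    transferred * 8   -> bits
    ... / elapsed     -> bits per second
    ... / 1_000_000   -> megabits per second
    """
    return transferred * 8 / elapsed / 1_000_000

# 4000 bytes every 0.5 seconds works out to 0.064 Mbit/s.
print(megabits_per_second(4000, 0.5))   # 0.064
```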

There's a bit more to it if you are not using floating point. Remember that if you want to divide by 1000000, you wouldn't want to do the divide on the left-hand side first - it might make the number so small that, as an integer, it would be zero. So in that case, you can do the opposite thing on the other side of the division operation. If you did (bytes * 8) / (elapsed * 1000000), then your answer would be in megabits per second, and it would work if they were integers. The other option would be to do (bytes * 8 / 1000000) / (elapsed). If you did that, and bytes * 8 < 1000000, then it would effectively become 0 / elapsed and your answer would be wrong. The mental trick I'm using here is moving the operation across the division, and doing the opposite, to make the numbers move away from zero instead of toward it.

You need to be careful when using tricks like this to do integer operations. Remember that integers have limited range. If you multiply elapsed by 1000000, then you are reducing the maximum allowable value (before it screws up) of elapsed by a factor of 1000000. Say elapsed were in milliseconds, then you would divide your multiplier by 1000 to get everything balancing out.
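Here's a sketch of both orderings, using Python's integer (floor) division `//` to stand in for integer arithmetic in whatever language you're using; the sample numbers are made up for illustration:

```python
bytes_transferred = 100_000   # bytes counted in the sample
elapsed_ms = 500              # sample length in ms (true rate: 1.6 Mbit/s)

# Safe ordering: move the divide to the other side as a multiply.
# Since elapsed is in milliseconds, the megabit divisor shrinks from
# 1_000_000 to 1_000_000 // 1000 = 1000, as described above.
good = (bytes_transferred * 8) // (elapsed_ms * 1000)
print(good)   # 1 -- truncated toward zero, but in the right ballpark

# Risky ordering: dividing by 1_000_000 first truncates 800_000 to 0,
# so the answer is wrong no matter what elapsed is.
bad = (bytes_transferred * 8 // 1_000_000) * 1000 // elapsed_ms
print(bad)    # 0
```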

Make sense? Hopefully I haven't made you even more confused. I know I'm more confused than when I started this post! :)
