This is from my C mid-term study guide so you geniuses should be able to crack it easily:

Write a function that will return a random double number. The function takes two parameters (both integers). The first parameter is the maximum whole-number value that number can be (the minimum is 0.) The second parameter is how many places of random precision the number must have.

I have this:

double function(int randSize, int prec)
{
      double num;
 
      num = rand() % randSize;

...

I know that will generate a random number with only zeroes after the decimal, and I have no idea how to give it any decimal precision. Help!

Look up C output format specifiers. The number will be stored to whatever degree of precision is allotted to type double, but the degree of precision can be dictated when displaying the value.
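For instance (my own illustration, not from the assignment): the stored double keeps full precision in memory, and only the printed form is rounded. The precision can even be passed at run time with the '*' form of the specifier.

```c
#include <stdio.h>

/* Prints d rounded to the given number of decimal places.
   The '*' in "%.*f" takes the precision from the argument list,
   so it can be decided at run time. Hypothetical helper name. */
void print_rounded(double d, int places)
{
    printf("%.*f\n", places, d);
}
```

For example, print_rounded(124.567832465484, 3) prints 124.568, even though the variable still holds the full value.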

I know, but since the function only returns a double, not prints it, why would it need the precision value?

The program doesn't need it. In fact, the program couldn't care less. It's going to use 8 bytes or whatever of memory to store the double with as much precision as it can. Your teacher wants to see if you can figure out how to manipulate the information on output. After all, looking at 124.567832465484205876 or whatever the program stores it as isn't nearly as useful under most circumstances as printing it as 124.568.

Completely understood. Perhaps that is what my teacher wants, but the wording suggests otherwise. The class is nearly done, and I don't think he would be asking something as simple as formatting a double.

Since the function doesn't print it, I think he wants us to generate a random double (don't know how to do that) and somehow manipulate it so that it only randomizes for a certain amount of places past the decimal. If that is even possible.

Ugh, does that even make sense? Maybe his wording is off? He always words things poorly.

Read the question again, he says nothing about printing it.

Thanks for the help, either way.

I don't think that's what the question's asking, Lerner. It doesn't say that.

shmay, if you need to generate a number in the range [0, 37) with five random decimal digits after the decimal point, start by generating an integer in the range [0, 3700000), and then divide it by 100000.0. You might need to be able to handle up to 14 or 15 decimal digits, so you might need to generate multiple random numbers and piece things together, depending on the range of your random number generator.
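A minimal sketch of that scale-and-divide idea (the function name and the use of rand() are my assumptions; the C standard only guarantees RAND_MAX >= 32767, so this version is only safe when max * 10^prec fits in your generator's range):

```c
#include <stdlib.h>

/* Random double in [0, max) with prec random decimal digits.
   Works by generating an integer in [0, max * 10^prec) and
   scaling it back down. Only valid while max * 10^prec stays
   within the range of rand(). */
double rand_double(int max, int prec)
{
    long scale = 1;
    int i;
    for (i = 0; i < prec; i++)
        scale *= 10;
    return (rand() % (max * scale)) / (double)scale;
}
```

Seed once with srand(time(NULL)) before calling it; rand_double(37, 5) then gives values like 21.73042.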

For example, to generate a number in [0, 444) with 10 random decimal places, I'd first generate a number in [0, 444000000), and divide that by 1000000.0. That gives the first six decimal places. Then I'd generate a number in [0, 10000) and multiply that by 1e-10, for decimal places 7 through 10, and add that to the first number.

A simpler algorithm is to generate the decimal expansion one digit at a time. For example, if you want a double in the range [0, n), generate an integer in the range [0, n), then generate ten decimal digits, multiply the nth decimal digit by 0.1 to the nth power, add those products up, and add the sum to the integer you generated.
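The digit-at-a-time idea might look like this in C (again, the name is my invention; each digit only needs rand() % 10, so the range of the generator is never a limitation here):

```c
#include <stdlib.h>

/* Random double in [0, n) built digit by digit: a whole-number
   part in [0, n), then prec random decimal digits, each scaled
   by the next power of 0.1. */
double rand_double_digits(int n, int prec)
{
    double result = rand() % n;   /* whole-number part */
    double place = 0.1;
    int i;
    for (i = 0; i < prec; i++) {
        result += (rand() % 10) * place;
        place *= 0.1;
    }
    return result;
}
```

This also keeps every intermediate value tiny, so it works the same whether prec is 2 or 15.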

And if you want precision measured in binary digits, replace the powers of 10 with powers of 2 in the above algorithms.
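For what it's worth, a sketch of that binary variant, swapping the powers of 10 for powers of 2 in the digit-at-a-time version (hypothetical name, my own illustration):

```c
#include <stdlib.h>

/* Random double in [0, n) with `bits` random binary digits after
   the point: each "digit" is a random bit, and the place value
   halves at every step instead of shrinking by a factor of 10. */
double rand_double_bits(int n, int bits)
{
    double result = rand() % n;   /* whole-number part */
    double place = 0.5;
    int i;
    for (i = 0; i < bits; i++) {
        result += (rand() % 2) * place;
        place *= 0.5;
    }
    return result;
}
```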

Comments
as expected from a Haskeller. ;-) ~s.o.s~