First of all, some background. My friend and I are having a bit of a competition over theoretically infinite number storage. We have agreed that the first person with a working arbitrary-precision integer library that can theoretically store an infinitely large number (i.e., given a big enough hard drive, it would fill it with the number) wins. My plan has been to use .int files with the following format:

1 bit : sign (1 = positive, 0 = negative)
31 bits : data length in bytes (0 = END)
data bytes : base-256 little-endian number data
up to FILENAME_MAX bytes : name of the next file (whose value is multiplied by 256^(2^31) and added to this number)
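For reference, the fixed part of the header can be packed into a single 32-bit word along these lines (just a sketch; the struct and function names are illustrative, not final):

```cpp
#include <cstdint>

// Hypothetical packing of the 32-bit .int header described above:
// top bit = sign (1 = positive), low 31 bits = data length
// (0 marks the end of the file chain). Names are illustrative only.
struct IntHeader {
    uint32_t raw;
    bool positive() const { return (raw >> 31) != 0; }
    uint32_t dataLength() const { return raw & 0x7FFFFFFFu; }
    bool isEnd() const { return dataLength() == 0; }
};

inline IntHeader makeHeader(bool positive, uint32_t length) {
    return { (positive ? 0x80000000u : 0u) | (length & 0x7FFFFFFFu) };
}
```

Reading a file then amounts to: read the 4 header bytes, check `isEnd()`, stream the data bytes, then read the next file name.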

I have it working, but only as a recursive function, and I realized that this violates the theoretical-infinity idea by relying on having enough stack space for all my recursive calls, which is bounded by RAM and/or the system. Unfortunately some of my recursive functions (like my multiply function) are rather complicated, with multiple points of recursion in multiple directions. I have been trying to convert them to iterative forms, to no avail. I am wondering: is there a general way to convert a recursive function into an iterative one? If not, does anybody have a more iterative solution to my infinite-storage problem?



There's a way to systematically convert a recursive algorithm to an iterative one, but unless you're dealing strictly with tail recursion, some form of stack data structure is still required. Do your contest requirements exclude the high probability of virtual memory on the machine being sufficient to contribute to the definition of "infinite"?
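For the tail-recursive case the conversion is mechanical: the parameters of the call become the loop variables. A toy example:

```cpp
#include <cstdint>

// Tail-recursive form: the recursive call is the very last thing the
// function does, so no state needs to survive across it.
uint64_t sumToRec(uint64_t n, uint64_t acc = 0) {
    if (n == 0) return acc;
    return sumToRec(n - 1, acc + n);   // tail call
}

// Mechanical conversion: parameters become loop variables and the
// tail call becomes the next iteration. No stack needed at all.
uint64_t sumToIter(uint64_t n) {
    uint64_t acc = 0;
    while (n != 0) { acc += n; --n; }
    return acc;
}
```

Anything that is *not* a tail call (i.e., work left to do after the recursive call returns) is what forces the explicit stack structure.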


The point is that we will determine exactly what the limitations on our programs are, then look up the 'optimal' situation and see who could store the largest number. So I use up a fairly large chunk of RAM first, then switch to a series of files. That means I am bounded only by (prepare for an epic equation now):

(number of 1-byte digits)^(number of digits in one file [31 bits' worth of bytes = 2^31 bytes])^(maximum number of files I can create = MIN[(number of valid filename characters)^FILENAME_MAX, free disk space / (2^31 bytes per file)])

using 55 valid file characters (A-Z, a-z, -, _, )


using 13.2 TB hard disk space (http://www.cray.com/Products/Storage/Sonexion/Specifications.aspx)

(256^(2^31))^(13 200 000 000 000/(2^31))

Since there is no reason why an 'optimal' setup couldn't have many terabytes of storage (by just having a supercomputer with tons of disks), I thought that with a recursive solution my bottleneck is likely to be a lack of RAM rather than a lack of hard disk space. The point of the competition is to think asymptotically, so I need some means of getting these files working. However, doing things like multiplication and integer division on these files is no easy task!

(Note that with 13.2 TB like on the Sonexion I would end up with 13.2x10^12/(2^31) ≈ 6146 recursive calls, which is almost certainly enough to overrun my stack, especially since each one will be a recursive call to a function that takes at the very least a file name [seeing as I can't very well have 6146 files open at a time, I have to open/close them!]).


It is pretty obvious that you cannot use actual recursive calls in the implementation. Any recursion can be turned into a loop of some kind, usually supported by a queue or stack data structure to hold what would have been the stack frame if it were implemented with recursive calls. Doing so effectively is usually a case-by-case thing. In fact, it is for the most part assumed that recursive algorithms are not implemented with recursive calls, unless explicitly cited as being reasonable to do so (e.g., intro-sort). Presenting algorithms with recursive calls makes them easier to understand, but you don't actually implement them as such unless you have proof that it is safe (and desirable, i.e., avoiding free-store allocations for performance reasons, as in intro-sort implementations).

Generally speaking, recursions usually fall under a few simple categories, which can all be seen as tree traversals. The most trivial case is the traversal of a unary tree (i.e., a list), which corresponds to implementations that naturally lend themselves to tail-recursions, and can be trivially turned into a loop. Then, there is the back-and-forth traversal of a unary tree, which is more rare and corresponds to cases where a tail-recursion is impossible (because you need to do things on the way back up the call-stack), and this can generally be turned into two loops (forward and back). Then, you have the classic tree traversals like breadth-first and depth-first. These are also easy to turn into a loop: the breadth-first traversal turns into a loop with a queue (like std::queue) to keep track of things left to do; and, the depth-first traversal turns into a loop with a stack (like std::stack) instead. What you store in the queue or stack is everything that changes from one recursive call to another. There are, of course, more special cases with a bit more complicated traversals, but it's usually similar to this. The most difficult part is identifying what kind of traversal your recursion corresponds to.
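To make the depth-first case concrete, here is a toy recursive tree sum next to its loop-with-std::stack equivalent. What you push is exactly what would have differed between recursive calls (here, just the node pointer):

```cpp
#include <stack>

struct Node { int value; Node* left; Node* right; };

// Recursive depth-first sum: easy to read, but every call costs a
// stack frame, which is exactly the problem being discussed.
int sumRec(Node* n) {
    if (!n) return 0;
    return n->value + sumRec(n->left) + sumRec(n->right);
}

// Same traversal with an explicit stack: the nodes still to visit
// play the role of the pending call frames. Heap-allocated, so the
// depth is bounded by memory, not by the call stack.
int sumIter(Node* root) {
    int total = 0;
    std::stack<Node*> todo;
    todo.push(root);
    while (!todo.empty()) {
        Node* n = todo.top(); todo.pop();
        if (!n) continue;
        total += n->value;
        todo.push(n->left);
        todo.push(n->right);
    }
    return total;
}
```

Swapping std::stack for std::queue in the second version turns the depth-first traversal into a breadth-first one, which is the other classic case mentioned above.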

This is a good exercise, because being able to turn recursions into loops is an essential skill: a great many algorithms are expressed as recursions, and there are very few cases where it is actually acceptable to use recursive calls in a real implementation.

As far as all that math goes... I don't have much to say; I don't really understand the scheme you are using.


That math is just my argument for the limitations on my program. It represents the largest number I could theoretically represent given my desired solution.

From what I understand of your reply, it seems as though my best chance at a solution is probably to just re-write my recursive functions to be iterative one way or another. I rarely use more than one loop per function unless I really have to, but I suppose in this case it will probably be necessary.

I'll mark this as solved if/when I succeed in my rewrites.


Well, there are "infinite" precision math libraries (I think Boost has such) that you could look at for ideas... :-)


@rubberman I have looked at them, and none of them have allocation for transcomputable numbers. This is obviously because there is no reason to use them. However for this competition transcomputable numbers are fair game. All we care about is the theoretical computability. So even if it would take a computer the size of the earth and more than the lifespan of the earth to calculate a number, our programs shall still do it!

Anyways, by modifying how my files work to a more iterative style (instead of having files of files, I just have a chain of files, similar to a linked list) I have been able to rewrite my recursive functions iteratively. So all is well.
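In case anyone finds this later, the traversal now looks roughly like this (an in-memory map stands in for the disk here, and the struct and field names are just illustrative):

```cpp
#include <map>
#include <string>

// Each record carries the name of its successor, like a linked list
// on disk. An empty next-name marks the end of the chain.
struct Chunk { std::string payload; std::string next; };

// Follow the chain with a plain loop: O(1) call-stack depth no
// matter how many files there are.
std::string readAll(const std::map<std::string, Chunk>& disk,
                    std::string name) {
    std::string result;
    while (!name.empty()) {
        auto it = disk.find(name);
        if (it == disk.end()) break;   // dangling link: stop
        result += it->second.payload;
        name = it->second.next;        // next file, no recursion
    }
    return result;
}
```

The real version opens and closes one file per iteration instead of doing a map lookup, but the control flow is the same.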

Neat! Hope you win... :-)