
I've never understood how to do space analysis. I understand that it mostly depends on a part that varies with the instance, but I don't see it. I understand time analysis for programs, but space doesn't seem as easy to figure out.

If someone could just try to explain what to look for when dealing with space analysis, that would be great.

Find the best Theta() for the worst-case space required by each of the following pieces of code:

  a.
    vector<vector<int> > v(n);  // creates a vector of vectors
    for (int i = 0; i < n; i++)
        v[i].resize(n);         // allocates space for each inner vector

Here's my take on this part. Creating the vector of vectors takes space, and then space is allocated for each inner vector. So I figure that's all the space this code needs; I just don't see how to put a number to it. It could be n vectors of size n, so my guess would be n*n, but that's more of a guess than anything.

  b. foo(n) where

    int foo(int n)
    {
        if (n < 2)
            return 73;
        else
            return foo(n / 2);
    }

For this one, since it's recursive, I would assume it's n times a constant, thus leaving it at n.

Last Post by Rashakil Fol

Mathematically, space analysis works the same way as time analysis, except that you measure a function by the maximum amount of memory it uses, instead of the total amount of time.

So it just involves asking the question: how much space does a function use?

In the vector example, the answer is Theta(n*n), because space for n*n integers gets allocated.

In your second example, it's recursive, and there's no extraneous space usage outside of that needed for your stack frames. So the question is: how many stack frames are there? Hint: the answer is not n.

----

To approach the vector problem more precisely, we can note that the "memory usage" of the vector v is more like sizeof(vector<vector<int>>) + n * elementUsage. I'm assuming the vector holds a few fields for a size and an allocation limit, along with a pointer to its dynamically allocated array; I'm then accounting for the space taken by the array of n elements by multiplying each element's usage by n. This expression produces an exact number of bytes, but since the element array is allocated on the heap, there are also a few bytes of overhead, not controlled or known by us, used to keep track of the allocation (presumably a constant amount, A). So we could say more precisely that the memory usage is sizeof(vector<vector<int>>) + A + n * elementUsage.

And the value of elementUsage is going to be sizeof(vector<int>) + A + n * sizeof(int), because an int uses precisely sizeof(int) bytes of memory, with nothing pointing elsewhere on the heap.

So the "actual" memory usage is sizeof(vector<vector<int>>) + A + n * (sizeof(vector<int>) + A + n * sizeof(int)). This value could still be off by a few bytes: maybe the heap allocator can only hand out memory in multiples of eight bytes, in which case, if n is odd and sizeof(int) is 4, each inner array carries 4 hidden bytes. But that doesn't affect the Theta-notation description of memory usage. (You can prove it doesn't by supposing the constant A were slightly larger.) Another wrinkle is that the vector datatype can overallocate its array. I'm pretty sure it does not do this when you explicitly set the size, though, so we won't consider that possibility here.

If we take the "actual" memory usage, we've got Theta(1) + n * (Theta(1) + n * Theta(1)), which simplifies to Theta(n*n).

----

If I were to do this myself, I'd simply note that a vector has Theta(1) memory usage on top of n * elementSize memory usage. And then I'd note that each element has Theta(1) overhead on top of n * Theta(1) memory usage. So we have Theta(1) + n * (Theta(1) + n * Theta(1)), and that's the end of it.

Once you've internalized that a vector has Theta(1) overhead + the sum of its elements' memory usage, you don't need to worry about all the stuff you've read.

----

Also, I said that we'd just assume our vectors aren't overallocating the array. But suppose one does. The worst it can do is overallocate by a constant factor, typically 2 (that's how geometric growth works). So then you can argue that the memory usage of a vector is Theta(1) + the sum of its elements' memory usage + O(n). But the sum of the elements' memory usage is always at least n bytes, so that O(n) term is absorbed and changes nothing.
