Hi All,

I am working on optimizing the memory usage of my code, and I found a problem that I do not understand. Please let me know what you think.

Code:

#include <cstdio>
#include <map>
#include <string>
#include <utility>
#include <vector>
using namespace std;

typedef map<pair<string, int>, vector<pair<int, float> > > TDM;

void parsefile (TDM & my_map, string in_file); // parse input file and put it in the data map
float CheckMemUsage (void);                    // check memory usage by extracting /proc/self/stat info

int main (int argc, char *argv[]) {
    TDM my_map;
    string in_file = "xxx";

    printf ("INFO: Memory usage before parse file: %.2fM.\n", CheckMemUsage ());
    parsefile (my_map, in_file);
    printf ("INFO: Memory usage after parse file: %.2fM.\n", CheckMemUsage ());

    // release each value vector, then the map itself, via swap with an empty temporary
    for (TDM::iterator it = my_map.begin(); it != my_map.end(); ++it) {
        vector<pair<int, float> > ().swap (it->second);
    }
    TDM ().swap (my_map);
    printf ("INFO: Memory usage after delete map: %.2fM.\n", CheckMemUsage ());

    return 0;
}

The report of this program:

INFO: Memory usage before parse file: 13.46M.
INFO: Memory usage after parse file: 203.53M.
INFO: Memory usage after delete map: 132.59M.

In the function parsefile, I only use string, vector, pair, and map; I never allocate anything with new myself. So at that point I assume that only my_map is holding memory. After deleting the map, the memory usage should return to what it was before parsing the file (13.46M), but it still reports 132.59M. What is this 132M for? Is there something wrong in my program?

Thanks a lot for your time.

Instead of swap(), use erase(), e.g. my_map.erase(my_map.begin(), my_map.end()); Then you can delete the swap() lines (the for-loop and the TDM().swap() call) from the code snippet you posted.

erase() does not release the memory until the map itself is destroyed, but swap() can release it immediately.
I tried erase() anyway, as my_map.erase(my_map.begin(), my_map.end());, and the report said:

INFO: Memory usage before parse file: 13.46M.
INFO: Memory usage after parse file: 203.53M.
INFO: Memory usage after delete map: 203.53M.
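
For what it's worth, the difference is easiest to see on a single vector, where capacity() shows whether the buffer was actually given up. A standalone sketch (not the original parsefile data):

#include <cstdio>
#include <utility>
#include <vector>
using namespace std;

int main () {
    vector<pair<int, float> > v (1000000, make_pair (0, 0.0f));

    v.clear ();                               // size drops to 0, but the capacity is usually kept
    printf ("after clear(): capacity = %zu\n", v.capacity ());

    vector<pair<int, float> > ().swap (v);    // swapping with an empty temporary frees the buffer
    printf ("after swap():  capacity = %zu\n", v.capacity ());

    return 0;
}

Whether the freed buffer then shows up as a lower number in /proc/self/stat is a separate question, because the allocator may keep it cached inside the process rather than return it to the OS.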

How does CheckMemUsage work? It could be that you're expecting memory management to behave differently than it actually does.

For CheckMemUsage, I used the Linux /proc/self/stat info; you can see similar code below:

#include <fstream>
#include <string>
#include <unistd.h>   // for sysconf()

void process_mem_usage(double& vm_usage, double& resident_set)
{
   using std::ios_base;
   using std::ifstream;
   using std::string;

   vm_usage     = 0.0;
   resident_set = 0.0;

   // 'file' stat seems to give the most reliable results
   //
   ifstream stat_stream("/proc/self/stat",ios_base::in);

   // dummy vars for leading entries in stat that we don't care about
   //
   string pid, comm, state, ppid, pgrp, session, tty_nr;
   string tpgid, flags, minflt, cminflt, majflt, cmajflt;
   string utime, stime, cutime, cstime, priority, nice;
   string O, itrealvalue, starttime;

   // the two fields we want
   //
   unsigned long vsize;
   long rss;

   stat_stream >> pid >> comm >> state >> ppid >> pgrp >> session >> tty_nr
               >> tpgid >> flags >> minflt >> cminflt >> majflt >> cmajflt
               >> utime >> stime >> cutime >> cstime >> priority >> nice
               >> O >> itrealvalue >> starttime >> vsize >> rss; // don't care about the rest

   stat_stream.close();

   long page_size_kb = sysconf(_SC_PAGE_SIZE) / 1024; // in case x86-64 is configured to use 2MB pages
   vm_usage     = vsize / 1024.0;
   resident_set = rss * page_size_kb;
}
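
The CheckMemUsage used in the first post was not shown, so the following is only a guess at how it might wrap the function above; it assumes the numbers in the report are the resident set size converted to megabytes:

// A guess at CheckMemUsage(): the real implementation was not posted, so the
// choice of reporting the resident set size (rather than the virtual size),
// converted from KB to MB, is an assumption.
float CheckMemUsage (void) {
    double vm_usage = 0.0, resident_set = 0.0;   // process_mem_usage() reports both in KB
    process_mem_usage (vm_usage, resident_set);
    return static_cast<float> (resident_set / 1024.0);   // KB -> MB
}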

So are you taking into account that memory allocated to a process may be released within the process but not handed back to the OS, for the sake of reallocation performance?

I am afraid not. Actually, I have no idea how to account for the memory you are talking about. Any suggestions? Also, another interesting issue: if I comment out either

for (TDM::iterator it = my_map.begin(); it != my_map.end(); ++it) {
    vector<pair<int, float> > ().swap (it->second);
}
or
TDM ().swap (my_map);

then the memory usage after deleting the map stays at 203.53M. The reduction in memory usage can be seen only when both swap() calls are used.
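
If this swap-with-temporary pattern is needed in several places, it can be wrapped in a small helper template. A sketch; the names free_container and release_all are made up here, not part of any library or of the original code:

#include <map>
#include <string>
#include <utility>
#include <vector>
using namespace std;

typedef map<pair<string, int>, vector<pair<int, float> > > TDM;

// Swap the container with a default-constructed temporary; when the temporary
// is destroyed at the end of the statement, it takes the old storage with it.
template <typename Container>
void free_container (Container & c) {
    Container ().swap (c);
}

void release_all (TDM & my_map) {
    for (TDM::iterator it = my_map.begin(); it != my_map.end(); ++it)
        free_container (it->second);   // drop each value vector's buffer
    free_container (my_map);           // then drop the map's own nodes
}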

Actually, I have no idea how to account for the memory you are talking about. Any suggestions?

Can you be more specific about what you're looking to optimize? Is it minimizing total memory usage over time or decreasing the lifetime of large allocations? And this should go without saying, but have you profiled your code already and determined that it uses too much memory?

What I am trying to do is minimize the total memory usage of my program. I need to read multiple files and check their contents. Some contents will be removed from my database map if they prove to be useless in a later stage. What troubles me is that even after I remove some content, the memory is not released as I expect, which may interfere with my further operations (when I try to allocate memory again). I want to make sure that my code does not waste memory.

How do I profile the code as you suggested? I am new to this and have no idea how to do it. Thanks for your suggestions.

What troubles me is that even after I remove some content, the memory is not released as I expect, which may interfere with my further operations (when I try to allocate memory again).

I wouldn't worry about that. The way it typically works is if the process needs more memory on an allocation, it'll request that memory from the OS. When blocks are released, they're simply marked as unused in the process but not necessarily returned to the OS. Further allocations would still look at the unused blocks and reuse them if they fit the request.
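
If it ever becomes important to actually hand freed memory back to the OS on Linux, glibc's allocator can be asked to trim its heap with malloc_trim() from <malloc.h>. A rough, glibc-specific sketch; the allocation sizes here are arbitrary, chosen only to imitate the many small per-key vectors:

#include <cstdio>
#include <malloc.h>   // glibc-specific: malloc_trim()
#include <utility>
#include <vector>
using namespace std;

int main () {
    {
        // Lots of small allocations, roughly imitating the per-key vectors in
        // the map, then release them all with the swap idiom.
        vector<vector<pair<int, float> > > many (100000, vector<pair<int, float> > (100));
        vector<vector<pair<int, float> > > ().swap (many);
    }

    // Ask glibc to return as much of the freed heap as possible to the kernel.
    // It can only give back trailing free space and whole free pages, so a
    // fragmented heap may still keep memory mapped.
    if (malloc_trim (0))
        printf ("some memory was returned to the OS\n");
    else
        printf ("nothing could be returned\n");

    return 0;
}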

As far as minimizing total memory usage, it's a combination of data structure and algorithm choices more than anything. You want to choose algorithms that don't require vast amounts of data to be stored at any given moment, and data structures that don't have a large overhead.

You can also look for ways of representing your data more compactly.
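
As one hypothetical example of a more compact representation (the struct and function names below are invented for illustration, not taken from the original code): the same data as in the map<pair<string,int>, vector<pair<int,float> > > can be kept in a single flat, sorted vector of records, which avoids the per-node overhead of map and the bookkeeping of many small vectors:

#include <algorithm>
#include <string>
#include <utility>
#include <vector>
using namespace std;

// One flat record per (name, id) key instead of map nodes plus nested vectors.
// The field names are illustrative only.
struct Record {
    string name;
    int    id;
    vector<pair<int, float> > values;

    bool operator< (const Record & other) const {
        if (name != other.name) return name < other.name;
        return id < other.id;
    }
};

typedef vector<Record> FlatDB;

// Build the vector in any order, sort it once, then look keys up by binary search.
void finalize (FlatDB & db) {
    sort (db.begin(), db.end());
}

const Record * find_record (const FlatDB & db, const string & name, int id) {
    Record key;
    key.name = name;
    key.id   = id;
    FlatDB::const_iterator it = lower_bound (db.begin(), db.end(), key);
    if (it != db.end() && it->name == name && it->id == id)
        return &*it;
    return 0;   // not found
}

The trade-off is that lookups need a sort plus binary search instead of the map's balanced tree, which works best when the data is built once and then mostly read.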

Okay, Narue, thank you very much for your reply and suggestions.

I must say that I'm surprised that this happens.
I know that memory management doesn't always return the pages to the OS immediately, but it seems that even if you create and delete 1 GB+ worth of objects, the memory is not returned to the OS, even after some time has passed (I waited about ten minutes).
It seems that this is true for all objects allocated with new. When deleting arrays created by new[], the memory seems to be returned to the OS immediately.
Doesn't this behavior cause high overall memory usage?
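
One likely explanation, assuming glibc's malloc: allocations above the mmap threshold (roughly 128 KB by default) are served with mmap() and unmapped, i.e. returned to the OS, as soon as they are freed, which is why big new[] arrays seem to come back immediately, while small objects are carved out of the heap arena and tend to stay with the process after delete. The threshold can even be tuned with mallopt(), as in this glibc-specific sketch:

#include <cstdio>
#include <malloc.h>   // glibc-specific: mallopt(), M_MMAP_THRESHOLD

int main () {
    // Make allocations of 64 KB and above mmap-backed, so they are unmapped
    // (returned to the OS) as soon as they are deleted.  The default threshold
    // is about 128 KB and adapts at runtime.
    mallopt (M_MMAP_THRESHOLD, 64 * 1024);

    char * big = new char[1 << 20];      // 1 MB: above the threshold, served by mmap()
    delete[] big;                        // the mapping is released immediately

    char * tiny = new char[256];         // below the threshold: comes from the heap arena
    delete[] tiny;                       // freed inside the process, not necessarily to the OS

    printf ("done\n");
    return 0;
}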

Never mind, apparently this is normal behavior on Unix systems and relies on the assumption that peak memory usage is not significantly higher than the average usage.

P.S. the edit limit could be slightly longer than it is now.
