I've been writing a Python module which contains a few classes. As I have come to use the module in earnest it has become clear that I need to create a _very_ large number of instances of these classes. This is using more memory than I would like. Speed is not an issue but memory is becoming a serious issue for me.

So, my question is: if I recode the module in C/C++ will I gain any memory saving over a Python module? Or do classes/structs/variables instantiated from a Python program have the same overhead as straight Python?

Whilst I have used __slots__ where possible to gain a small improvement, I feel I need to go much further. The other alternative is to completely change the way my module works. Before running down either route, I wondered if anyone more familiar with Python could shed any light on this for me?
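For context, this is the kind of __slots__ usage I mean (the class and attribute names here are just placeholders, not my real module):

```python
import sys

class Plain:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class Slotted:
    __slots__ = ("x", "y")  # instances get fixed slots instead of a per-instance __dict__
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Plain(1, 2)
s = Slotted(1, 2)

# The plain instance carries a dict; the slotted one does not.
print(sys.getsizeof(p) + sys.getsizeof(p.__dict__))  # object + its __dict__
print(sys.getsizeof(s))                              # object only
```

The saving per instance is modest, which is why with millions of instances I'm wondering whether C/C++ is the next step.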

With such a large number of class instances you may have to move the data out to disk; have a look at the pickle and shelve modules.
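A minimal sketch of shelve (the filename and the stored values are arbitrary):

```python
import shelve

# Store objects on disk, keyed by string, instead of holding them all in RAM.
with shelve.open("instances.db") as db:
    db["item1"] = {"x": 1, "y": 2}  # any picklable object works
    db["item2"] = {"x": 3, "y": 4}

# Reopen later; values are unpickled only when you access them.
with shelve.open("instances.db") as db:
    print(db["item1"]["x"])
```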

Thanks for that.

I've tried 'shelve', but the program continues to consume RAM despite calls to shelve.close(), and even after forcing garbage collection (which shouldn't be necessary anyway). Tracking live objects through the gc module suggests I don't have a memory leak, so it appears to be something to do with the 'shelve' module itself.
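One shelve behaviour I had to rule out: if the shelf is opened with writeback=True, every entry you write or access is kept in an in-memory cache until sync() or close(), which can look exactly like a leak. A quick sketch (the filename is arbitrary, and peeking at the internal cache dict is just for illustration):

```python
import shelve

# writeback=True keeps every touched entry cached in RAM until sync()/close().
db = shelve.open("cache_test.db", writeback=True)
for i in range(1000):
    db[str(i)] = list(range(100))

cached = len(db.cache)  # entries pile up in the write-back cache
db.sync()               # flushes the cache to disk and empties it
assert len(db.cache) == 0
db.close()
print(cached)
```

Opening without writeback (the default) avoids that cache entirely, at the cost of having to re-assign mutated entries.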

More work is required it seems.