Hi,

I'm taking a shot at the Netflix challenge and relearning C++ at the same time (I usually code in Python now). I'm storing the data in the following structure:

typedef struct ndata {
    unsigned int userid : 22;   // user id
    unsigned int rating : 3;    // rating, 1-5
    unsigned int        : 0;    // zero-width field: starts the next field on a fresh int
    unsigned int movieid : 15;  // movie id

    unsigned int year  : 3;     // year of the rating (stored as a small offset)
    unsigned int month : 4;
    unsigned int day   : 5;
} ndata;
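
For what it's worth, the zero-width field closes out the first 32-bit unit, so with g++ the whole record should pack into two unsigned ints. A quick way to confirm (with the ndata definition above in scope):

#include <cstdio>

int main() {
    // expect 8 on g++: 22+3 bits in the first unit, 15+3+4+5 in the second
    printf("sizeof(ndata) = %u\n", (unsigned)sizeof(ndata));
    return 0;
}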

Each of these ndata records represents one user rating a single movie on a particular day. The records are given to me organized by movie. What I want to do is load them into RAM organized by user. Since I don't know a priori how many movies each user rated, I'm making an array of lists to hold the user-organized info (one list per user), roughly as in the sketch below.
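
In other words, something like this (a sketch, assuming the user ids have already been remapped to a dense 0..N-1 range; NUM_USERS stands in for the real count):

#include <list>
using std::list;

const int NUM_USERS = 480189;  // placeholder for the number of distinct users

int main() {
    // one list per user -- this is the declaration that's giving me trouble below
    list<ndata> list_array[NUM_USERS];

    ndata r = ndata();  // ... filled in from the movie-ordered input ...
    list_array[r.userid].push_back(r);  // regroup: append each record to its user's list
    return 0;
}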

The problem I'm having is that there are ~480,000 users. When I declare

list<ndata> list_array[200000];

there are no problems and the program runs. However, when I declare

list<ndata> list_array[500000];

the program says "segmentation fault (core dumped)" as soon as I execute it.

To me, this sounds like a memory problem. But I don't understand what limit I am running into here. Any ideas?

What compiler are you using? If you're on one of the old 16-bit compilers, you'd have to switch to the large memory model. With a modern compiler, though, the limit you're hitting is the stack: a local array is carved out of the stack, which is typically capped at a few megabytes, and half a million list objects need more than that, so the program dies before it even reaches your code.

Dynamic memory allocation (putting the array on the heap instead) would be a better solution.
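
Something along these lines (a sketch, reusing the ndata struct from your post; error handling omitted):

#include <list>
using std::list;

int main() {
    // new[] allocates the lists on the heap, which is limited by available
    // memory rather than by the stack size
    list<ndata> *list_array = new list<ndata>[500000];

    // ... fill and use list_array exactly as you would the plain array ...

    delete[] list_array;  // release the memory when finished
    return 0;
}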

I'm using g++ under Cygwin (newly downloaded). I'll look into dynamic memory allocation -- I don't even really know what it means at the moment.

No problems regarding the compiler, then. I'm not sure about the exact details here, but yes, at the very least it is a memory problem. I'd still suggest that you learn dynamic memory allocation; it's the better approach for the number of elements you're dealing with. See the sketch below.
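
The usual C++ way is std::vector, which keeps its elements on the heap and frees them for you (again assuming the ndata struct from the first post):

#include <list>
#include <vector>
using std::list;
using std::vector;

int main() {
    // the vector object itself is tiny and sits on the stack, but its
    // 500,000 lists live on the heap; the destructor cleans them up
    vector< list<ndata> > list_array(500000);

    list_array[12345].push_back(ndata());  // used just like the plain array
    return 0;
}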
