I am new to Python. I wrote a program that reads in large text files using xreadlines, stores some of the values in lists (and arrays), and does some calculations. It runs fine for individual files, but when I try to process the files in a folder consecutively, I get a memory error.

My program looks like this:

    data = open(fileName).xreadlines()
    for line in data:
        tokens = line.split(';')
        list1.append(tokens[2])
        list2.append(tokens[3])
        ...
    ...
    outfile.write(results)

When I enter fileName manually and run it on one file, it works fine, but when I do:

    for file in os.listdir(dir):
        # code as above

I get a memory error after processing the first file.

I have tried to delete the lists manually after the calculations, either with aList = [] or del aList, but it didn't help. So, how can I free the memory? Platform: Windows XP.
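For reference, rebinding a name or using del only makes a list collectible if no other name still references it; a minimal illustration (the variable names here are made up for the demo):

```python
a_list = list(range(1000))
b_list = a_list          # a second reference to the same list object

del a_list               # removes the *name*, but the list survives via b_list
assert len(b_list) == 1000

b_list = []              # now no references remain; the old list can be reclaimed
assert b_list == []
```

So if another variable, a containing list, or an outer scope still holds a reference to the data, neither `del` nor rebinding will release the memory.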

Please tell us how the program is invoked, what happens, the exact failure mode, and the text of the failure.

Can you (just) process all the files? That is: if you comment out list1.append(tokens[2]) etc. and just throw the results away, does the program run to completion? What happens if you run it in a directory with just one file? With just two files?

I think you are reading your whole file into memory with readlines. Instead, iterate over the open file object directly in the for loop; it yields one line at a time.
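A minimal sketch of that idea, adapted to the per-file loop in the question (the `process_file` helper, the column indices, and the demo folder are assumptions for illustration):

```python
import os
import tempfile

def process_file(path):
    # Fresh lists per file: once this function returns, the previous
    # file's lists have no remaining references and can be reclaimed.
    list1, list2 = [], []
    with open(path) as f:            # the file object yields one line at a time
        for line in f:
            tokens = line.rstrip('\n').split(';')
            list1.append(tokens[2])
            list2.append(tokens[3])
    return list1, list2

# Tiny demo: write one sample file, then process every file in the folder.
folder = tempfile.mkdtemp()
with open(os.path.join(folder, 'sample.txt'), 'w') as f:
    f.write('a;b;c;d\ne;f;g;h\n')

for name in os.listdir(folder):
    list1, list2 = process_file(os.path.join(folder, name))
# list1 is now ['c', 'g'] and list2 is ['d', 'h']
```

Iterating the file object (instead of readlines/xreadlines) keeps only one line in memory at a time, and creating the lists inside a function scopes their lifetime to a single file.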