
I am new to Python. I wrote a program that reads in large text files using xreadlines, stores some values in lists (and arrays), and does some calculations. It runs fine for individual files, but when I try to process the files in a folder consecutively, I get a memory error.

My program looks like this:

data = open(fileName).xreadlines()
for line in data:
    tokens = line.split(';')
    list1.append(tokens[2])
    list2.append(tokens[3])
    ...
    ...
outfile.write(results)

When I enter fileName manually and run it for one file, it works fine, but when I do:

for file in os.listdir(dir):
    # code as above

I get a memory error after processing the first file.

I have tried to delete the lists manually after the calculations, either with aList = [] or del aList. So, how can I free the memory? Platform: Windows XP.

3 Contributors · 2 Replies · 3 Views · 7 Years Discussion Span · Last Post by pyTony

Please tell us how the program is invoked, what happens, what the exact failure mode is, and the text of the failure.

Can you just process all the files? That is: if you comment out the list1.append(tokens[2]) etc. and just throw the results away, does the program run to completion? What happens if you run it in a directory with just one file? Just two files?
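A minimal sketch of the test suggested above, assuming a layout like the one in the question: read every line of every file but store nothing, so any remaining memory growth must come from the reading itself rather than the lists. The names here (scan_only, dir_path) are illustrative, not from the original program.

```python
import os

def scan_only(dir_path):
    """Read every line of every file in dir_path, appending nothing."""
    line_count = 0
    for name in os.listdir(dir_path):
        with open(os.path.join(dir_path, name)) as f:
            for line in f:        # touch each line, store nothing
                line_count += 1
    return line_count
```

If this runs to completion over the whole folder, the memory error almost certainly comes from the lists growing across files, not from the file reading.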


I think you are reading in all of your file at once with readlines. Iterate over the open(filename) file object directly in the for loop instead of data; the file object yields one line at a time.
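A sketch of that suggestion, keeping the ';' field layout from the question: iterate over the open file object itself, which produces lines lazily, and keep the per-file lists local to the function so they can be freed after each file. The function and variable names (process_file, path) are illustrative.

```python
def process_file(path):
    """Stream one file line by line; the lists live only for this file."""
    list1, list2 = [], []
    with open(path) as f:          # a file object is a lazy line iterator
        for line in f:
            tokens = line.rstrip('\n').split(';')
            list1.append(tokens[2])
            list2.append(tokens[3])
    return list1, list2
```

Calling this once per file from the os.listdir loop means the lists go out of scope after each file, so their memory can be reclaimed instead of accumulating across the whole folder.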
