When getting a file from a web server or FTP server, if we use

u = urllib2.urlopen(url)

correct me if I am wrong, but the file is not downloaded to disk; I think it is stored in a buffer (RAM). Is there a way to write the buffered data to disk as it arrives, since a massive file would otherwise slow down the computer? What is the better approach to retrieving a file over FTP or from a web server?


All 2 Replies

It seems to me that urlretrieve() answers your question. For ftp, the example at the top of the ftplib module's documentation shows how to download a file named README.
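To make the two suggestions concrete, here is a sketch of both: urlretrieve(), which streams the response to disk itself, and an ftplib download along the lines of the README example in the module's documentation. The host, file names, and credentials are placeholders, not real servers, so this is a pattern to adapt rather than something to run as-is (written against Python 3, where urllib2's functionality lives in urllib.request).

```python
from ftplib import FTP
import urllib.request

def http_download(url, dest):
    # urlretrieve copies the response to `dest` in chunks internally,
    # so the whole file never has to fit in RAM
    filename, headers = urllib.request.urlretrieve(url, dest)
    return filename

def ftp_download(host, remote_name, dest, user="anonymous", passwd=""):
    # retrbinary invokes f.write once per received block (8 KB by default)
    with FTP(host) as ftp:
        ftp.login(user, passwd)
        with open(dest, "wb") as f:
            ftp.retrbinary("RETR " + remote_name, f.write)
```

For example, `ftp_download("ftp.example.com", "README", "README")` would mirror the documentation's README download.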

Also a search for ftp on pypi may reveal promising modules such as ftptool and others.

There is a script in the Python distribution:

pythonhome\Tools\Scripts\ftpmirror.py

Maybe it will help you to have a look at it.
