Hi guys,
I have been away from Python for a long time now.
I need to brush up with a small project that will download images from a given URL.
I give a URL and it crawls through all pages in the given location and its subfolders and downloads the images. Now three challenges:
1. Crawl through all pages at the given URL (folder and subfolders)
2. Download the found images (urllib2?)
3. The sites need authentication; how do I do that?

Please help point me in the right direction, and I missed you ;)

First you can start by downloading a picture from the web.

2. Download found Images (urllib2?)

Yes, you can use urllib2.

I have been thinking of making an image crawler for a while now, maybe with a GUI frontend (wxPython).
Just haven't gotten started yet.

Here is some code you can look at; it downloads a random picture I found on the net and saves it to disk.

from urllib2 import urlopen

def download_from_web(url, outfile):
    '''Give the url address of the source you want to download.
    outfile is the name of the local file, e.g. <something.jpg>'''
    try:
        webFile = urlopen(url)
        localFile = open(outfile, 'wb')
        localFile.write(webFile.read())
        webFile.close()
        localFile.close()
    except IOError, e:
        print "Download error: %s" % e

def main():
    # Just a random picture; put the url of the image you want here
    download_from_web('http://example.com/picture.jpg', 'picture.jpg')

if __name__ == "__main__":
    main()
Edited 6 Years Ago by snippsat: n/a

Thanks for the snippets.
Do you have an idea how to get the files on the web server with their paths, detect the images among them, and download them? I'm thinking about it but haven't got the "how-to" yet.

You could open the page via urllib2, read the source, and look for image extensions.
Regex and plain old slicing could do it.
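A minimal sketch of the regex idea, assuming you already have the page source in a string (the html snippet and the pattern here are made up for illustration, not from any real site):

```python
import re

# Pretend this came from urlopen(url).read()
html = '<body><img src="logo.png"> <a href="photos/cat.jpg">cat</a></body>'

# Match anything that ends in a common image extension.
img_pattern = re.compile(r'[\w./-]+\.(?:jpg|jpeg|png|gif)', re.IGNORECASE)
images = img_pattern.findall(html)
print(images)  # -> ['logo.png', 'photos/cat.jpg']
```

It is crude (it will also pick up image names in plain text, and misses urls with query strings), but for a quick crawler it gets you a list of candidate files to feed to the download function.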

You could also parse for image tags. The HTML parser that comes with the standard library could help.
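A rough sketch with the stdlib HTML parser; note the module was renamed, so on Python 2 the import is `from HTMLParser import HTMLParser`, while the Python 3 form is shown here. The sample html is just for illustration:

```python
from html.parser import HTMLParser  # Python 2: from HTMLParser import HTMLParser

class ImgTagParser(HTMLParser):
    '''Collect the src attribute of every <img> tag seen.'''
    def __init__(self):
        HTMLParser.__init__(self)
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == 'img':
            for name, value in attrs:
                if name == 'src':
                    self.images.append(value)

parser = ImgTagParser()
parser.feed('<p><img src="a.png"><img alt="x" src="b.jpg"></p>')
print(parser.images)  # -> ['a.png', 'b.jpg']
```

This is more robust than a regex because it only looks inside real `<img>` tags; the same idea works for `<a href=...>` links when you want to follow subfolder pages.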
