I would like to write an application that would search a site like "minitorrents.com" and automatically download torrent files of tv shows I like. I am thinking urllib.py might be able to do something like this. I need to be able to use the search feature of "minitorrents.com", then recursively follow a few levels into the site, until my torrent file is found, then download it. Any ideas or suggestions would be appreciated.
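Just to make the idea concrete, here is a rough sketch of the kind of thing I am picturing (the search url and the link pattern are made up; I would have to check how the real site actually works):

import urllib2, re

# hypothetical search url -- the real query format on minitorrents.com
# would need to be checked first
search_url = 'http://minitorrents.com/search.php?q=king+of+queens'
page = urllib2.urlopen(search_url).read()

# pull out anything on the results page that looks like a .torrent link
links = re.findall(r'href="([^"]+\.torrent)"', page)
print links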

I have found a site that uses RSS feeds. I can use the feedparser module to get most of the job done. Here is where I am stuck: using feedparser, I get a url that looks like this
http://isohunt.com/btDetails.php?ihq=king+of+queens&id=6071088
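For reference, this is roughly how I am pulling those urls out with feedparser (the rss search url is the one isohunt provides; the query goes in the ihq parameter):

import feedparser

# isohunt's rss search feed for a show
url = 'http://isohunt.com/js/rss.php?ihq=king+of+queens'
feed = feedparser.parse(url)
for entry in feed.entries:
    print entry.title, entry.link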

On this page (the link above) there is a link I click on called "download.torrent". That opens up the save dialog in my browser and lets me download the torrent file. Is there a way to download this torrent file automatically (not using my web browser) with urllib2 or some other module?
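I am guessing it will end up being something like this, once I work out the url behind the "download.torrent" link (the download url format here is a guess; the id is the one from the btDetails link above):

import urllib

# guessed download url -- same id as in the btDetails link above
download_url = 'http://isohunt.com/download.php?mode=bt&id=6071088'
urllib.urlretrieve(download_url, 'king.of.queens.torrent')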

This is pretty complex programming. You are leaving me in awe! Where did you get the idea in the first place?

I am pretty obsessed with automating anything that can be automated, so it seemed natural to automate my torrent downloads. I should have part of the script hacked out this weekend :-)

This is pretty complex programming. You are leaving me in awe!

not sure if you are just messing with me :?: My programming skills are not that strong. I pretty much just know how to use the basic constructs: loops, if-then statements, and the other basic stuff. Python does most of the work for me; there is a module built to do just about anything, you just need to string them together :-)

Sorry, awe might not be the correct word. I am impressed by how you get an idea and quickly know how to tackle it with Python! I feel weak there. Maybe my brain is too saturated with C.

Thank you for the compliment :-)

If I do have a skill, it is hacking out short useful scripts (nothing technically complex). I figure the best way to learn Python is by coding stuff, so I will try to code just about anything that can be coded (at my level of expertise). I am always looking for ideas.

edit added later//

that is why I like Python so much. You can implement an idea very fast, without a whole lot of programming experience. It is very satisfying to have something work and be useful after just a few hours of coding.

I am always so impressed by the ease of using Python. It lets you make such useful stuff with so little code (or programming experience).

This program depends on the Python module feedparser (available at feedparser.org). It uses RSS feeds provided by isohunt. The idea is you just need to set your television shows in this variable; below is a sample:

feeds = [ ('medium', 350), ('the king of queens', 175), ('invasion', 350), ('two and a half men', 175), ('er', 0), ('yes dear', 175), ('deadwood', 0) ]

You need to set two parts per show. The first is the title; make sure you include the full title, including words like 'and', 'a', and 'the'. The second is the expected file size of the avi file in MB. If you do not want to specify a target file size, set it to 0.

It seems you can only get current shows for about 5 to 6 days after they were released, so as long as you run the script every 4 or so days you will not miss any new shows.

To run it on Linux, change to the directory where you want your torrent files saved, then just run the script.
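If you want it fully hands-off on Linux, a cron entry along these lines should work (the paths are just examples; adjust them to wherever you keep the script and your torrents):

# run every 4 days at 3am: cd to the torrent directory, then run the script
0 3 */4 * * cd /home/me/torrents && /usr/bin/python /home/me/tv_torrents.py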

To run it on Windows, just drag the file to the directory where you want your torrents saved and click on the icon.

Here is the code:

#!/usr/bin/env python
#
#
import os, feedparser, urllib

# this is the url used to search isohunt; 'shanenin' is just a placeholder
rss_url = 'http://isohunt.com/js/rss.php?ihq=shanenin&op=and&iht'
generic_download_url = 'http://isohunt.com/download.php?mode=bt&id=shanenin'

# fill this part with the names of the shows you want to track and
# the preferred file size of your download. Enter 0 for no preference.
# Entering 0 may return more bad torrents.
feeds = [ ('medium', 0), ('the king of queens', 175), ('invasion', 350), ('two and a half men', 175), ('er', 0), ('yes dear', 175), ('deadwood', 0) ]


# this function takes a tuple from feeds; the search term is the first
# element of the tuple
def get_feed_obj(search):
    url = rss_url.replace('shanenin', search[0].replace(' ', '+'))
    feed_obj = feedparser.parse(url)
    return feed_obj
    

# this function takes the summary and parses out the file size in MB;
# it is used by main()
def get_size(summary):
    size = summary.split('Size:')[1]
    size = float(size.split()[0])
    return size

# this function takes the link and parses out the torrent id; it is
# used by main()
def get_id(v_link):
    id = v_link.split('id=')[1]
    return id

# this function makes the directory if it does not already exist
def mkdir(name):
    if not os.path.isdir(name):
        os.mkdir(name)

# returns the feed name lowercased, with spaces replaced by the given
# delimiter (either '.' or '_')
def clean_feeds(feed, delimiter):
    cfeed = feed.replace(' ', delimiter).lower()
    return cfeed
    
def main():
    for show in feeds:
        direc = clean_feeds(show[0], '_')
        mkdir(direc)
        search_term = "%s." % clean_feeds(show[0], '.')
        size_parse = show[1]
        feed_obj = get_feed_obj(show)
        for entry in feed_obj.entries:
            title = entry.title.lower()
            id = get_id(entry.link)
            size = get_size(entry.summary)
            print_size = str(int(size))
            if title.startswith(search_term):
                # a size of 0 means no preference; otherwise allow 4 MB
                # either side of the target size
                if size_parse == 0 or (size_parse - 4 <= size <= size_parse + 4):
                    download_url = generic_download_url.replace('shanenin', id)
                    saved_title = "%s/%s.%s.torrent" % (direc, title, print_size)
                    urllib.urlretrieve(download_url, saved_title)

main()
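With the feeds variable above, the script creates one directory per show (for example the_king_of_queens) and saves each matching torrent inside it, named after the feed entry's title plus the size in MB, something along the lines of the.king.of.queens.s05e10.175.torrent, so it is easy to tell the downloads apart.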