I just solved my own problem: instead of using a Button widget, I am now using a Radiobutton widget with the attribute indicatoron=0.

I have a tkinter app that has 2 frames. The frame on the left has a bunch of buttons and the frame on the right has a text widget. I use the create_buttons function below to make the buttons. Is there a way to have a button stay sunken when clicked, and then, when I click the next button, have that one stay sunken instead? I want to be able to see which button was clicked last.

def create_buttons(self):
    ''' Loop that creates a button for each file in lterrors.txt.  Each button uses the
        client/server name as its title text and a custom action that opens the
        appropriate file in the text widget. '''
    try:
        with open(App.logfile, 'r') as f:
            lines = f.readlines()
    except IOError:
        message = 'Cannot find %s.' % (App.logfile)
        showerror(title='Sorry', message=message)
        return

    labels = []
    for i, line in enumerate(lines):
        filename = line.strip()
        filename = filename.replace(r'C:\LTShare', 'L:')  # raw string so the backslash survives
        host = os.path.dirname(filename)
        company = os.path.dirname(host)
        company = os.path.basename(company)
        host = os.path.split(host)[-1]
        host = host.split('-')[0]
        text = company + '-' + host
        labels.append((text, filename))

    for i, data in enumerate(labels):
        text, filename = data
        # default the lambda argument so each button captures its own filename
        button = Button(self.frame1, text=text,
                        command=lambda filename=filename: self.button_command(filename))
        button.grid(row=i, column=0, sticky='EW')
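The path juggling in the first loop is easier to get right (and to try out) as a small helper. A sketch; build_label and the sample paths are my own names, not from the app, and it uses ntpath so the Windows-style paths split the same way on any platform:

```python
import ntpath  # os.path's Windows flavour, so backslash paths split correctly anywhere

def build_label(filename):
    '''Derive 'company-host' button text from a full log-file path,
    mirroring the dirname/basename/split dance in the loop above.'''
    host_dir = ntpath.dirname(filename)                  # e.g. L:\Acme\web01-prod
    company = ntpath.basename(ntpath.dirname(host_dir))  # e.g. Acme
    host = ntpath.split(host_dir)[-1].split('-')[0]      # e.g. web01
    return company + '-' + host

print(build_label(r'L:\Acme\web01-prod\errors.txt'))  # Acme-web01
```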

I am not a big-time developer, just a guy who likes to tinker with python. Most of the python scripts I write are for myself. And lately, I started reading up on virtualenv.

So I get that working on various projects inside a virtualenv is a good thing, namely code and package isolation: installing a module in one virtualenv has no effect on another virtualenv. What I don't get is that after a while you have a whole bunch of projects on the go, but the only way to run them is in a virtual environment. Doesn't that get in the way? Doesn't it become a pain to start a virtualenv every time you want to run a script, or am I missing something?

How do you run your python scripts outside of the virtualenv?

I am new to flask and new to git so hopefully someone can help me.

This is my git repo:

I am trying to create a web-based spreadsheet. I am storing my data in a sqlite3 database. On my edit page, I would like to be able to edit existing entries and save them back to the database. But I don't know how to get the form data. I also don't know how to interactively introspect/debug that edit page to see what exists in that namespace.

I know you are supposed to use something like request.form.getlist('asset_tag'), but it is not working for me.

Can anyone help a noob?

Can someone please tell me why my menubar does not show up?


#!/usr/bin/env python

from Tkinter import *

class Application(Frame):

    def __init__(self, master=None):
        Frame.__init__(self, master)
        self.master.rowconfigure(0, weight=1)
        self.master.columnconfigure(0, weight=1)
        self.master.title('Test Menu')

    def createMenu(self, master):
        menubar = Menu(master)

        loadmenu = Menu(menubar)
        loadmenu.add_command(label='Load', command=self.load)
        loadmenu.add_command(label='Quit', command=self.quit)

    def createShell(self):
        frame = Frame(width=400, height=300)

    def load(self):
        pass

    def save(self):
        pass

    def quit(self):
        pass

root = Tk()
app = Application(master=root)

Wow, thanks for the responses guys. Tonyjv, that's exactly what I was trying to do but could not wrap my head around.

The couple of eye openers for me were:

  • The clever use of zip() and tkvars
  • Making self.siteno a local variable so I can delete the current site yet still keep track in a dictionary
  • Using a frame for each row makes it way easier to remove/destroy()
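In case it helps the next reader, the zip() trick is just pairing parallel sequences; a toy version with made-up site data:

```python
sites = ['alpha', 'bravo', 'charlie']
ports = [21, 2121, 8021]

# zip() pairs the lists element by element; dict() turns the pairs into a lookup
pairs = list(zip(sites, ports))
lookup = dict(pairs)

print(pairs)            # [('alpha', 21), ('bravo', 2121), ('charlie', 8021)]
print(lookup['bravo'])  # 2121
```

In the real app the second list would hold tkvars (StringVar and friends) instead of plain ints.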

I'm on my way to making my first useful gui. Thanks guys.

I am doing the O'Reilly School of Technology course and the current topic deals with Tkinter. It gave me the idea to write this application.

The idea is to transfer files to various ftp sites. If I set a default master password, then that password should be used for all sites. If I click the add/remove button it should add/remove a line for the next site's details. Ideally the remove button should remove the current site from the list; right now it removes the last site (or tries to, anyway). Not sure how to keep track of that.

I am having problems with the following:

1) Having the master password update all site password entries
2) The add/remove loses count and doesn't always remove the sites (say, if I add 4+ entries)

I have no idea what's wrong, any help would be appreciated.


#!/usr/bin/env python

from Tkinter import *

class Application(Frame):

    def __init__(self, master=None):
        Frame.__init__(self, master)
        self.master.rowconfigure(0, weight=1)
        self.master.columnconfigure(0, weight=1)
        self.master.title('Bulk FTP Updater')

        self.toggle_pass = 1

    def createShell(self):
        self.e_master_var = StringVar()
        self.use_master_p = IntVar()

        self.l_master = Label(self, text='Master Password:')
        self.e_master = Entry(self, textvariable=self.e_master_var)
        # this call was cut off in the post; variable/command reconstructed
        self.c_master = Checkbutton(self, variable=self.use_master_p,
                                    command=self.use_master_pass)
        self.l2_master = Label(self, text='Set as default')
        self.blank = Label(self, text='', pady=20)

        self.l_master.grid(row=0, column=0, sticky=W)
        self.e_master.grid(row=0, column=1, sticky=W)
        self.l2_master.grid(row=1, column=0, sticky=E)
        self.c_master.grid(row=1, column=1, sticky=W)
        self.blank.grid(row=2, column=0)

        self.r = 3

    def use_master_pass(self):
        password = self.e_master_var.get()
        if self.toggle_pass:
            for r in range(3, self.r+1):
                pass  # loop body lost in the original post
            self.toggle_pass = 0
        else:  # else branch reconstructed; the original indentation was lost
            for r in range(3, self.r+1):
                pass  # loop body lost in the original post
            self.toggle_pass = 1

    def createSite(self, r):
        self.l_host = Label(self, ...
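On problem 2 above: keeping each row under its own key, rather than counting rows, makes removing any row reliable. A sketch of that bookkeeping with the Tkinter parts stubbed out; all names here are mine:

```python
rows = {}      # row_id -> row data (in the real app: the row's Frame)
next_id = [0]  # counter in a list so add_row can bump it

def add_row(site):
    '''Register a new row and return its permanent id.'''
    row_id = next_id[0]
    rows[row_id] = site
    next_id[0] += 1
    return row_id

def remove_row(row_id):
    '''Forget one specific row by id.'''
    rows.pop(row_id)  # in the real app: rows.pop(row_id).destroy()

a = add_row('site-a')
b = add_row('site-b')
c = add_row('site-c')
remove_row(b)                 # removing the middle row works; there is no count to lose
print(sorted(rows.values()))  # ['site-a', 'site-c']
```

The ids never shift when a row is deleted, which is exactly what goes wrong when you track rows by position.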

Tim Golden is your friend...



import win32api
import win32con
import win32security

FILENAME = "temp.txt"
open (FILENAME, "w").close ()

print "I am", win32api.GetUserNameEx (win32con.NameSamCompatible)

sd = win32security.GetFileSecurity (FILENAME, win32security.OWNER_SECURITY_INFORMATION)
owner_sid = sd.GetSecurityDescriptorOwner ()
name, domain, type = win32security.LookupAccountSid (None, owner_sid)

print "File owned by %s\%s" % (domain, name)


I am a programming and python beginner and thought this would be a fun exercise. I wrote this script to mine web pages. First it finds all of the hrefs on the page; then it takes those urls and searches those pages for content. This is by no means perfect: for one, it searches hrefs only, and two, when I search the page for content I have to give it an offset to find 'text' content, which isn't always ideal. I know the code is long and few will read it, but I was wondering if anyone had any better approaches to doing this?


''' Data Mining Script '''
from BeautifulSoup import BeautifulSoup
import urllib2
from urllib2 import HTTPError
from urlparse import urljoin, urlsplit
import httplib
from optparse import OptionParser
import re
import sys
from tempfile import TemporaryFile
from xlwt import Workbook, easyxf

EXCLUDES = ['&nbsp', '\n', '\r', '\t']
OUTPUT = 'output.xls'

STYLE6 = easyxf('pattern: pattern solid, fore_color grey25;'
                'font: bold yes, height 160;'
                'border: top medium, bottom medium')

class Miner():
    ''' Data Miner '''
    def __init__(self, options, url):
        self.url = url
        self.options = options
        self.keys = options.keys1.split(',')

    def get_soup(self):
        ''' Parse HTML into BeautifulSoup '''
        opener = urllib2.build_opener()
        opener.addheaders = [('User-agent', 'Mozilla/5.0')]
        try:
            # argument cut off in the post; presumably the opened url
            soup = BeautifulSoup(
            self.get_links(doc=soup, keys=self.keys)
        except HTTPError:
            pass  # handler body lost in the original post

    def get_links(self, doc, keys):
        ''' Use a search pattern provided by list(keys) to
            find hrefs that match. '''
        links = []
        for link in doc.findAll('a', href=True):
            for key in keys:
                # the call was garbled in the post; looks like
                if, link['href']):
                    links.append(link['href'])  # presumably

    def ...

I cannot seem to figure this out. This is part of a larger script that I am writing. I have a list that looks like this; the fields are servername, port and program. How do I sort it to get a tally of which servers are listening on which ports?

This is my list:
[['server1', '1045', 'winlogon.exe'],
['server1', '8001', 'ctxsgsvc.exe'],
['server1', '3704', 'winlogon.exe'],
['server1', '1043', 'snmp.exe'],
['server2', '1041', 'snmp.exe'],
['server2', '1040', 'bpjava-msvc.exe'],
['server3', '2226', 'winlogon.exe'],
['server4', '1045', 'winlogon.exe'],
['server4', '1049', 'svchost.exe'],
['server5', '1048', 'clussvc.exe'],
['server5', '4660', 'winlogon.exe'],
['server5', '4911', 'winlogon.exe'],
['server6', '2226', 'winlogon.exe'],
['server6', '1045', 'winlogon.exe'],
['server6', '4998', 'unsecapp.exe'],
['server7', '4001', 'winlogon.exe']]

This is what I want:

1045 is open on server1, server4
8001 is open on server1
2226 is open on server3, server6

So far this is what I have but it doesn't quite work:

allports = list(set(total.keys()))
lines = total.values()

for port in allports:
    for line in lines:
        if line[1] == port:
            data[port] = line[0]
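A dictionary keyed by port collects this directly, and collections.defaultdict saves the "is the key there yet" dance. A sketch over a slice of the list above:

```python
from collections import defaultdict

rows = [['server1', '1045', 'winlogon.exe'],
        ['server1', '8001', 'ctxsgsvc.exe'],
        ['server3', '2226', 'winlogon.exe'],
        ['server4', '1045', 'winlogon.exe'],
        ['server6', '2226', 'winlogon.exe']]

by_port = defaultdict(list)
for server, port, program in rows:
    by_port[port].append(server)  # missing keys start out as empty lists

for port in sorted(by_port):
    print('%s is open on %s' % (port, ', '.join(by_port[port])))
# 1045 is open on server1, server4
# 2226 is open on server3, server6
# 8001 is open on server1
```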

I am connecting to Active Directory using python-ldap to query some information. I am also using getpass to get my password used to bind via ldap. This works just fine.

What I am trying to do is use my currently logged-on username/password to bind to Active Directory so I don't have to type in my password all the time. Does anyone know how to do this?

Not quite sure what you are looking for, but take a look at ClientForm.

OK, this may be a dumb question but I'll ask anyway. I am starting to see the benefits of using classes, most notably code reuse, inheritance and overloading. Should I still be writing functions in my scripts, or do classes make them obsolete? Is there a rule of thumb that if your script has, say, 20 lines or more you should write a class?

Well, I don't think I'm ready for wxPython, so I'm going to try threading. I am also looking at the code for "", which is along the lines of what I want to do. If someone out there is curious too, it's a simple Google search away.

Thanks for the info guys!

Using raw_input() would be way easy, but my script updates every 5 minutes, and I am not sure the script would be able to update while it's waiting on user input.

I have written a script that scrapes a particular website and returns the date of an article, a description and a url. I want to display this information on the console as clickable text, so that when I click the text/link called "site" it opens the article's url in my default browser.

I have been searching Google for the best way to do this but have not found much. Should I use curses? If so, I am not sure which curses module I should use. Or is there a better/easier way?

Any ideas?

My bad. I was running the code from IDLE and kept getting a 'RuntimeError: maximum recursion depth exceeded' error message. I am not quite sure why, but it works from the console. Thanks!

I am trying to run the following screen scraping script but it's not displaying any output. Can someone tell me what I'm doing wrong?

from BeautifulSoup import BeautifulSoup
import urllib

url = ''

doc = urllib.urlopen(url).read()
soup = BeautifulSoup(doc)
tags = soup.findAll('p')
for tag in tags:
    addate = tag.contents[0]
    path = tag.contents[1].attrs[0][1]
    desc = tag.contents[2]  # this line was cut off in the post; the index is a guess
    print addate, path, desc

For my first script, I thought I didn't do too bad. This makes much more sense to me now. Simply adding the extra print statements, the found flag and the returns from the datecheck function really helped!


I am still reading the Learning Python O'Reilly book and am not sure of the best way to approach my problem.

Given c:\dir1\dir2\dir3.

I want to zip all files in dir3 into a single zip file if those files are older than 30 days. If all files in dir3 are older than 30 days, I want to zip the directory itself, and then keep doing this recursively until I reach the top.

This is what I have so far, but I don't know where to interrupt os.walk to work with files in a directory. I hope this makes sense.

import os, os.path, stat, time
from datetime import date, timedelta

dirsNotUsed = []

def getdirs(basedir, age):
    for root, dirs, files in os.walk(basedir):
        basedate, lastused = datecheck(root, age)
        if lastused < basedate:  # gets directories untouched for (age) days
            pass  # body lost in the original post

def datecheck(root, age):
    basedate = - timedelta(days=age)
    used = os.stat(root).st_mtime  # st_mtime=modified, st_atime=accessed
    year, month, day = time.localtime(used)[:3]  # localtime yields (year, month, day)
    lastused = date(year, month, day)
    return basedate, lastused

def archive():
    pass  # not written yet

def main():
    basedir = raw_input('Choose directory to scan: ')
    age = raw_input('Only scan files older than... (days): ')
    getdirs(basedir, int(age))

if __name__ == '__main__':
    main()