snippsat 661 Master Poster

Use string formatting

>>> value = 1000000000000
>>> n = "{:,}".format(value)
>>> print(n)
1,000,000,000,000
>>> # Back to a number
>>> float(n.replace(',', ''))
1000000000000.0
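On Python 3.6+ an f-string takes the same `,` format spec, so the round trip can also be written as:

```python
value = 1000000000000
n = f"{value:,}"                   # same ',' format spec as str.format
print(n)                           # 1,000,000,000,000
print(float(n.replace(',', '')))   # back to a number: 1000000000000.0
```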
snippsat 661 Master Poster

It's started (:
Find some videos that are interesting, and feel free to post links or discuss them here.

I was looking forward most to David Beazley's talks.
Live coding with concurrency as the topic; it doesn't get much better than this.
David Beazley - Python Concurrency From the Ground

The other main talk David Beazley has is about modules and packages.
Maybe not the most interesting topic, and I haven't watched it all (it's long), but I guess it will be good.
David Beazley - Modules and Packages: Live and Let Die!

snippsat 661 Master Poster

because my anaconda distribution doesn't seem to have linprog (?)

You can use pip install or conda install (from the Scripts folder) to get new packages into Anaconda.
For this it's best to use conda install scikit-learn; then you get all the new packages that scikit-learn needs.
It looks like this when running conda install scikit-learn.

The following packages will be downloaded:   

package                    |            build
    ---------------------------|-----------------
    conda-3.10.0               |           py27_0         207 KB
    conda-env-2.1.3            |           py27_0          54 KB
    nose-1.3.4                 |           py27_1         233 KB
    numpy-1.9.2                |           py27_0        23.2 MB
    requests-2.6.0             |           py27_0         590 KB
    scikit-learn-0.16.0        |       np19py27_0         3.5 MB
    scipy-0.15.1               |       np19py27_0        71.3 MB
    ------------------------------------------------------------
                                           Total:        99.0 MB

As you see, you get scipy version 0.15.1.

You can even create a virtual environment with all the stuff you need.
conda create -n test scikit-learn
This creates a folder test (a virtual environment) that has Python and scikit-learn installed.

Gribouillis commented: great info +14
snippsat 661 Master Poster

Making a hello world movie with the great MoviePy.
The result can be seen here.

This makes a 10 second long video clip showing "Hello World", using the H264 codec.

import moviepy.editor as moviepy

hello_world = moviepy.TextClip('Hello World!',font="Amiri-bold", fontsize=100, color='blue', size=(800,600))
hello_world = hello_world.set_duration(10)
hello_world.write_videofile('hello_world.avi', fps=24, codec='libx264')

Here a circle fades out to the text "The End".

from moviepy.editor import *
from moviepy.video.tools.drawing import circle

clip = VideoFileClip("hello_world.avi", audio=False).\
           subclip(0,10).\
           add_mask()
w,h = clip.size

# The mask is a white circle with vanishing radius r(t) = 800 - 200*t
clip.mask.get_frame = lambda t: circle(screensize=(clip.w,clip.h),
                                       center=(clip.w/2,clip.h/4),
                                       radius=max(0,int(800-200*t)),
                                       col1=1, col2=0, blur=4)

the_end = TextClip("The End", font="Amiri-bold", color="red",
                   fontsize=200).set_duration(clip.duration)
final = CompositeVideoClip([the_end.set_pos('center'),clip],
                           size =clip.size)
final.write_videofile('the_end.avi', fps=24, codec='libx264')
snippsat 661 Master Poster

psutil is good.
Here is a run on my Windows machine and on Mint (in VirtualBox).

>>> import psutil
>>> psutil.disk_partitions()
[sdiskpart(device='C:\\', mountpoint='C:\\', fstype='NTFS', opts='rw,fixed'),
 sdiskpart(device='D:\\', mountpoint='D:\\', fstype='', opts='cdrom'),
 sdiskpart(device='E:\\', mountpoint='E:\\', fstype='NTFS', opts='rw,fixed'),
 sdiskpart(device='F:\\', mountpoint='F:\\', fstype='UDF', opts='ro,cdrom'),
 sdiskpart(device='G:\\', mountpoint='G:\\', fstype='NTFS', opts='rw,fixed')]
>>> psutil.disk_usage('C:\\')
sdiskusage(total=255953203200L, used=177126027264L, free=78827175936L, percent=69.2)

Mint:

>>> import psutil
>>> psutil.disk_partitions()
[partition(device='/dev/sda1', mountpoint='/', fstype='ext4', opts='rw,errors=remount-ro')]
>>> psutil.disk_io_counters()
iostat(read_count=32177, write_count=6804, read_bytes=833645568, write_bytes=317702144, read_time=123880, write_time=124128)
snippsat 661 Master Poster

Here are the different ways,
and also what I would call the preferred way these days: Requests.

Python 2:

from urllib2 import urlopen

page_source = urlopen("http://python.org").read()
print page_source

Python 3:

from urllib.request import urlopen

page_source = urlopen('http://python.org').read().decode('utf_8')
print(page_source)

For Python 3, to get str output and not bytes, we need to decode from UTF-8.

Here with Requests, which works for Python 2 and 3:

import requests

page_source = requests.get('http://python.org')
print(page_source.text)

Basic web scraping: we read the page in with Requests and parse it with BeautifulSoup.

import requests
from bs4 import BeautifulSoup    

page_source = requests.get('http://python.org')
soup = BeautifulSoup(page_source.text)
print(soup.find('title').text) #--> Welcome to Python.org
snippsat 661 Master Poster

To make some further improvements:
all the code is now in functions,
and the explanations are moved into the functions so they work as docstrings (then it's possible to use help() on the functions).

I have removed all the global statements, which are not good at all.
Functions should be isolated code that takes arguments and returns a result.
This way it's easy to test a single function,
and you don't have to worry that something magically appears in it from global space.

range(len(something)) is used too much in Python; it's better to use enumerate().
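As a tiny illustration of the range(len()) point (the list contents here are just an example):

```python
pieces = [5, 0, 7]

# instead of: for i in range(len(pieces)): print(i, pieces[i])
for i, value in enumerate(pieces):
    print(i, value)   # index and value in one step, no manual indexing
```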

import random, sys

def board():
    '''Make a matrix board of random numbers.'''
    numbers = list(range(16))
    random.shuffle(numbers)
    return [numbers[i:i+4] for i in range(0, 16, 4)]

def zero(board):
    '''Find where the zero (empty space) is.'''
    for x, row in enumerate(board):
        for y, value in enumerate(row):
            if value == 0:
                return (x, y)
    return None

def draw_board(board):
    '''Draw the board.'''
    print('\n\t+-------+-------+-------+-------|')
    for row in board:
        for value in row:
            if value == 0:
                print('\t|  XX', end=' ')
            else:
                print('\t|  ' + '{:02d}'.format(value), end=' ')
        print('\n\t+-------+-------+-------+-------|')

def ask_number(board):
    '''Ask for the number to move.'''
    num = input('\nplease type the number of the piece to move : ( q ) to quit  ')
    if num in ('q', 'Q'):
        print('\n\ngame over  ')
        sys.exit()
    num = int(num)
    piece = ()
    for i, row in enumerate(board):
        for j, value in enumerate(row):
            if num == value:
                piece = (i, j)
    return piece, num

def game():
    '''Run the game logic'''
    matrix …
snippsat 661 Master Poster

print link['title'] what is title? The alt of the <img> tag?

Have you looked at Firebug or Chrome DevTools?
I gave you links to them in an earlier post.
With them it's easy to see what the title of the <img> is.

except KeyError: what does keyError mean here? What the keyword keyError is for?

Not all <img> tags on this page have a title attribute, so it throws a KeyError.
This is the source code that gets called in BeautifulSoup.

def __getitem__(self, key):
    """tag[key] returns the value of the 'key' attribute for the tag,
       and throws an exception if it's not there."""
    return self.attrs[key]

With except KeyError: pass, we just ignore those images.
We can be more specific with class="col-md-6",
so it only searches for names on the images we need.
Then it will not throw an error.

from bs4 import BeautifulSoup
import urllib2

url = 'http://www.thefamouspeople.com/singers.php'
html = urllib2.urlopen(url) # Do not use read()
soup = BeautifulSoup(html)
tag_row = soup.find_all('div', {'class':'col-md-6'})
for item in tag_row:
    print item.find('img')['title']
snippsat 661 Master Poster

David W, it's about time to learn string formatting :)
Splitting up with , and + something + is not so nice.

print('"{}".istitle() is {}'.format(item, item.istitle()))

Python String Format Cookbook

string formatting specifications(Gribouillis)

snippsat 661 Master Poster

So now i want my script to find every word that start with "A" in that url page and print it for me. Then i should find a way to ask my crawler just save those words starting with "A" that are singers names.
Very difficult!!!

That would be a nightmare, and you would have to clean up a lot of rubbish text.
This is just one "word" that starts with A: Ajax.Request(,
which I saw when I quickly looked at the source you get from that page.

You have to find a tag that gives you the info you want.
So the <img> tag with its title attribute will give you a fine list.

from bs4 import BeautifulSoup
import urllib2

url = 'http://www.thefamouspeople.com/singers.php'
html = urllib2.urlopen(url) #Do not use read()
soup = BeautifulSoup(html)
link_a = soup.find_all('img')
for link in link_a:
    try:
        print link['title']
    except KeyError:
        pass

"""Output--> here just 3 names before it changes to B
Ashlee Simpson
Avril Ramona Lavigne
Axl Rose
Barbra Streisand
Barry Manilow
Barry White
"""
snippsat 661 Master Poster

he's gone too far=D

Yes of course, to make it a great humoristic read.
Regex can be OK to use sometimes, like when you only need a single text/value.

Both BeautifulSoup and lxml have built-in support for regex.
Sometimes it's OK to use regex as a helper for the parser; when parsing dynamic websites
you can get a lot of rubbish text.

snippsat 661 Master Poster

Use regular expressions Click Here

No no no, just to make it clear :)
I have to post this link again.
Use a parser: BeautifulSoup or lxml.

from bs4 import BeautifulSoup

html = '''\
<head>
  <title>Page Title</title>
</head>
<body>
  <li>Text in li 1</li>
  <li>Text in li 2</li>
</body>
</html>'''

soup = BeautifulSoup(html)
tag_li = soup.find_all('li')
print tag_li
for tag in tag_li:
    print tag.text

"""Output-->
[<li>Text in li 1</li>, <li>Text in li 2</li>]
Text in li 1
Text in li 2
"""
Slavi commented: Great read, but this 'Every time you attempt to parse HTML with regular expressions, the unholy child weeps the blood of virgins.. he's gone too far=D +6
snippsat 661 Master Poster

so why it didn't print the url of each website into output?!

Because this is a dynamic site using JavaScript, jQuery...
The problem is that the JavaScript gets evaluated by the DOM in the browser.
To get the links we need something that automates browsers, like Selenium.

I can show you one way; here I also use PhantomJS, to avoid loading a full browser.

from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.PhantomJS(executable_path='C:/phantom/phantomjs')
driver.set_window_size(1120, 550)
driver.get('https://duckduckgo.com/?q=3D&t=canonical&ia=meanings')
page_source = driver.page_source
soup = BeautifulSoup(page_source)
link_a = soup.find_all('a')
for link in set(link_a):
    if 'http' in repr(link):
        try:
            print link['href']
        except KeyError:
            pass

Output: here are the first 6 links out of the 100 links.

http://3d.si.edu/
https://en.wikipedia.org/wiki/3-D_film
http://3d.about.com/
http://www.boxofficemojo.com/genres/chart/?id=3d.htm
http://www.urbanfonts.com/fonts/3d-fonts.htm
http://www.3dcontentcentral.com/

This is more advanced web scraping,
and you need to study (understand) the site and know which tools to use.

Gribouillis commented: very good help +14
snippsat 661 Master Poster

Change the last part to this.

lst = []
for k in d:
    high = int(max(d[k]))
    lst.append((k, high))

score_lst = sorted(lst, key=lambda tup: tup[1], reverse=True)
for name,score in score_lst:
    print('{} {}'.format(name, score))

It's OK to throw away the lambda and use itemgetter, as shown by Slavi.
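For reference, the itemgetter version of the same sort might look like this (same example data as above):

```python
from operator import itemgetter

lst = [('Tom', 250), ('Kent', 150), ('Paul', 500)]
# itemgetter(1) picks the score out of each (name, score) tuple
score_lst = sorted(lst, key=itemgetter(1), reverse=True)
for name, score in score_lst:
    print('{} {}'.format(name, score))
```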

snippsat 661 Master Poster

Use the new bs4; do not call the old BeautifulSoup.
Do not use read(); BeautifulSoup detects the encoding and converts to Unicode.

As mentioned, you need to take out the href attributes,
and you must learn to study a webpage with Firebug or Chrome DevTools.
Then you see that you only need addresses that start with http and have an href attribute.

from bs4 import BeautifulSoup # Use bs4
import urllib2

url = urllib2.urlopen('http://www.python.org') # Do not call read()
soup = BeautifulSoup(url)
with open('python-links.txt.', 'w') as f:
    for link in soup.find_all('a'):
        if link.get('href', '').startswith('http'):  # get() avoids KeyError for <a> without href
            f.write('{}\n'.format(link['href']))
snippsat 661 Master Poster

As Andrae posted, but you need to close the file object.

f = open('url.txt', 'w') #Opens the file to write text
f.write(url) #Writes the file to the opened text file
f.close()

Or the best approach is to use with open(); then there is no need to close the file object.

import urllib

url = 'http://www.python.org'
text = urllib.urlopen(url).read()
with open('url.txt', 'w') as f:
    f.write(text)
snippsat 661 Master Poster
>>> answer = ('7', 'Q', '5', 'H', '9', '4', '3', '8', 'L') 
>>> print(' '.join(answer))
7 Q 5 H 9 4 3 8 L
>>> help(''.join)
Help on built-in function join:

join(...) method of builtins.str instance
    S.join(iterable) -> str

    Return a string which is the concatenation of the strings in the
    iterable.  The separator between elements is S.
snippsat 661 Master Poster

Hello world on the wall?
This is a rewrite of some code I did for a "99 bottles of beer" challenge.
It uses the Google Image API to take out some random images,
and IPython Notebook to show the result.
Here are some results.
Hello world
Hello World Python
Hello World Linux

snippsat 661 Master Poster

Don't call both functions main; use names that make sense.

import random

def generate_numbers():
    one = 1
    thirteen = 13
    subtract = 1
    number_gen = open('numbers.txt', 'w')
    for number in range(one,thirteen,subtract):
        numbers = random.randint(1,100)
        number_gen.write(str(numbers) + ' ')
    number_gen.close()

def read_numers(numb):
    numbers_read = open(numb)
    numbers = [float(n) for n in numbers_read.read().split()]
    numbers_read.close()
    for n in numbers:
        print(n)

generate_numbers()
numb = 'numbers.txt'
read_numers(numb)

A more pythonic approach.

import random

def generate_numbers():
    with open('numbers.txt', 'w') as f_out:
        f_out.write(' '.join(str(random.randint(1,100)) for i in range(12)))

def read_numers(numb):
    with open(numb) as f:
        numbers = [float(n) for n in f.read().split()]
        for n in numbers:
            print(n)

generate_numbers()
numb = 'numbers.txt'
read_numers(numb)
snippsat 661 Master Poster

Looks like you need AVbin.
Another option is to use your favorite player.

import subprocess

# pick an external mp3 player you have
sound_program = "path to player"
# pick a sound file you have
sound_file = "path to mp3"
subprocess.call([sound_program, sound_file])
snippsat 661 Master Poster

Here is a big hint.

>>> import random
>>> s = 'i am a programmer'
>>> ''.join(random.sample(s, len(s)))
'm mraep mriago ra'
>>> #no whitespace
>>> s_1 = ''.join(s.split())
>>> ''.join(random.sample(s_1, len(s_1)))
'omagripramamer'
snippsat 661 Master Poster

With json
ValueError: Expecting property name enclosed in double quotes
is a royal pain, if you want to save string objects!

Can you give an example?
No problem in my simple test here.

import json

record = "hello world" + ' test'
with open("my_file.json", "w") as f_out:
    json.dump(record, f_out)
with open("my_file.json") as f:
    saved_record = json.load(f)

print(saved_record)
print(repr(saved_record))
print(type(saved_record))

"""Output-->
hello world test
u'hello world test'
<type 'unicode'>
"""
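For completeness, json round-trips dicts and lists the same way, which is usually what you want to save; a quick sketch reusing the same file name:

```python
import json

record = {'name': 'Tom', 'score': 250}
with open('my_file.json', 'w') as f_out:
    json.dump(record, f_out)
with open('my_file.json') as f:
    saved_record = json.load(f)

print(saved_record == record)
```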
snippsat 661 Master Poster

A look at Dataset, an easy and Pythonic way to create a database.
Other candidates in this category I will mention are pyDAL and Peewee.
I did some tests of Dataset in this post to look at.

So here is a common task, some web scraping of food recipes (just a task I helped someone with).
A task you may not normally use a database for (too much work).

So let's see how Dataset works for this task.
Requirements: Requests and BeautifulSoup.
First, clean code without Dataset.

import requests
from bs4 import BeautifulSoup

start_page = 1
end_page = 3
for page in range(start_page, end_page+1):
    url = 'http://www.taste.com.au/search-recipes/?q=&%3Btag[]=13&%3Btag[]=28&%3B=&sort=title&order=asc&page={}'.format(page)
    url_page = requests.get(url)        
    soup = BeautifulSoup(url_page.text)
    tag_div = soup.find_all('div', {'class': "content-item tab-content current"})[0]\
    .find_all('div', {'class': 'story-block'})
    print('--- Page {} ---'.format(page))
    for content in tag_div:
        print(url_page.status_code, content.find('a')['href'])

In the code snippet I bring in Dataset and take out some data.

snippsat 661 Master Poster

for sses in assesses, it output me 1 instead 2! What is the secret?

In your first post you don't have the "assesses" and "sses" test.
str.count() doesn't count overlapping occurrences.
So if you need to count overlapping occurrences, you can not use str.count().
The help and docs do mention that it returns non-overlapping occurrences.

>>> help(str.count)
Help on method_descriptor:

count(...)
    S.count(sub[, start[, end]]) -> int

    Return the number of non-overlapping occurrences of substring sub in
    string S[start:end].  Optional arguments start and end are interpreted
    as in slice notation.

To fix my regex solution to count overlapping occurrences:

>>> import re
>>> text = "assesses"
>>> sub = "sses"
>>> len(re.findall(r'(?={})'.format(sub), text))
2
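If you'd rather avoid regex, a small helper with str.find() counts overlapping occurrences as well; a sketch (the function name is my own):

```python
def count_overlapping(text, sub):
    count = start = 0
    while True:
        # step one character past each hit so overlaps are found too
        start = text.find(sub, start) + 1
        if start > 0:
            count += 1
        else:
            return count

print(count_overlapping('assesses', 'sses'))  # 2
```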
snippsat 661 Master Poster

can I make?:

I think it should be fine; I have not read your assignment yet.
Here is a test run.

#Python 3.4.2 
>>> text = input('Enter some text: ')
Enter some text: trans panamanian bananas
>>> subtext = input('Enter subtext to find: ')
Enter subtext to find: an
>>> text.count(subtext)
6

>>> text = input('Enter some text: ')
Enter some text: assessement
>>> subtext = input('Enter subtext to find: ')
Enter subtext to find: sse
>>> text.count(subtext)
2
snippsat 661 Master Poster

@snippsat I like the concept, although being based on top of sqlalchemy may have a performance cost for dataset.

Yes, I agree there may be some performance cost,
but for many small projects where you just want an easy way to get data in and out, I think it can be OK.

With a little more work, freezefiles could be used to export pickle files instead of json or csv, making them trivial to load from python.

Yes, I was looking a little into this freezefiles option.
I like json much more than pickle, and json is in the standard library.
So there is no need to mix in pickle.

Here is a test with freezefiles.
So here I make the json file Score_table.json.

result = db['Score_table'].all()
dataset.freeze(result, format='json', filename='Score_table.json')

Load Score_table.json back in, and extract some data.

import json

with open("Score_table.json") as j:
    score_table = json.load(j)

Test:

>>> score_table
{u'count': -1,
 u'meta': {},
 u'results': [{u'id': 1, u'name': u'Tom', u'score': 250},
              {u'id': 2, u'name': u'Kent', u'score': 150},
              {u'id': 3, u'name': u'Paul', u'score': 500}]}

>>> score_table['results'][0]
{u'id': 1, u'name': u'Tom', u'score': 250}
>>> score_table['results'][1].keys()
[u'score', u'id', u'name']
>>> score_table['results'][1].values()
[150, 2, u'Kent']

>>> print('{} has the highest score of: {}'.format(score_table['results'][2]['name'], score_table['results'][2]['score']))
Paul has the highest score of: 500

export one JSON file per user
You can create one file per row by setting mode to “item”:

Test this out.

result = db['Score_table'].all()
dataset.freeze(result, format='json', filename='users/{{name}}.json', mode='item')

So this creates a …

snippsat 661 Master Poster

A test with dataset, a database as simple as it gets.
So here I have a player list that I store in an SQLite (comes with Python) database.

import dataset

player_lst = [
    {'name': 'Tom', 'score': 250},
    {'name': 'Kent', 'score': 150},
    {'name': 'Paul', 'score': 500}
    ]

db = dataset.connect('sqlite:///mydatabase.db')
table = db['Score_table']
for item in player_lst:
    table.insert(item)

Test it out by taking out some data.

>>> users = db['Score_table'].all()
>>> for user in db['Score_table']:
...     print user
...     
OrderedDict([(u'id', 1), (u'score', 250), (u'name', u'Tom')])
OrderedDict([(u'id', 2), (u'score', 150), (u'name', u'Kent')])
OrderedDict([(u'id', 3), (u'score', 500), (u'name', u'Paul')])

>>> table.find_one(name='Tom')
OrderedDict([(u'id', 1), (u'score', 250), (u'name', u'Tom')])
>>> table.find_one(name='Tom')['score']
250

It also has all the power of SQL queries if needed.

>>> [i for i in db.query("SELECT MAX(score) FROM Score_table")]
[OrderedDict([(u'MAX(score)', 500)])]
>>> [i for i in db.query("SELECT MAX(score) FROM Score_table")][0]['MAX(score)']
500
Gribouillis commented: cool link +14
snippsat 661 Master Poster
>>> text = "trans panamanian bananas"
>>> text.count('an')
6

--

>>> import re
>>> len(re.findall(r'an', text))
6
snippsat 661 Master Poster

this is the HTML line which returns as soup - I'm after the 1.41 only - which
I hope to return as valueTable

Using .next_sibling can be better.

from bs4 import BeautifulSoup

html = '''\
<td class="yfnc_tablehead1" width="74%">Price/Book (mrq):</td><td class="yfnc_tabledata1">1.41</td>'''

soup = BeautifulSoup(html)
tag = soup.find('td', {'class': 'yfnc_tablehead1'})

Test with parent and nextSibling.

>>> tag
<td class="yfnc_tablehead1" width="74%">Price/Book (mrq):</td>
>>> tag.parent
<td class="yfnc_tablehead1" width="74%">Price/Book (mrq):</td><td class="yfnc_tabledata1">1.41</td>
>>> tag.parent.text
'Price/Book (mrq):1.41'    

>>> tag.nextSibling
<td class="yfnc_tabledata1">1.41</td>
>>> tag.nextSibling.text
'1.41'
>>> float(tag.nextSibling.text) + 1
2.41
snippsat 661 Master Poster

You should use after().
Tkinter is running an infinite loop (the event loop).
When you use a while loop and time.sleep(), this can lock up (block) the GUI.

That's why all GUI toolkits have some kind of timer/schedule/thread facility
that doesn't interfere with the running event loop.

In wxPython, which I have used most, there are wx.Timer and wx.CallLater.
Here is an example I found with Tkinter that uses this method.

import Tkinter as tk

class ExampleApp(tk.Tk):
    def __init__(self):
        tk.Tk.__init__(self)
        self.label = tk.Label(self, text="", width=10)
        self.label.pack()
        self.remaining = 0
        self.countdown(10)

    def countdown(self, remaining = None):
        if remaining is not None:
            self.remaining = remaining    
        if self.remaining <= 0:
            self.label.configure(text="time's up!")
        else:
            self.label.configure(text="%d" % self.remaining)
            self.remaining = self.remaining - 1
            self.after(1000, self.countdown)

if __name__ == "__main__":
    app = ExampleApp()
    app.mainloop()
snippsat 661 Master Poster

the "Price Book or class="yfnc_tabledata1" is in the return respData which is the source code downloaded from yahoo.ca.

OK, I understand; it's just that I can't find it when I search through "respData" or the url.

snippsat 661 Master Poster

I can't find what you are searching for in respData.
Please also post your imports.

import urllib.request, urllib.parse

This is the address you get data from.

>>> resp.geturl()
'https://ca.finance.yahoo.com/lookup?s=basics'

Do you find Price Book or class="yfnc_tabledata1" in the url or in the returned respData?

Some notes: this site uses JavaScript heavily, and is not an easy site to start with.
That means you may have to use another method than urllib to read the site.
I use Selenium to read sites like this.
Then I get the executed JavaScript too, and can parse with BeautifulSoup or lxml.

Regex to parse HTML can be a bad choice;
it can work in some cases, but using a parser (BeautifulSoup) is the first choice.
I usually post this link on why not to use regex.

Gribouillis commented: good tips +14
snippsat 661 Master Poster

The Pillow fork does more than just bugfix the original PIL.
It now comes with new features and several improvements.
So it can be smart to get Pillow instead of PIL.

snippsat 661 Master Poster

Is it possible we have both code in the one?For future GUI.

That should not be a problem, if I understand you right.

snippsat 661 Master Poster

You can not open 2 exe files in binary mode and try to merge them into 1 file.
Exe files are standalone programs that have to be opened separately.

Here is a demo of what I mean, with subprocess and threads to open 2 files.

#2_exe_start.py
import subprocess
import threading

def file_1():
    subprocess.call([r'C:\Windows\notepad.exe'])

def file_2():
    subprocess.call([r'C:\Windows\explorer.exe'])

f1 = threading.Thread(target=file_1)
f2 = threading.Thread(target=file_2)
f1.start()
f2.start()

So this opens both Notepad and Explorer at once.
Then I can make an exe to open these 2 files.
So here I make 2_exe_start.exe, which opens Notepad and Explorer.

#make_exe.py
from distutils.core import setup
import py2exe
import sys

def py2_exe(file_in, ico):
    dest_ico = ico.split('.')[0]
    if len(sys.argv) == 1:
        sys.argv.append('py2exe')

    # Py2exe finds most modules; here you can include/exclude modules
    includes = []
    excludes = []
    packages = []
    dll_excludes = []

    # bundle_files: 3 is most stable | bundle_files: 1 creates 1 big exe
    setup(options =\
    {'py2exe': {'compressed': 1,
                'optimize': 2,
                'ascii': 0,
                'bundle_files': 1,
                'includes': includes,
                'excludes': excludes,
                'packages': packages,
                'dll_excludes': dll_excludes,}},
                 zipfile = None,

    # Can use console(Command line) or windows(GUI)
    console = [{
              'script': file_in,
              #--| Uncomment for ico
              #'icon_resources' : [(1, ico)],
              #'dest_base' : dest_ico
               }])

if __name__ == '__main__':
    #--| The .py file you want to make exe of
    file_in = '2_exe_start.py'
    #--| Ico in same folder as .py
    ico = 'your.ico'
    py2_exe(file_in, ico)
snippsat 661 Master Poster

To fix your original code.

def translate():
    a = {"merry":"god", "christmas":"jul", "and":"och", "happy":"gott", "new":"nytt", "year":"år"}
    return a

for k, v in translate().iteritems():
    print k, v

You need to practice more on using functions.
It makes no sense to take an argument a when all you have in the function is a dictionary.
a, as mentioned, is not a meaningful variable name.

Something like this for translate.

def my_dict(user_input):
    return {
    "merry": "god",
    "christmas": "jul",
    "and": "och",
    "happy": "gott",
    "new": "nytt",
    "year": "år"
     }.get(user_input, "Not in dictionary")

def translate(word):
    return my_dict(word)

Test:

>>> translate('christmas')
'jul'
>>> translate('hello')
'Not in dictionary'
>>> translate('happy')
'gott'
>>> print '{} {} til alle i Sverige'.format(translate('merry'), translate('christmas'))
god jul til alle i Sverige
snippsat 661 Master Poster

A couple of ways.

from __future__ import print_function

def generate_n_chars(n, a):
    for i in range(n):
        print(a, end='')

def generate_n_chars_1(n, a):
    result = ''
    for i in range(n):
        result += a
    return result

Test:

>>> generate_n_chars(7, 'j')
jjjjjjj
>>> print(generate_n_chars_1(20, 'z'))
zzzzzzzzzzzzzzzzzzzz
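Worth noting: string repetition collapses both versions to a one-liner (the function name here is just for illustration):

```python
def generate_n_chars_2(n, a):
    # repetition builds the whole string at once, no Python-level loop
    return a * n

print(generate_n_chars_2(7, 'j'))   # jjjjjjj
```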
snippsat 661 Master Poster

Is this all of your code?
Several function definitions are missing here.
You never use return outside a function.

Post all of your code and the full traceback.
Or at least code we can run that gives this traceback.
The traceback always gives you the line number where the error occurs.

Here is an explanation of UnboundLocalError
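For reference, the smallest case that raises UnboundLocalError looks like this: assigning to a name anywhere in a function makes that name local, so reading it before the assignment fails.

```python
counter = 0

def bump():
    # 'counter += 1' reads counter before assigning it, but the assignment
    # already made 'counter' local to bump(), so the read raises
    counter += 1

try:
    bump()
except UnboundLocalError as err:
    print('UnboundLocalError:', err)
```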

snippsat 661 Master Poster

With enumerate().

>>> mylist1 = [1,2,3]
>>> mylist2 = ['a','b','c']
>>> for index, item in enumerate(mylist2, 2):
...     mylist1.insert(index, item)
...     
>>> mylist1
[1, 2, 'a', 'b', 'c', 3]

Flatten list approach.

def flatten(container):
    for i in container:
        if isinstance(i, list):
            for j in flatten(i):
                yield j
        else:
            yield i

def my_code(list1, sub_list):
    # insert the whole sub-list at index 2, then flatten the result
    list1.insert(2, sub_list)
    return list(flatten(list1))

if __name__ == '__main__':
    list1 = [1, 2, 3]
    sub_list = ['a','b','c']
    print my_code(list1, sub_list)
    #--> [1, 2, 'a', 'b', 'c', 3]
snippsat 661 Master Poster

A distribution such as Anaconda makes it easy to run the IPython Notebook without thinking about dependencies.

I also have Miniconda, which I run the IPython Notebook/Spyder for Python 3.4 from.
Anaconda is a self-contained environment, so it will not affect the originally installed Python.
My main version, the one I use most, is still Python 2.7.
More about it here

snippsat 661 Master Poster

A little look at concurrent.futures.
concurrent.futures has a minimalistic API for threading and multiprocessing.
Only change one word to switch between ThreadPoolExecutor (threading) and ProcessPoolExecutor (multiprocessing).
concurrent.futures is backported to Python 2.7.

A look at ProcessPoolExecutor (multiprocessing).

from __future__ import print_function
from time_code import timeit
import concurrent.futures
import time

def counter(n):
    """show the count every second"""
    for k in range(n):
        print("counting {}".format(k))
        time.sleep(1)

@timeit
def main():
    with concurrent.futures.ProcessPoolExecutor(max_workers=4) as executor:
        for i in range(20):
            executor.submit(counter, i)

if __name__ == "__main__":
    main()

On Windows, use if __name__ == "__main__": and run from the command line to see the print output.
It can work from an IDE if there is no need to see output from the print function.

So here with max_workers=4 I got a time of 55 sec.
If I spread the load over more processes, I should see a faster time.
Changing to max_workers=30, the time goes down to 19 sec.

The @timeit code I use for this.

#time_code.py
import time

def timeit(f):
    '''Timing a function'''
    def timed(*args):
        ts = time.time()
        result = f(*args)
        te = time.time()
        print('func:{!r}  {!r} took: {:.2f} sec'.format(
            f.__name__, args, te-ts))
        return result
    return timed
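The "change one word" claim can be sketched with a tiny ThreadPoolExecutor example; swap in ProcessPoolExecutor and the code is otherwise identical (the function and numbers here are just for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

# executor.map returns results in input order, like the built-in map
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(square, range(5)))

print(results)  # [0, 1, 4, 9, 16]
```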
snippsat 661 Master Poster

wooee shows that range() works fine; another good and pythonic way is to use a generator.
If we also use itertools.islice(), we can make it even more versatile.

from itertools import islice

def fib(a=7, b=18):
    yield a
    while True:
        yield b
        a, b = b, a + b

fib_numb_1 = list(islice(fib(),15))
# Slice out last 3 number
fib_numb_2 = list(islice(fib(),12,15))
# Slice out number with a step of 2
fib_numb_3 = list(islice(fib(),0,23,2))
#---
print fib_numb_1
print fib_numb_2
print fib_numb_3

'''Output-->
[7, 18, 25, 43, 68, 111, 179, 290, 469, 759, 1228, 1987, 3215, 5202, 8417]
[3215, 5202, 8417]
[7, 25, 68, 179, 469, 1228, 3215, 8417, 22036, 57691, 151037, 395420]
'''

For output without a list, of course just loop over the generator object.

for i in islice(fib(),15):
    print i

7
18
25
43
68
111
179
290
469
759
1228
1987
3215
5202
8417

To throw in a count, use enumerate().

for index, item in enumerate(islice(fib(),15), 1):
    print '{} -> {}'.format(index, item)

1 -> 7
2 -> 18
3 -> 25
4 -> 43
5 -> 68
6 -> 111
7 -> 179
8 -> 290
9 -> 469
10 -> 759
11 -> 1228
12 -> 1987
13 -> 3215
14 -> 5202
15 -> 8417
snippsat 661 Master Poster

Look at this post

EDWIN_4 commented: thanks +0
snippsat 661 Master Poster

As mentioned by chriswelborn, use Flask or Bottle.
CGI is dead in Python after PEP 3333 (WSGI).
Flask and Bottle are a layer above WSGI and 100% WSGI compliant.

You need some JavaScript to get the value from your input box.

JavaScript is not needed for this; Flask of course has stuff like this built in, with the request object.

A quick demo.
The HTML form (my_form.html):

<!DOCTYPE html>
<html lang="en">
<body>
    <h1>Enter some text:</h1>
    <h2>Text to be multiplied</h2>
    <form action="." method="POST">
        <input type="text" name="text">
        <input type="submit" name="my-form" value="GO">
    </form>
</body>
</html>

The Flask code (form_demo.py):

from flask import Flask
from flask import request
from flask import render_template
app = Flask(__name__)

@app.route('/')
def my_form():
    return render_template("my_form.html")

@app.route('/', methods=['POST'])
def my_form_post():
    text = request.form['text']
    multiply_text = text * 3
    return multiply_text

if __name__ == '__main__':
    app.run()

This runs fine on your local computer (with Flask installed: pip install Flask).
Make a folder with form_demo.py and a subfolder templates with my_form.html.
To run it, go to http://localhost:5000/ in the browser address bar.
If you write hello in the input box, you get back hellohellohello in the browser.

use (do I need my files to be hosted to a web server)? etc.

As mentioned, the code above works fine in the browser without a webserver.
If you want to share it with the world, you need a webserver.
Look into Python-friendly hosting such as Heroku, PythonAnywhere, OpenShift...

snippsat 661 Master Poster

either

I guess ungalcrys left Python a long time ago.
This is his only post, over 4 years ago; look at the dates.

snippsat 661 Master Poster

If you really wanted it in list format you could do this to it:

Yes, I agree, if that's really what BingityBongity wants,
but a plain list with no connection between values seems not right.
Just use dict.items(); then each key and value from the dict are together in a tuple.

>>> d = {'this': 3, 'that': 2, 'and': 1}
>>> d.items()
[('this', 3), ('and', 1), ('that', 2)]

Using a dict from collections.Counter is the best way.
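A short Counter sketch with the same kind of data:

```python
from collections import Counter

words = ['this', 'this', 'this', 'that', 'that', 'and']
d = Counter(words)
print(d['this'])           # 3
print(d.most_common(1))    # [('this', 3)]
```

Counter is a dict subclass, so d.items() works on it just like on a plain dict.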

snippsat 661 Master Poster

Nice, but it shortens the list if the length is not a multiple of 3.

Yes, that's right; another solution is to use map().

>>> mylist = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
>>> mylist_1 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> map(None, *[iter(mylist)]*3)
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11)]
>>> map(None, *[iter(mylist_1)]*3)
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, None)]

So here None is used as the fill value.
If this is not desirable, then the other solutions in this post are fine.
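In Python 3, map with None as the function is gone; itertools.zip_longest gives the same grouping-with-fill behavior:

```python
from itertools import zip_longest

mylist_1 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# same iterator trick as with map(None, ...); fillvalue defaults to None
groups = list(zip_longest(*[iter(mylist_1)] * 3))
print(groups)  # [(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, None)]
```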

snippsat 661 Master Poster
>>> zip(*[iter(mylist)]*3)
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11)]