Hi everyone,

This is my first time here, so if I don't do everything the way it should be done, bear with me. Anyway, for a project in my Python class I am having trouble generating a webpage with clickable links. I am using Python 2.6.5 and will eventually be hosting this on Google's AppSpot.

Anyways, let me just say this: Programming is really hard for me. I don't know what it is, but I can't put what I am thinking into code. So although these questions may seem simple, it's not because I am not trying, it's because they are just hard for me to understand.

So, here is my code so far:

import urllib2
from BeautifulSoup import BeautifulSoup
def mainPage():
	s='<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"\n'
	s +='"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd"\n'
	s +='<html><head>\n'
	s +='<title>At-A-Glance Stock Information</title>\n'
	#s +="<link rel='stylesheet' href='mystyles.css' type='text/css' />\n"
	s +='</head>\n'
	s +='<body>\n'
	s +='<a href = http://www.speedfinance.appspot.com/sprint>Sprint</a>\n'
	s +='<a href = http://www.speedfinance.appspot.com/google>Google</a>\n'
	s +='<a href = http://www.speedfinance.appspot.com/apple>Apple</a>\n'
	s +='<a href = http://www.speedfinance.appspot.com/att>AT&T</a>\n'
	s +='<a href = http://www.speedfinance.appspot.com/microsoft>Microsoft</a>\n'
	return s
	
	for name in company:
		generateMainPage(company
		

def generateMainPage(company, url):
	company = ['sprint', 'google', 'apple', 'att', 'microsoft']
	companyUrl = ['/sprint', '/google', '/apple', '/att', '/microsoft']
	s='<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"\n'
	s +='"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd"\n'
	s +='<html><head>\n'
	s +='<title>At-A-Glance Stock Information</title>\n'
	#s +="<link rel='stylesheet' href='mystyles.css' type='text/css' />\n"
	s +='</head>\n'
	s +='<body>\n'
	s +='<a href =' + link + '>'+company+'</a>\n'
	return s
def sprint():
	website = 'http://www.google.com/finance/company_news?q=NYSE:S'
	articles = urllib2.urlopen(website).read()
	soup = BeautifulSoup(articles,selfClosingTags = ["br"])
	stories = []
	for entry in soup.findAll('div', attrs = "g-section news sfe-break-bottom-16", limit = 4):
		link = str(entry.find("a")["href"])
		print link	
sprint()

def google():
	website = 'http://www.google.com/finance/company_news?q=NASDAQ:GOOG'
	articles = urllib2.urlopen(website).read()
	soup = BeautifulSoup(articles,selfClosingTags = ["br"])
	stories = []
	for entry in soup.findAll('div', attrs = "g-section news sfe-break-bottom-16", limit = 4):
		link = str(entry.find("a")["href"])
		print link	
google()

def apple():
	website = 'http://www.google.com/finance/company_news?q=NASDAQ:AAPL'
	articles = urllib2.urlopen(website).read()
	soup = BeautifulSoup(articles,selfClosingTags = ["br"])
	stories = []
	for entry in soup.findAll('div', attrs = "g-section news sfe-break-bottom-16", limit = 4):
		link = str(entry.find("a")["href"])
		print link	
apple()

def att():
	website = 'http://www.google.com/finance/company_news?q=NYSE:ATT'
	articles = urllib2.urlopen(website).read()
	soup = BeautifulSoup(articles,selfClosingTags = ["br"])
	stories = []
	for entry in soup.findAll('div', attrs = "g-section news sfe-break-bottom-16", limit = 4):
		link = str(entry.find("a")["href"])
		print link	
att()

def microsoft():
	website = 'http://www.google.com/finance/company_news?q=NASDAQ:MSFT'
	articles = urllib2.urlopen(website).read()
	soup = BeautifulSoup(articles,selfClosingTags = ["br"])
	stories = []
	for entry in soup.findAll('div', attrs = "g-section news sfe-break-bottom-16", limit = 4):
		link = str(entry.find("a")["href"])
		print link	
microsoft()

generateMainPage and mainPage are basically the same idea I am trying to implement. In words, I am trying to make each company a clickable link on the homepage that directs to another page and (currently) shows the URLs of the top news stories for that company. However, I am having trouble making it work that way.

Also, I know my code is really messy, but I am in the general set-up phase.

for name in company:
		generateMainPage(company

Something is wrong with your code: this call is missing a parameter and its closing parenthesis.

Please post a cleaner version of the code.

First, in your mainPage() function, you can't do this:

s +='<a href = http://www.speedfinance.appspot.com/microsoft>Microsoft</a>\n'
	return s
	
	for name in company:
		generateMainPage(company

You're returning before you reach the 'for' loop. The loop will never be executed, and you *should* be getting a Python indentation error, as well as a syntax error for the missing closing parenthesis.
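To illustrate with a toy function (not from your code): anything after a 'return' is dead code, because the function has already exited.

```python
def demo():
    return 'done'
    print('this line is unreachable; the function already returned')

print(demo())  # prints 'done' -- the second line never runs
```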

Instead of all the 's +=' lines, why not just make a template file called 'headers.tmpl' and print the lines from there?

<!-- File: headers.tmpl -->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html>
<head>
<title>At-A-Glance Stock Information</title>
<link rel='stylesheet' href='mystyles.css' type='text/css' />
</head>
<body>
<!-- End of headers -->

Then one for links:

<a href="http://www.speedfinance.appspot.com/sprint">Sprint</a>
<a href="http://www.speedfinance.appspot.com/google">Google</a>
<a href="http://www.speedfinance.appspot.com/apple">Apple</a>
<a href="http://www.speedfinance.appspot.com/att">AT&amp;T</a>
<a href="http://www.speedfinance.appspot.com/microsoft">Microsoft</a>

Then read the files:

def mainPage():
    print 'Content-type: text/html'
    print  # blank line ends the CGI headers

    # The trailing comma suppresses print's own newline,
    # since each line read from the file already ends with one.
    for line in open('headers.tmpl'):
        print line,

    for line in open('links.tmpl'):
        print line,

That will be much easier in the long run.

You also can't use a 'for' loop that depends on variables that haven't been defined yet. I can't see where you actually call 'mainPage()', so it should probably just be removed if 'generateMainPage()' does essentially the same thing.
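In other words, the list has to exist before the loop that uses it runs. A minimal sketch (the variable names are illustrative, not from your code):

```python
# Define the data first...
company = ['sprint', 'google', 'apple', 'att', 'microsoft']

# ...then loop over it to build the page body.
page = ''
for name in company:
    page += "<a href='/%s'>%s</a>\n" % (name, name)

print(page)
```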

It's also best to define all of your functions first and then call them, rather than calling each one immediately after its definition. Interleaving definitions and calls makes the code messy and hard to follow. Therefore:

def sprint():
    website = 'http://www.google.com/finance/company_news?q=NYSE:S'
    articles = urllib2.urlopen(website).read()
    soup = BeautifulSoup(articles, selfClosingTags=["br"])
    stories = []
    for entry in soup.findAll('div', attrs="g-section news sfe-break-bottom-16", limit=4):
        link = str(entry.find("a")["href"])
        print link
    return

def google():
    website = 'http://www.google.com/finance/company_news?q=NASDAQ:GOOG'
    articles = urllib2.urlopen(website).read()
    soup = BeautifulSoup(articles, selfClosingTags=["br"])
    stories = []
    for entry in soup.findAll('div', attrs="g-section news sfe-break-bottom-16", limit=4):
        link = str(entry.find("a")["href"])
        print link
    return

def apple():
    website = 'http://www.google.com/finance/company_news?q=NASDAQ:AAPL'
    articles = urllib2.urlopen(website).read()
    soup = BeautifulSoup(articles, selfClosingTags=["br"])
    stories = []
    for entry in soup.findAll('div', attrs="g-section news sfe-break-bottom-16", limit=4):
        link = str(entry.find("a")["href"])
        print link
    return

def att():
    website = 'http://www.google.com/finance/company_news?q=NYSE:ATT'
    articles = urllib2.urlopen(website).read()
    soup = BeautifulSoup(articles, selfClosingTags=["br"])
    stories = []
    for entry in soup.findAll('div', attrs="g-section news sfe-break-bottom-16", limit=4):
        link = str(entry.find("a")["href"])
        print link
    return

def microsoft():
    website = 'http://www.google.com/finance/company_news?q=NASDAQ:MSFT'
    articles = urllib2.urlopen(website).read()
    soup = BeautifulSoup(articles, selfClosingTags=["br"])
    stories = []
    for entry in soup.findAll('div', attrs="g-section news sfe-break-bottom-16", limit=4):
        link = str(entry.find("a")["href"])
        print link
    return

microsoft()
att()
apple()
google()
sprint()

Once your functions are all defined, you can call each one whenever you need it, and repetitive work like printing the links can be done in a quick loop:

company = ['microsoft', 'att', 'apple', 'google', 'sprint']

for name in company:
    companyURL = '/' + name
    print "<a href='%s'>%s</a>" % (companyURL, name)

That is, of course, if I understand your goal and code correctly.
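One more thought: since your five scraper functions differ only in the ticker symbol, you could collapse them into a single parameterized function. A sketch of the idea, assuming the same Google Finance URL pattern (the names `TICKERS` and `news_url` are mine, not from your code; the tickers are taken from your original URLs):

```python
# Map each company name to its ticker, taken from the original five functions.
TICKERS = {
    'sprint':    'NYSE:S',
    'google':    'NASDAQ:GOOG',
    'apple':     'NASDAQ:AAPL',
    'att':       'NYSE:ATT',
    'microsoft': 'NASDAQ:MSFT',
}

def news_url(name):
    # Build the Google Finance company-news URL for one company name.
    return 'http://www.google.com/finance/company_news?q=' + TICKERS[name]

print(news_url('sprint'))
```

Inside one `company_news(name)` function you would then do `urllib2.urlopen(news_url(name)).read()` and run the same BeautifulSoup loop you already have, instead of repeating it five times.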

Can you please mark this thread as solved, or let us know what is still unclear?