My custom error pages (via .htaccess) work for browser requests, but when I print an error status code from a CGI script, it bypasses .htaccess. I can't see any problem with just reading in an "error page" and printing it to STDOUT...

But I have to use a Content-Type: text/html header then... Is it OK to output a page with both a Status: 404 header and a Content-Type header? If so, is it just an HTML page with an otherwise meaningless header line before the Content-Type line?
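In other words, something like this (just a rough sketch; "404.html" stands in for whatever my actual error page file is):

#!/usr/bin/perl
# Rough sketch of the approach described above: send a Status header
# before the Content-Type header, then print the custom error page.
# The file name "404.html" is only an assumed example.
use strict;
use warnings;

print "Status: 404 Not Found\n";
print "Content-Type: text/html\n\n";

open my $fh, '<', '404.html' or die "Cannot open error page: $!";
print while <$fh>;
close $fh;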


The status code should be on the first line of the HTTP response headers. You must also include a Content-Type line in the header. If you are using the CGI module, you can use the cgi_error method to get any errors the script reports and include them in the headers. See the CGI documentation. See also:

http://tools.ietf.org/html/rfc2616#section-6.1
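For example, roughly (a sketch only; the POST_MAX limit is an arbitrary value chosen so that an oversized POST gives cgi_error something to report):

use strict;
use warnings;
use CGI;

# Arbitrary example limit so an oversized POST makes cgi_error return something.
$CGI::POST_MAX = 100 * 1024;

my $q = CGI->new;

if (my $error = $q->cgi_error) {
    # cgi_error returns a string such as "413 Request entity too large",
    # which can be passed straight back as the Status header.
    print $q->header(-status => $error, -type => 'text/html'),
          $q->start_html('Request Error'),
          $q->h1($error),
          $q->end_html;
    exit;
}

print $q->header(-type => 'text/html'), "Request OK\n";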

Hmm... since I found out that the query string is just an environment variable, I only use CGI::Carp; everything else is plain print statements.
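i.e. something like this (a naive sketch; the "name" parameter is made up and no URL-decoding is done):

#!/usr/bin/perl
use strict;
use warnings;

# The query string arrives in the QUERY_STRING environment variable.
my $query = $ENV{QUERY_STRING} || '';

# Very naive parsing for illustration only; %XX escapes are not decoded here.
my %params = map { split /=/, $_, 2 } grep { /=/ } split /[&;]/, $query;

print "Content-Type: text/html\n\n";
print "<p>name = ", ($params{name} || ''), "</p>\n";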

If the browser sees the error status code, will it not just spit out a generic error page regardless of what follows? Next time I upload a script I will test with both... The thing with scripts is that they bypass .htaccess... it would be bad if they didn't!

Each browser is free to interpret the status line of an HTTP response and do what it wants with it. In my experience, Mozilla 1.5 pretty much ignores it. IE 6 prints a generic error page.

Try this script:

use CGI;

my $q = CGI->new;
print $q->header(-status => '404', -type => 'text/html'),
      $q->start_html(-title => '404 Page Not Found'),
      '<h1>500 Internal Server Error. The webmaster is an idiot!</h1>',
      $q->end_html;

Mozilla 1.5 totally ignores the status and prints out what you see above quite literally. IE6 reads the status and prints an error page.

IE 6 prints a generic error page.

>_<

I guess if they're not important, I'll just read my error page and output that. It would have been nice if the .htaccess rules were obeyed... but only in this case... I'm certainly glad I can use .htaccess to easily deny access to "system" folders from the outside world, and then easily get data out of them on the inside.

I'm no accomplished hacker, but from what I gather .htaccess files are not hard to get past using scripts/programs that fake HTTP request headers. If you have data you want to protect, the best place for it is above the www root. Your script can still get to any data stored in folders above the www root, but hackers can't unless they actually hack into your web site account, in which case you're dead anyway.
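Something along these lines (the paths here are hypothetical; adjust them to your own account layout):

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical layout: the CGI script lives under /home/user/www,
# while the protected data sits in /home/user/private, which the
# web server never serves directly.
my $data_file = '/home/user/private/data.txt';

print "Content-Type: text/plain\n\n";

open my $fh, '<', $data_file or die "Cannot open $data_file: $!";
print while <$fh>;
close $fh;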

Well, that's certainly better! I didn't realise I'd be able to access files up there without using a different file access method, but it seems as simple as (or even simpler than) what I was doing before.

Cool stuff, I will be moving my system folder there :P

You must be fairly new to internet/CGI programming, but it seems like you know programming in general pretty well.
