Hi, this is Devesh.

I have a Perl CGI script that downloads files from another site.

But I've got this weird problem: the script stops responding after downloading exactly 100 files (in any browser, IE or Firefox).

I am running this script on Windows Server 2003 with IIS 6.0.

Any ideas, guys?

Thanks in advance.

First random guess: you've reached the maximum number of open files you're allowed to have, because you're not closing any of them.

Can we see the script or is this going to be a guessing game?

>>> First random guess: you've reached the maximum number of open files you're allowed to have, because you're not closing any of them.

_____________________________

I don't have any file descriptors for the downloaded files.
How can I close them?

OK, before everyone gets bored with yet another round of "20 questions": do you use some predefined API to fetch the files for you, even if you're not directly responsible for opening them?

Have you read the manual for said API to see if there are some other functions you should be calling at the end?

Here is my script:

#!c:\perl\bin\perl.exe -w

# A CGI script must send its header before any other output.
print "Content-type: text/html\n\n";
print "done<br />";

#######################

use WWW::Mechanize;
use LWP::Simple;    # loaded but not actually used below

my $mech = WWW::Mechanize->new();
#$mech->agent_alias("Windows Mozilla");

my $usrnm  = "abc\@xyz.com";
my $passwd = "abc123";
print "$usrnm<br />";
print "$passwd<br />";

# Note: qw// does not interpolate, so the original qw/$usrnm $passwd/
# assigned the literal strings '$usrnm' and '$passwd'.
my ($username, $password) = ($usrnm, $passwd);

# Log in through the first form on the index page.
$mech->get("http://www.website.com/index.jsp");
die "could not open ", $mech->response->status_line unless $mech->success;

$mech->form_number(1);
$mech->set_visible($username, $password);
$mech->submit;

# Download abc100.pdf through abc500.pdf, saving each one to disk.
for (my $count = 100; $count <= 500; $count++) {
    print "$count<br /> ";

    my $purl = "http://www.website.com/servlet/Servlet?pdf=abc$count.pdf";

    $mech->get($purl, ":content_file" => "d:/Devesh/abc$count.pdf");
    print "downloading abc$count.pdf<br />";
    #print $mech->content;
}

# Log out.
$mech->get("http://www.website.com/index.jsp");
$mech->follow_link(text_regex => qr/Logout/i);
#print $mech->content;

###########################

After downloading exactly 100 files, every file downloaded after that is corrupted. Past the first 100, all the files come out the same size, and when I try to open them, Acrobat Reader says they are corrupted.

Thanks for the suggestions.

try this maybe:

for (my $count = 200; $count <= 500; $count++) {

which will start the downloads at 200 instead of 100. See what happens.
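
Also worth checking: if the bad files all come out the same size, the server is probably sending back an error or login page instead of the PDF once something expires. Inside the loop, something like this (just a sketch; it assumes the server sets an application/pdf Content-Type on real PDFs) would show what is actually coming back:

my $res = $mech->get($purl, ":content_file" => "d:/Devesh/abc$count.pdf");
if ($res->content_type ne 'application/pdf') {
    # The server sent something other than a PDF; stop and inspect it.
    print "abc$count.pdf is not a PDF, server sent ", $res->content_type, "<br />";
    last;
}

If the Content-Type flips to text/html right after file 100, the problem is on the server side, not in your loop.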

Tried different ranges, but the same problem.
Ranges tried: 200-500, 235-500, 300-450.

Grasping at straws... try sleep():

for (my $count = 100; $count <= 500; $count++) {
    print "$count<br /> ";

    my $purl = "http://www.website.com/servlet/Servlet?pdf=abc$count.pdf";

    $mech->get($purl, ":content_file" => "d:/Devesh/abc$count.pdf");
    print "downloading abc$count.pdf<br />";
    sleep(1);    # <-- the added line: pause a second between requests
}

Does not make any difference.

OK, sorry, but I have no clue what is causing the problem you are having.

Problem solved!

I modified the script to log out after every 100 downloads and log in again. Now it is working fine.
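
Here is roughly what the download loop looks like now (a sketch, reusing $mech, $usrnm and $passwd from the original script; the login and logout steps are just the ones from that script pulled into subs):

sub login {
    $mech->get("http://www.website.com/index.jsp");
    $mech->form_number(1);
    $mech->set_visible($usrnm, $passwd);
    $mech->submit;
}

sub logout {
    $mech->get("http://www.website.com/index.jsp");
    $mech->follow_link(text_regex => qr/Logout/i);
}

login();
for (my $count = 100; $count <= 500; $count++) {
    # Start a fresh session every 100 downloads so the server-side
    # per-session limit is never hit.
    if ($count > 100 && $count % 100 == 0) {
        logout();
        login();
    }
    $mech->get("http://www.website.com/servlet/Servlet?pdf=abc$count.pdf",
               ":content_file" => "d:/Devesh/abc$count.pdf");
    print "downloading abc$count.pdf<br />";
}
logout();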

Thanks for the suggestions.
