Hi all,

This is my first post, and I am certainly no Perl programmer. I do have programming experience, though, and have inherited some Perl code written by someone else. I have spent days trying to find a solution, so here goes.

This script parses a tab-delimited text file and downloads files from our intranet based on three columns of information. Whenever the script hits a file that is a bit over 60MB, it dies with an "Out of memory during "large" request for 134221824 bytes, total sbrk() is 271218688 bytes at..." error.

We are reading the file with a while loop (which, from what I have read, is the right approach, since it processes the input line by line), and the data segment size and file size limits are set to unlimited. The machine running this has plenty of resources; there is no actual memory shortage, and at least a gigabyte of physical memory is free when this happens. I run these scripts in a Cygwin environment, but I have tested this on Ubuntu 9 with the same results. I am including the script below. Any help is much appreciated. Thanks!

#!/usr/local/bin/perl


##### LOAD THE NEEDED PERL LIBRARY MODULES #####
use FileHandle;
use LWP::UserAgent;
use HTTP::Request;
use IO::File;

##### MAKE SURE STDOUT KEEPS FLUSHED #####
autoflush STDOUT 1;
##### SETUP RESTRICTIONS, AS WELL AS GLOBAL VARIABLES #####
use strict;

my $infile = shift || die "Usage: multidownload.pl file_with_info\n";
open( IN, '<', $infile ) or die "Cannot open $infile: $!\n";



##### SET UP YOUR INFO HERE #####

my $PATH = "filesDownloaded";

my ($url,$request,$response);
my $UA		= new LWP::UserAgent;
my $nodeid;
my $ext;
my $file = "";
my $line;


my $llserver	= "xxxxxxxi";
my $llcookie 	='LLCookie=xxxxxxxxxxxxx';

while (<IN>) {
	$line = $_;
	($nodeid,$ext,$file) = split(/\t/, $line);
	chomp $nodeid;
	chomp $ext;
	chomp $file;

	##### make sure we have a nodeID####
	if( $nodeid =~ /^\d+$/ ) {
		print "Downloading $nodeid, Ext $ext, Name $file...\n";

		$url = "$llserver?func=ll&objId=$nodeid&objAction=download";

		$request = new HTTP::Request GET => $url;
		$request->header( Cookie => "$llcookie" );
		$response = $UA->request($request);

		# Replace spaces/slashes/stars/pipes/? with '_' in the filename
		# Remove leading and trailing spaces first
		$file =~ s/^\s+//;
		$file =~ s/\s+$//;
		$file =~ s/\s+/_/g;
		$file =~ s/\/+/_/g;
		$file =~ s/\\+/_/g;
		$file =~ s/\*+/_/g;
		$file =~ s/\|+/_/g;
		$file =~ s/\?+/_/g;
		$file =~ s/\&+/_/g;
		$file =~ s/\(+/_/g;
		$file =~ s/\)+/_/g;
		$file =~ s/\'+/_/g;
		$file =~ s/\"+/_/g;
		$file =~ s/\;+/_/g;
		$file =~ s/\:+/_/g;
		$file =~ s/\%+/_/g;
		$file =~ s/\$+/_/g;
		$file =~ s/\~+/_/g;
		$file =~ s/\<+/_/g;
		$file =~ s/\>+/_/g;

		#DOWNLOAD THE FILE
		if( $ext =~/null/ ) {
			open(FH, ">> $PATH/$nodeid-$file") or print "=>FAIL DWNLD: $nodeid, $!\n";				
			print FH $response->content;
			close(FH);

			#CHECK FOR 'LOGIN ERROR' OR 'NO LONGER EXISTS'
			my $num = `/usr/bin/egrep "File:  error.html|Authentication Required." $PATH/$nodeid-$file | /usr/bin/wc -l`;
			if ( $num>0 ) { print STDERR "=>FAIL ERROR CHECK: $nodeid\n"; }

		} else {
			open(FH, ">> $PATH/$nodeid-$file.$ext") or print "=>FAIL DWNLD: $nodeid, $!\n";			
			print FH $response->content;
			close(FH);

			#CHECK FOR 'LOGIN ERROR' OR 'NO LONGER EXISTS'
			my $num = `/usr/bin/egrep "File:  error.html|Authentication Required." $PATH/$nodeid-$file.$ext | /usr/bin/wc -l`;
			if ( $num>0 ) { print STDERR "=>FAIL ERROR CHECK: $nodeid\n"; }
		}
	}	
	else { print "NodeID not numeric: $nodeid, Ext: $ext, Name: $file\n"; }
}

#If desired, create a text file with the md5 sums for all files
##`cd $PATH; md5sum * > $PATH/_md5.txt`;

close(IN);

It looks like the magic number for a file to fail is anything over 64MB. That's a nice, special number :)
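For anyone hitting the same wall: $UA->request($request) buffers the entire response body in memory, and $response->content then hands back another full copy, which is why a file around 64MB can trigger an allocation of roughly twice that. LWP can write the body straight to disk instead if you pass a filename as the second argument to request(). A minimal sketch of that approach (the server URL, node ID, cookie, and output filename below are placeholders, not the real values):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request;

my $UA      = LWP::UserAgent->new;
my $url     = 'http://llserver.example/livelink?func=ll&objId=12345&objAction=download';
my $request = HTTP::Request->new( GET => $url );
$request->header( Cookie => 'LLCookie=placeholder' );

# The second argument tells LWP to stream the body to this file in
# chunks as it arrives, so the full download never sits in memory.
my $outfile  = 'filesDownloaded/12345-bigfile.dat';
my $response = $UA->request( $request, $outfile );

print $response->is_success
    ? "Saved $outfile\n"
    : '=>FAIL DWNLD: ' . $response->status_line . "\n";
```

The error-page check would then grep the saved file rather than the in-memory body, exactly as the script already does.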

Hi,
Well, as you said, your system has enough resources, so I don't see why it would be failing. As far as I understand, you are reading the file line by line, which is the recommended way to handle large files. There are other approaches too, depending on how your file gets modified (appended to).

Can you tell us which line it is failing on?

Also, you may want to reduce the number of lines used for the substitutions in your code (the long run of s/// statements), where each character is handled separately — something like: $file =~ s/[\s\/\\\*\|\?\&\(\)\'\"\;\:\%\$\~\<\>]/_/g; . I don't remember if there is a shorthand character class that represents all special characters.
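To flesh that suggestion out: the whole run of substitutions can indeed be collapsed into one character class. The sketch below (a hypothetical sanitize helper, not from the original script) folds runs of adjacent special characters into a single underscore, which differs slightly from the original code, where mixed runs like " (" could leave consecutive underscores:

```perl
use strict;
use warnings;

# Replace every character the script handles individually with '_'.
sub sanitize {
    my ($file) = @_;
    $file =~ s/^\s+//;    # strip leading whitespace
    $file =~ s/\s+$//;    # strip trailing whitespace
    # One class covers whitespace, slashes, and all the punctuation
    # the original replaced one s/// statement at a time.
    $file =~ s{[\s/\\*|?&()'";:%\$~<>]+}{_}g;
    return $file;
}

print sanitize(q{ my file (v2)?.txt }), "\n";   # my_file_v2_.txt
```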

katharnakh.

I don't know whether there is a switch to specify memory allocation for a program when running it in Perl. Maybe someone else who knows can suggest one here.

katharnakh.

What's your ulimit?

ulimit -a

Ask root to increase your ulimits (or do it yourself if you have root access).

chuser fsize=-1 cpu=-1 data=-1 core=-1 stack=-1 rss=-1 nofiles=8192 username
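Coming back to the script itself: the egrep check for error pages can often be replaced by asking the response object whether the request succeeded (though note that some servers return their error pages with a 200 status, in which case grepping the body is still needed). A small sketch, built against hand-made responses so it runs offline:

```perl
use strict;
use warnings;
use HTTP::Response;

# is_success is true only for 2xx status codes, so an authentication
# failure or "no longer exists" error can be caught before a file
# is ever written to disk.
sub download_ok {
    my ($response) = @_;
    return $response->is_success ? 1 : 0;
}

my $ok  = HTTP::Response->new( 200, 'OK' );
my $bad = HTTP::Response->new( 401, 'Authentication Required' );
print download_ok($ok),  "\n";   # 1
print download_ok($bad), "\n";   # 0
```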
