I have two huge txt files. The first column is common to both of them, and the second file contains more records than the first. I want to read the first column of the second file element by element and check it, also element by element, against the first column of the first file to find a match. When a match is found, I want to print only the matched rows from both files to a new file. I wrote the following Perl code, which works but is very slow and memory-expensive. Could anybody help me find a better way, or fix my code, please?

    use strict;
    use warnings;

    open(my $newfile, ">", "C:/result.txt")  or die "Cannot open result.txt: $!";
    open(my $fh, "<", "C:/position.txt")     or die "Cannot open position.txt: $!";
    open(my $file, "<", "C:/platform.txt")   or die "Cannot open platform.txt: $!";

    my @file_data = <$fh>;    # all of position.txt, held in memory
    my @position  = <$file>;  # all of platform.txt, held in memory
    close($fh);
    close($file);

    foreach my $line (@position)
    {
        my @line  = split(/\t/, $line);
        my $start = $line[0];

        foreach my $values (@file_data)
        {
            my @values = split(/\t/, $values);
            my $id     = $values[0];

            if ($start eq $id)
            {
                print $newfile $line[0], "\t", $line[2], "\t", $line[3], "\t",
                               $values[0], "\t", $values[1], "\t", $values[2], "\n";
            }
        }
    }

print "DONE";

Hi salem_1,
Actually, if you search this forum, I am sure you will find that what you intend to do has been done again and again.

You can check this thread, for example: file comparison in Perl

You can modify those solutions to suit your requirement.
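The main problem with your code is the nested loop: for every line of one file you rescan every line of the other, which is O(n×m) comparisons. If you load the smaller file (position.txt) into a hash keyed on its first column, each line of the larger file needs only one hash lookup. Here is a minimal sketch assuming the same file paths, tab-separated columns, and output layout as your code; note it assumes the first column is unique in position.txt (duplicate keys keep only the last row seen):

    use strict;
    use warnings;

    # One pass over the smaller file: build a lookup table keyed on column 1.
    open(my $fh, "<", "C:/position.txt") or die "Cannot open position.txt: $!";
    my %pos;
    while (my $line = <$fh>) {
        chomp $line;
        my @f = split /\t/, $line;
        $pos{$f[0]} = \@f;    # last row wins if the key repeats
    }
    close($fh);

    # Stream the larger file line by line; no need to hold it in memory.
    open(my $file,    "<", "C:/platform.txt") or die "Cannot open platform.txt: $!";
    open(my $newfile, ">", "C:/result.txt")   or die "Cannot open result.txt: $!";
    while (my $line = <$file>) {
        chomp $line;
        my @v = split /\t/, $line;
        if (my $p = $pos{$v[0]}) {    # O(1) lookup instead of an inner loop
            # Same output columns as the original nested-loop version.
            print $newfile join("\t", @v[0, 2, 3], @{$p}[0, 1, 2]), "\n";
        }
    }
    close($file);
    close($newfile);
    print "DONE";

This reads each file exactly once, so the total work is O(n+m), and only the smaller file is kept in memory.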

Hope this helps
