Hi,

I'm fairly new to PHP coding and currently have a project I want to realize with PHP. The idea is that I have a text file of ISBNs (the numbers used to identify book titles) which is read line by line, so that each line of text (i.e. each ISBN) is stored in a variable.

I've managed the task that far.
But now I want to fetch certain URLs that depend on the ISBNs (e.g. www.exampledomain/examplesite.php/isbn1234567890 and www.exampledomain/examplesite.php/isbn7890123456, etc.), filter the content of the resulting pages, and output/save it.

My main problem is telling PHP to do the same thing for every ISBN I have, with the changing variable being the ISBN itself.

The code I have so far is:

$urls = "isbns.txt";
$page = join("", file($urls));
$kw = explode("\n", $page);
for ($i = 0; $i < count($kw); $i++) {
    $url[$i] = "http://www.exampledomain.com/quickSearch?searchString=" . $kw[$i];
    echo $url[$i] . "<p>";
}

This outputs every complete URL, one for every ISBN I have.

How do I do the same thing for the content of the resulting URLs?

Any help is greatly appreciated, because I'm stuck.

Read up on cURL.

OK, but the way I understand it, cURL pretty much does what I already managed to do. I have already extracted content from one resulting web page; I just need to know how to do it multiple times.

If cURL is the right tool for that, which cURL options do I have to set?

You can just fetch each page inside your loop and process it there.

Just search for "curl example" in this forum or on Google; you'll have the settings in no time. I don't have an example to hand right now.
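To make that concrete, here is a minimal sketch of fetching each page inside the loop with cURL. It reuses the example domain and `isbns.txt` filename from the post above; the `buildSearchUrl()` helper and the search-URL pattern are assumptions you'd adapt to the real site.

```php
<?php
// Hypothetical helper: build the search URL for one ISBN.
// trim() strips the trailing newline; urlencode() keeps the query string safe.
function buildSearchUrl($isbn) {
    return "http://www.exampledomain.com/quickSearch?searchString=" . urlencode(trim($isbn));
}

// Read the ISBNs, one per line, dropping newlines and blank lines.
$isbns = file("isbns.txt", FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

// One cURL handle, reused for every request.
$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the page as a string instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects, if the site uses any

foreach ($isbns as $isbn) {
    curl_setopt($ch, CURLOPT_URL, buildSearchUrl($isbn));
    $html = curl_exec($ch);
    if ($html === false) {
        echo "Failed to fetch " . $isbn . ": " . curl_error($ch) . "\n";
        continue;
    }
    // ...filter $html here (e.g. with strpos() or preg_match()),
    // then echo the result or write it to a file...
}
curl_close($ch);
```

`CURLOPT_RETURNTRANSFER` is the key option for your case: without it, `curl_exec()` prints the page directly instead of handing it back, so you couldn't filter it first.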
