
Hi,

I'm fairly new to PHP and currently have a project I want to realize with it. The idea is that I have a text file with ISBNs (the numbers used to identify book titles), which is read line by line so that each line (i.e. each ISBN) is stored in a variable.

I've managed the task that far.
But now I want to fetch certain URLs depending on the ISBNs (e.g. www.exampledomain/examplesite.php/isbn1234567890 and www.exampledomain/examplesite.php/isbn7890123456, etc.), filter the content of the resulting pages, and output/save it.

My main problem is telling PHP to do the same thing for every ISBN I have, but with a changing variable, which is the ISBN itself.

The code I have so far is:

$urls = "isbns.txt";
$page = join("", file($urls));
$kw = explode("\n", $page);
for ($i = 0; $i < count($kw); $i++) {
    $url[$i] = "http://www.exampledomain.com/quickSearch?searchString=$kw[$i]";
    echo $url[$i] . "<p>";
}

This outputs every complete URL, one for each ISBN I have.

How do I do the same thing for the content of the resulting URLs?

Any help is greatly appreciated, 'cause I'm stuck.


OK, but the way I understand it, cURL pretty much does what I already managed to do. I've already extracted content from one resulting web page; I just need to know how to do it multiple times.

If cURL is the right tool for that, which cURL options do I have to set?


You can just get each page within your loop and process it.

Just search for a cURL example in this forum or with Google; you'll have the settings in no time. I don't have an example at hand right now.
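To make that concrete, here's a minimal sketch of what fetching each page inside the loop could look like, assuming the same example domain and `isbns.txt` file from above. The helper name `fetch_page` is just an illustration, not a built-in function:

```php
<?php
// Hypothetical helper: fetch one URL with cURL and return its body,
// or false on failure.
function fetch_page($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);          // don't hang on a slow server
    $body = curl_exec($ch);
    curl_close($ch);
    return $body;
}

// Read one ISBN per line, skipping empty lines and trailing newlines.
$kw = @file("isbns.txt", FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

foreach (($kw ?: array()) as $isbn) {
    $url  = "http://www.exampledomain.com/quickSearch?searchString=" . urlencode(trim($isbn));
    $html = fetch_page($url);
    if ($html !== false) {
        // Filter $html here, or save it, e.g.:
        file_put_contents(trim($isbn) . ".html", $html);
    }
}
```

`CURLOPT_RETURNTRANSFER` is the key option for your case: without it, `curl_exec()` prints the page instead of returning it, so you couldn't filter the content afterwards.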
