
Hello

I want to extract the content of a website found between some specific HTML tags. How can I achieve that with PHP?
For example, consider viewing the google.com source code. I want to extract the data within the <title> </title> tags and store it in a variable (it's "Google" in this case).

Basically, the script navigates to google.com and then extracts the data within the <title> </title> tags. I thought of using fopen and fread, but that would pull in all of the webpage content and isn't efficient.

Any better solution?


You'd need to download most of the content anyway, or read it in blocks, because you can't know where in the HTML the title is. If fopen is allowed, you can read a block and then use a preg_xxx or XML function to get the title. If fopen isn't allowed, try cURL. A working example can be found on Stack Overflow already.
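For example, here is a minimal sketch along those lines, assuming the cURL extension is enabled; the URL and the regular expression are just placeholders for this example:

<?php
// Minimal sketch: fetch the page with cURL, then pull out the <title> text.
// Assumes the cURL extension is enabled; the URL is just an example.
$url = 'http://www.google.com/';

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body as a string
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow any redirects
$html = curl_exec($ch);
curl_close($ch);

$title = '';
if ($html !== false && preg_match('/<title>(.*?)<\/title>/is', $html, $matches)) {
    $title = trim($matches[1]); // "Google" for google.com
}
echo $title;
?>

If the regular expression feels fragile, DOMDocument::loadHTML() plus getElementsByTagName('title') does the same job once you have the HTML string.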

I know exactly where the title tag is for the sites that I need (I mean the line number).
Can you give me a link to an example?
Greetz pritaeas

There is a PHP class called Simple HTML DOM Parser that can also help you achieve this task. For Google, they have a Custom Search API; however, it has a limit of 100 queries per day, and they charge a fee of $5 per 1,000 queries, for up to 10,000 queries per day.
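A rough sketch of what that looks like, assuming you have downloaded simple_html_dom.php into the same directory (file_get_html() and find() are the parser's own functions; the URL is just an example):

<?php
// Rough sketch using Simple HTML DOM Parser.
// Assumes simple_html_dom.php has been downloaded next to this script.
include 'simple_html_dom.php';

$html = file_get_html('http://www.google.com/'); // fetch and parse the page
if ($html) {
    $node  = $html->find('title', 0);            // first <title> element
    $title = $node ? $node->plaintext : '';      // "Google" in this case
    echo $title;
    $html->clear();                              // free the parser's memory
}
?>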
