In PHP, I've tried using simple_html_dom in order to extract URLs from web pages.

And it works a lot of the time, but not all of the time.

For example, it doesn't work on ArsTechnica.com, because that site marks up its links differently.

So... one thing I do know is that Firefox reliably finds every link on a page: that's why, when you load a page in Firefox, all the links are clickable.

And so... I was wondering: is it possible to download the open source browser engine from Firefox, or Chrome, or whatever, pass it some parameters somehow, and have it give me a list of all the URLs on a page?

I can then feed that into PHP by whatever means, whether it's shell_exec() or whatever.

Is this possible? How do I do it?


I can get the links from that site using file_get_contents, DOMDocument, and DOMXPath. If you're looking for more of a browser type behavior, I would recommend looking at a library like http://phantomjs.org/
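Here's a minimal sketch of that approach. For a live page you would replace the inline `$html` string with `file_get_contents($url)`; the inline document just keeps the example self-contained and offline.

```php
<?php
// Minimal sketch: extract every link href from a page with DOMDocument + DOMXPath.
// In real use: $html = file_get_contents('http://arstechnica.com/');
$html = '<html><body><a href="https://example.com/a">First</a>'
      . '<a href="/b">Second</a></body></html>';

$dom = new DOMDocument();
libxml_use_internal_errors(true);   // real-world HTML is rarely valid; suppress parse warnings
$dom->loadHTML($html);
libxml_clear_errors();

$xpath = new DOMXPath($dom);
$links = $xpath->evaluate('//a[@href]');  // every <a> that actually has an href

foreach ($links as $link) {
    echo $link->getAttribute('href'), "\n";
}
```

Note that this only sees links present in the served HTML; links injected by JavaScript won't appear, which is where a headless browser like PhantomJS comes in.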



OK... I've done as you said with DOMDocument and DOMXPath.

Here is my code:

$dom = new DOMDocument();
@$dom->loadHTML(file_get_contents('http://arstechnica.com/'));

// grab all the <a> elements on the page
$xpath = new DOMXPath($dom);
$hrefs = $xpath->evaluate("/html/body//a");

for ($i = 0; $i < $hrefs->length; $i++) {
    $href = $hrefs->item($i);
    $url = $href->getAttribute('href');
    echo $url."\n";
}

The issue is... this only gets the URLs. How do I also get the link text (the clickable text shown for each URL)?



Do you mean the text value of the link?

You can use $text = $href->textContent; to get the text of a link.
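Folding that into the loop, a small self-contained sketch (using an inline HTML string as a stand-in for the fetched page) would look like:

```php
<?php
// Sketch: print each link's href together with its visible text.
// $html is an inline stand-in for file_get_contents() of a real page.
$html = '<html><body><a href="https://example.com">Example site</a></body></html>';

$dom = new DOMDocument();
libxml_use_internal_errors(true);
$dom->loadHTML($html);
libxml_clear_errors();

$xpath = new DOMXPath($dom);
$hrefs = $xpath->evaluate('/html/body//a');

for ($i = 0; $i < $hrefs->length; $i++) {
    $href = $hrefs->item($i);
    $url  = $href->getAttribute('href');
    $text = $href->textContent;   // the text between <a> and </a>
    echo $url, ' => ', $text, "\n";
}
```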
