I am trying to build some sort of crawler. I was using file_get_contents() to fetch pages until I stumbled on one site where it didn't work:
$page = 'http://www.site.com/page.php';
$content = file_get_contents($page);
echo htmlspecialchars($content);
This returned a completely blank page.
After looking it up, it seems the allow_url_fopen directive might be set to Off in the PHP configuration. I read that this can be worked around with cURL, so I tried:
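Before switching to cURL, I figured a quick check like this should tell whether allow_url_fopen is actually the culprit (the var_dump output depends on the local php.ini, so this is just a sanity check):

```php
<?php
// If allow_url_fopen were disabled, file_get_contents() would fail
// for *every* URL, not just this one site.
var_dump(ini_get('allow_url_fopen'));   // "1" when enabled; "" or "0" when off
var_dump(function_exists('curl_init')); // also confirms the cURL extension is loaded
```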
$page = 'http://www.site.com/page.php';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $page);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$contents = curl_exec($ch);
curl_close($ch);
echo $contents;
...but this returned exactly the same thing: a blank page.
I have cURL installed, and the code works for some other sites I tried. I could send you the URL of this site in a PM if you want to try it (I'd rather not post it publicly).
Thanks in advance.