Ladies & Gentlemen!

Oops! Let me try again:
Gentle Ladies & Hard Men (after all, it's the ladies who are gentle compared to the men, and the men who are hard, rough & tough compared to the ladies)!
And no, don't read anything rude into "hard men" the way some foolish men joked on other forums (business/internet marketing) a few months (or was it a year?) back! :lol:
Get used to me addressing you as "hard men", because that's how I'm going to do it frequently. :rofl:

Anyway, this thread is about cURL.
I will try building a unique script based on cURL. But first, let's get rolling and learn!
I have some viral-traffic and viral money-earning ideas, and cURL is what will implement them. Stick around and see how deep the rabbit hole goes and what comes out of it! (Not joking!)


cURL Sample 1:

Why do you reckon the following code sample is not showing any page? I see a completely blank page when I load it on my XAMPP setup.

<?php

//This code was found on: http://www.binarytides.com/php-curl-tutorial-beginners/
//gets the data from a URL
function get_url($url) 
{
    $ch = curl_init();

    if($ch === false)
    {
        die('Failed to create curl object');
    }

    $timeout = 5;
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}

echo get_url('http://www.apple.com/');
?>

cURL Sample 2:

Is it true that the following short code is just as good as the one in my previous post?

<?php 

//This code was found on: http://www.binarytides.com/php-curl-tutorial-beginners/
//2nd Example
//The above GET request to a url can be done in a much simpler way like this:

//Make a HTTP GET request and print it (requires allow_url_fopen to be enabled)
echo file_get_contents('http://www.apple.com/');

?>

cURL Sample 3:

Can you figure out, or at least guess, why the following code is better than the previous two?
What benefits do you see in it over the other two code samples?
No, I'm not testing you; I'm trying to learn from you.

<?php 

//3rd Option
//This code was found on: http://www.binarytides.com/php-curl-tutorial-beginners/
//Calling the curl_setopt function again and again to set the options is a bit tedious. There is a useful function called curl_setopt_array that takes an array of options and sets them all at once. Here is a quick example:
//Make a HTTP GET request and print it (requires allow_url_fopen to be enabled)
echo file_get_contents('http://www.apple.com/');
curl_setopt_array($ch, array(
    CURLOPT_URL => $url ,
    CURLOPT_RETURNTRANSFER => 1,
    CURLOPT_CONNECTTIMEOUT => $timeout ,
));

?>

And, what is meant by the following:
"Calling the curl_setopt function again and again to set the options is a bit tedious. There is a useful function called curl_setopt_array that takes an array of options and sets them all at once.".

Can you elaborate on it? Because if I understand it, I'll understand the benefits of this code over the previous two.

Thanks!

For the first one, try replacing apple.com with something else. In particular, create a very simple Hello World webpage with a single file, no CSS, no Javascript, no redirect code, no authentication or anti-spam or anti-bot code, etc., and see if it works perfectly. My guess is it will. Then try google.com. You'll likely get sort of a skeleton page or a redirect page. There could be a variety of reasons why.
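
One way to take the guesswork out of a blank page is to ask cURL itself what happened. Here's a minimal debugging sketch along those lines (the URL is just an example, and CURLINFO_REDIRECT_URL needs a reasonably recent PHP/cURL):

<?php
// Fetch a page and, instead of echoing it blindly, report what cURL saw.
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://www.apple.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
$data = curl_exec($ch);
if ($data === false) {
    echo 'curl error: ' . curl_error($ch); // e.g. timeout or DNS failure
} else {
    echo 'HTTP code: ' . curl_getinfo($ch, CURLINFO_HTTP_CODE) . '<br>';
    echo 'Redirect URL: ' . curl_getinfo($ch, CURLINFO_REDIRECT_URL) . '<br>';
    echo 'Body length: ' . strlen($data); // 0 often means a redirect that was never followed
}
curl_close($ch);
?>

A 301/302 code with an empty body is a strong hint that adding CURLOPT_FOLLOWLOCATION would change what you see.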

The last one doesn't make sense. You have an uninitialized $ch variable, so right off the bat, how could the curl part on line 8 work? Any display would be from line 7. Fetch the page code using curl or fetch it with file_get_contents, but not both.

AssertNull,

On my cURL Sample 3 code above, I have tried making changes based on your feedback, ending up with the following, but I see a completely blank white page. I guess I have to create an array that holds the URL that cURL is supposed to load. Right?
But, how do you do that ?

Sample 3:

    function get_url($url) 
    {
        $ch = curl_init();

        if($ch === false)
        {
            die('Failed to create curl object');
        }

        $timeout = 5;
        curl_setopt_array($ch, array(
            CURLOPT_URL => $url ,
            CURLOPT_RETURNTRANSFER => 1,
            CURLOPT_CONNECTTIMEOUT => $timeout ,
        ));
    }

AssertNull,

Have changed cURL Sample 2 to:

<?php 

//This code was found on: http://www.binarytides.com/php-curl-tutorial-beginners/
//2nd Example
//The above GET request to a url can be done in a much simpler way like this:

//Make a HTTP GET request and print it (requires allow_url_fopen to be enabled)
echo file_get_contents('http://www.apple.com/');

?>

Guys,

On this cURL tutorial:
http://www.binarytides.com/php-curl-tutorial-beginners/

Under the section "Make GET requests - fetch a url", you will see 3 blocks of code.
I'm referring to the 3rd one that looks like this:

curl_setopt_array($ch, array(
    CURLOPT_URL => $url ,
    CURLOPT_RETURNTRANSFER => 1,
    CURLOPT_CONNECTTIMEOUT => $timeout ,
));

How come no URL is mentioned in that code?
I can now see why my code Sample 3 is not working. See my 4th post above.
What do you think that code should look like? Care to show an example of how you'd do things?

Get some working code without the array, then simply change the places where you set the options one by one to setting it a single time as an array...

<?php
function get_url($url) 
{
    $ch = curl_init();
    if($ch === false)
    {
        die('Failed to create curl object');
    }
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
// get_url2 is the same as get_url except curl options are contained in an
// array and set using curl_setopt_array
function get_url2($url) 
{
    $ch = curl_init();
    if($ch === false)
    {
        die('Failed to create curl object');
    }
    $curlOptions = array(CURLOPT_URL => $url, CURLOPT_RETURNTRANSFER => 1,
                         CURLOPT_CONNECTTIMEOUT => 5);
    curl_setopt_array($ch, $curlOptions);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}

// Call get_url or get_url2, but not both.  Comment and uncomment as needed to experiment.
// Change $myUrl variable for different websites, see what comes back as a blank page. Note that
// there is no guarantee that you'll always get the same blank and non-blank pages as I do or
// the same results every time.  Lots of factors.

$myUrl = 'http://www.google.com'; // non-blank, but incomplete page (no Google logo)
//$myUrl = 'http://www.daniweb.com'; // blank page
//$myUrl = 'http://www.apple.com'; // blank page

// echo get_url($myUrl);
echo get_url2($myUrl);
?>

but I see a completely blank white page

Debugging technique. First hit "View Page Source". Make sure there's truly nothing there. In my case with apple.com, there is not, but potentially something was returned that isn't visible on the screen. Then you can stick in a bunch of lines that display stuff so that you don't have to stare at that blank screen.

function get_url($url) 
{
    echo "Got this far 1";
    $ch = curl_init();
    if($ch === false)
    {
        die('Failed to create curl object');
    }
    echo "Got this far 2";
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
    echo "Got this far 3";
    $data = curl_exec($ch);
    echo "Got this far 4";
    curl_close($ch);
    echo "Got this far 5";
    return $data;
}

Now this doesn't tell me much, but it's a start with that blank page. If I get all those printouts, it tells me the function was called and completed and this wasn't a cached page or something like that. In addition, if it displays immediately rather than after a long pause, that tells me something too. WHAT it might tell me could be a variety of things, but it gets me farther along in my debugging process. Now if I comment out all of those debugging lines AND I comment out this line...

curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

I now see not a blank page, but instead a "1" on the page. I look at the spec for curl_exec...

http://php.net/manual/en/function.curl-exec.php

And I see that I was returned a true or false. 1 is true, so I know that my call to curl_exec "succeeded". What is a success? More research to do, but the point is it's now more directed research as opposed to a blank page.

If you really want to play around with curl, you should create your OWN pages and have curl fetch THEM. That takes the guesswork out of things since you know exactly what it is SUPPOSED to show.
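
For example, a minimal sketch (assuming XAMPP serves files from htdocs; the file names hello.php and fetch-hello.php are made up for the example):

<?php
// hello.php -- save this in your web root (e.g. htdocs/hello.php).
// No CSS, no Javascript, no redirects: you know exactly what curl should return.
echo "<html><body><h1>Hello from my own test page</h1></body></html>";
?>

<?php
// fetch-hello.php -- fetch the test page through curl, using the get_url()
// function defined earlier in this thread.
echo get_url('http://localhost/hello.php');
?>

If that round trip works and google.com or apple.com still comes back blank or broken, you know the problem is on their end (redirects, anti-bot checks, etc.), not in your curl code.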

Fellow Programmers,

I did test both code blocks separately, and they both brought up apple.com.

Sample 1:

//gets the data from a URL
function get_url($url) 
{
    $ch = curl_init();

    if($ch === false)
    {
        die('Failed to create curl object');
    }

    $timeout = 5;
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}

echo get_url('http://www.apple.com/');

Sample 2:

// Make a HTTP GET request and print it (requires allow_url_fopen to be enabled)
echo file_get_contents('http://www.apple.com/');

I was just curious to learn why anyone would bother using the long version if the short version can do the same job.
I got my answer from someone that the short version is risky. This is what I learnt:
file_get_contents() using a URL is not guaranteed to work in all situations, as it depends on a configuration setting to allow it to use HTTP (which is sometimes disabled for security reasons).
cURL, on the other hand, should normally work, as long as the PHP cURL extension is installed.
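
A quick way to check both of these on your own server (a small sketch; nothing here is specific to the code above):

<?php
// Is the cURL extension available?
echo extension_loaded('curl') ? 'cURL extension: installed' : 'cURL extension: missing', '<br>';
// Is file_get_contents() (and friends) allowed to open URLs?
echo ini_get('allow_url_fopen') ? 'allow_url_fopen: on' : 'allow_url_fopen: off';
?>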

I've also learnt that writing the following for each and every URL to fetch can be tedious:

curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);

And so, it's best to use curl_setopt_array, which takes an array of options and sets them all at once, like AssertNull's code sample above.

Thanks for everyone's inputs in this forum and others. :)
I hope I remember them all!!! Don't blame me if I don't! Lol!

AssertNull,

I experimented as you suggested. Good thing you suggested it!
I'm mentioning my findings below so future newbies can benefit from them.

And yes, the following code shows a blank page and checking the source shows the same.

<?php
function get_url($url) 
{
    $ch = curl_init();
    if($ch === false)
    {
        die('Failed to create curl object');
    }
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}

But maybe that's because the $url variable has no value? Maybe the tutorial author forgot that?
http://www.binarytides.com/php-curl-tutorial-beginners/

Saying all this, the following code does load a page (in my example, Google), but it misbehaved in a way I describe below the code:

<?php
//
// get_url2 is the same as get_url except curl options are contained in an
// array and set using curl_setopt_array
function get_url2($url) 
{
    $ch = curl_init();
    if($ch === false)
    {
        die('Failed to create curl object');
    }
    $curlOptions = array(CURLOPT_URL => $url, CURLOPT_RETURNTRANSFER => 1,
                         CURLOPT_CONNECTTIMEOUT => 5);
    curl_setopt_array($ch, $curlOptions);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
?>

The above code loaded Google, but the Google homepage was showing no logo, as you experienced, and some of the links showed as Google links while others pointed to my localhost, which puzzled me.
Checking the source (as you suggested) revealed that the links pointing to my localhost were actually relative links, while the ones showing as Google links were absolute links. So, there you go.
Now, let's try to fix this issue so that any relative links get automatically converted to absolute links before cURL shows the page content (along with the links) on our screens. Shall we?
How would you fix this? With preg_match? With anything else? Any code sample from your end would be most appreciated!
First, we have to tell the script to capture the domain name so it can be prepended to relative links, converting them all to absolute links. Correct?
Do you know the PHP code that converts relative links to absolute ones?
I say again, any code samples from anybody would be most appreciated by all newbies!

Thanks in advance for your help! I look forward to some code samples from fellow members! I myself am totally clueless about how to begin on this!

Take care!

Just a note!

I was just curious to learn why anyone would bother using the long version if the short version can do the same job. file_get_contents() using a URL is not guaranteed to work in all situations, as it depends on a configuration setting to allow it to use HTTP (which is sometimes disabled for security reasons) ...

It happens because allow_url_fopen is set to false. In case cURL is not available, you can also use sockets, i.e. fsockopen() & co.

Also, file_get_contents() allows more complex requests; in fact, it can make POST requests by using the resource context parameter. The same can be done with file(), readfile(), fopen() and, in general, all functions that support streams. An example:

<?php

$url = "https://www.apple.com/";

// Resource context
$rc["http"]["method"] = "GET";
$rc["http"]["header"] = "Accept-language: en\r\n";
$rc["http"]["follow_location"] = 1; // 1 = yes, 0 = no
$rc["http"]["timeout"] = 10; // seconds

$context = stream_context_create($rc);

$fp = fopen($url, "r", FALSE, $context);

if($fp === FALSE)
    die("Failed to open $url");

while( ! feof($fp))
    print fread($fp, 4096);

fclose($fp);
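
And to illustrate the POST case mentioned above, here is a minimal sketch along the same lines (the URL and form fields are made up for the example):

<?php
$url = "http://www.example.com/handler.php"; // placeholder endpoint

$postData = http_build_query(array("name" => "value", "page" => 1));

// Resource context for a POST request
$rc["http"]["method"]  = "POST";
$rc["http"]["header"]  = "Content-Type: application/x-www-form-urlencoded\r\n";
$rc["http"]["content"] = $postData;
$rc["http"]["timeout"] = 10; // seconds

$context = stream_context_create($rc);

// file_get_contents() sends the POST and returns the response body (or FALSE on failure)
echo file_get_contents($url, FALSE, $context);
?>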

I can't comment on file_get_contents, how robust it is, and what its capabilities are since I don't know. Thus I cannot compare it to curl. I'll just say that curl is very robust, with lots of options, and you've only scratched the surface. A quick look at some of the options will tell you that. But two big potential reasons to use curl are its callback functions and the fact that many languages implement curl or have wrappers for curl (command line/shell script, Perl, Python, C++, PHP, and others), so it's quite portable and modular, which is really nice because you generally cannot assume everything will be written in PHP.

The callback functions are quite important. You'll likely be doing far more than immediately echoing single page requests all at once, and you won't want to pause your PHP script to wait for an entire massive page to come back to you. You have that timeout option of 5 seconds, which is a long time for a PHP script to simply twiddle its thumbs. Better to send the request, keep processing the PHP script, then process the data as it comes.
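
To give a flavour of what that looks like, here is a minimal sketch using CURLOPT_WRITEFUNCTION (the URL is a placeholder, and counting bytes stands in for whatever processing you would really do); the callback receives the body in chunks and must return the number of bytes it handled:

<?php
$ch = curl_init('http://www.example.com/'); // placeholder URL
$received = 0;
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
curl_setopt($ch, CURLOPT_WRITEFUNCTION, function($ch, $chunk) use (&$received) {
    // Each chunk arrives here as soon as curl has it; process or store it
    // instead of buffering the whole page in one giant string.
    $received += strlen($chunk);
    return strlen($chunk); // returning fewer bytes than received aborts the transfer
});
curl_exec($ch);
curl_close($ch);
echo "Received $received bytes";
?>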

Thus my advice would be to learn curl. My guess is that AFTER learning curl, it will be sufficient for all your needs in this realm. It has for me. Hence the reason I don't know anything about file_get_contents. Never needed to learn it. By the time I used curl at all in PHP, I was already using it in C++. When the need arose to do these things in PHP, I went with what I already knew (sort of): curl. YMMV.

Checking the source (as you suggested) revealed that the links pointing to my localhost were actually relative links, while the ones showing as Google links were absolute links. So, there you go.

You're getting somewhere. To understand what is going on better, set up two tabs side by side in Chrome. One is the regular google.com page in your browser (no php, no localhost, no curl). It should load just fine. Next to it have a tab that was fetched through your php script using curl. In each tab, hover over the Google link. Right-click "Inspect". You should see the html.

<img alt="Google" height="92" src="/images/branding/googlelogo/1x/googlelogo_white_background_color_272x92dp.png" style="padding:28px 0 14px" width="272" id="hplogo" onload="window.lol&amp;&amp;lol()">

In particular, focus on src...

src="/images/branding/googlelogo/1x/googlelogo_white_background_color_272x92dp.png"

Now HOVER over that... In the broken one, you'll get something like...

src="http://localhost/images/branding/googlelogo/1x/googlelogo_white_background_color_272x92dp.png"

In the google tab that loaded correctly, you'll get https://google.com rather than localhost, or something quite similar. You can edit that source right in the browser to break and unbreak that logo. Thus if you can find it in the string that curl_exec returns and do that replacement, that MAY be all that is needed (key word MAY. Experiment). Get familiar with the Inspect tool, or something similar.

before cURL shows the page content

Not to be pedantic, but curl doesn't show the page content. curl_exec executes and returns a string. You are echoing that string, and the user's BROWSER parses that string in order to figure out what to display to the screen. curl and php work on the server-side to send html to the client. The browser parses that html on the client side. I know what you meant, but it's important to understand this. You need php code to change the string that curl_exec returns. THEN echo that revised string.

How would you fix this? With preg_match? With anything else? Any code sample from your end would be most appreciated!

preg_match or something similar COULD work, potentially, depending on the webpage. You could find src="/images and change it to src="https://www.google.com/images. But since that would only be the right move if the string is replaced within the img tag or something similar, there COULD be a problem if you replace it elsewhere. Unlikely, but possible. You might want to actually parse the HTML. More work, but more exact and accurate results.

$data = curl_exec($ch);
curl_close($ch);
$revisedString = replaceLocalHostWithGoogle($data); // write this function
echo $revisedString;
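
A rough sketch of what such a function could look like, sticking with the string-replacement idea above (the function name is made up, and a real HTML parser such as DOMDocument would be the more robust route):

<?php
// Prepend a base URL to href/src attributes that start with "/" (site-relative links).
// Quick and dirty: it rewrites attributes only, and skips protocol-relative "//cdn..." links.
function makeLinksAbsolute($html, $baseUrl)
{
    $baseUrl = rtrim($baseUrl, '/');
    return preg_replace(
        '#(href|src)\s*=\s*(["\'])/(?!/)#i',
        '${1}=${2}' . $baseUrl . '/',
        $html
    );
}

// $data = curl_exec($ch);   // as above
// echo makeLinksAbsolute($data, 'https://www.google.com');
?>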

Some Mod!

Whatever script you've got placed in your forum has really been pissing me off for the past few nights! Look at the image attached! Every 5 seconds I get this "unresponsive page" error whenever I load one of my threads to see others' replies. Can you imagine me having to click this damn "WAIT" button every 5 seconds!?

Think of this as a "teachable moment". You're learning curl, you're learning to debug web pages, you're learning about all the things that can happen to make a webpage not load correctly and quickly. In the process, you've run into a bug that needs to be debugged. It's not YOUR bug, but again, make lemonade.

Experiment around using different browsers. See if you notice any patterns. Does this only happen on certain browsers? At certain times? Under certain conditions? Does it happen when you are logged out? Logged in? Does it load sometimes and sometimes not? Submit a detailed "bug report" to Dani. This will help you as a programmer even though you probably can't do the debugging. She may ask you follow-up questions to pin it down. Perhaps ask HER to give YOU a report back about what she finds. Again, you're learning. She's experienced. Flip this into a positive.

It's also a wonderful opportunity to experiment with curl with different timeouts, handlers, etc. All sorts of things could have gone wrong. Try going in depth and fetching daniweb.com in curl. There's a possibility that it's not a bad Daniweb script, but it could be a bad 3rd-party advertiser script. Something she needs to know.

If you keep up with this programming thing, you'll be submitting and receiving lots of bug reports. Each time you do it, you improve.

commented: Great post +15
commented: Spot on, as usual. PS - don't leave!!! +15

AssertNull,

Thank you for your previous post. It was handy!
This is all I know about cURL ...
At first, I thought it was a new extension of PHP. Then I realised it's a language of its own and PHP just makes use of it.
SQL is a language of its own, and PHP just makes use of it. But have you noticed (I'm sure you have) that in order to do something (like INSERT into a db), the SQL command is not the same as the PHP command? And so I ask, is it the same with cURL and PHP too? I mean, if you learn all the cURL code, like the samples mentioned in this thread, is that the PHP format of cURL or the actual cURL language itself? I ask because the PHP code to do some task on a MySQL db (like UPDATE) is not the same as the native SQL code that UPDATEs the db.

So, you are a C++ programmer? That is news indeed!
But you don't know about file_get_contents? That is strange! I thought you were a PHP pro who knows everything about PHP top to bottom! You certainly seem like one!

And yes, that's how I tested things. I wrote/copied the cURL code into Notepad++ and then viewed the file in the browser, where Chrome opened the page (e.g. google.com) via cURL. That was in one tab. Hovering my mouse over some links, I found that some pointed to my localhost and some pointed to Google pages.
I opened another tab in Chrome and visited the page directly (in our case, google.com). I then checked the view source out of curiosity on that tab and found that the links pointing to my localhost were actually relative links while the others were absolute. That's how I figured out the problem. Then I googled for PHP code samples to convert relative links to absolute links. I found a few code samples but they all had little flaws. I thought a simple preg_replace would be hit and miss, and so stuck with the regex preg_replace last night.
Tonight, however, I'm experiencing a new problem.
The following is the original code, from last night, that came from the tutorial mentioned in my first 4 posts at the top:

<?php

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://google.com");
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);
curl_setopt($ch, CURLOPT_HEADER, 5);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$result = curl_exec($ch);
curl_close($ch);

?>

Then I added the following on lines 10 & 11:

$result = preg_replace("#(<\s*a\s+[^>]*href\s*=\s*[\"'])(?!http)([^\"'>]+)([\"'>]+)#",'$1http://$url/$2$3', $result);
echo $result

And tonight, I see problems. To experience them yourself, run the above code on your XAMPP/WAMP.
Notice the 4 links below the Google buttons. Their URLs are shown like so:

http://google.com//intl/en/ads/
http://google.com//services/
http://google.com//intl/en/about.html

Can you spot the extra "/"? I believe the regex didn't do its job properly.
Anyway, tonight, I added the following on line 2:

$url = "http://google.com";

And replaced 'google.com':

$result = preg_replace("#(<\s*a\s+[^>]*href\s*=\s*[\"'])(?!http)([^\"'>]+)([\"'>]+)#",'$1http://google.com/$2$3', $result);

to $url:

$result = preg_replace("#(<\s*a\s+[^>]*href\s*=\s*[\"'])(?!http)([^\"'>]+)([\"'>]+)#",'$1http://$url/$2$3', $result);

And now face a new problem.
The links now show-up like this:

http://%24url//intl/en/ads/
http://%24url//services/

Where the hell does the "%24url" come from ?

I even changed the single quotes ('$url') to double quotes ("$url") but same result!

AssertNull,

The final code looked like the following, and had the problem mentioned in my previous post:

<?php
$url = "http://google.com";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "$url");
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);
curl_setopt($ch, CURLOPT_HEADER, 5);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$result = curl_exec($ch);
curl_close($ch);
$result = preg_replace("#(<\s*a\s+[^>]*href\s*=\s*[\"'])(?!http)([^\"'>]+)([\"'>]+)#",'$1http://$url/$2$3', $result);
echo $result
?>

Nope, definitely not a PHP expert or even proficient. I figure out what I want to do, look at the API, find a function that might work, and go from there. But very often the concepts are similar across languages, so I can occasionally say something semi-smart even in a language I hardly know.

I am the WRONG guy to direct preg_replace questions to. Once again, concepts. I understand regular expressions and I'm actually pretty good at them sometimes and have written parsers and lexers for RE subsets (not the whole thing), but for some reason I've never been able to crack the syntax. It's weird. Most people have the reverse problem.

But what DOES stand out to me is the "%24url". Whenever you see a percentage character followed by two hexadecimal digits in what should be a link, crack open your ASCII chart and see if a character has been encoded. 0x24 in ASCII is the dollar sign. I had a hunch that "%24url" was actually "$url" with the $ encoded. I look at your RE and I see "$url" in there, so it's a good guess that "$url" is being treated as a string LITERAL rather than a string VARIABLE with a value.
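
You can confirm the encoding half of that guess in one line (a trivial sketch):

<?php
echo urlencode('$url'); // prints %24url -- 0x24 is the dollar sign
?>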

It took me a while, but I believe your culprit is here...

'$1http://$url/$2$3'

Try replacing with double quotes...

"$1http://$url/$2$3"

It took me a while, but I believe your culprit is here...

That proves that I'm not proficient in PHP. Someone who was would have spotted the double vs. single quotes right away given the string literal vs. variable "$url". I missed it. I suppose I could make excuses, but I won't. My ego is wounded. :)

AssertNull,

Look above this post and you'll see 3 of your own posts. Look above your 3 posts and you'll see that I already tried it with double quotes. My previous post says this:
"I even changed the single quotes ('$url') to double quotes ("$url") but same result!"

So, Dani is an expert and it's a "she", is it? This DaniWeb belongs to her? My, oh my! A woman programmer, hey? Something to feel fond about! I've never really come across a female programmer! ;)
Yeah, I might as well suggest some forum features to her, then. Viral features. Lol!
Dani sounds Aussie. Is this an Australian forum? I thought maybe USA or even UK. Frankly, I don't care where it is as long as I get answers. ;)

Run the code below and get back to me as far as whether there is no difference between single and double quotes.

<?php
    function test($aUrl)
    {
        $result = '<a href="/intl/en/ads/?fg=1">Advertising</a>';
        $url = "$aUrl"; // double quotes
        $result1 = preg_replace("#(<\s*a\s+[^>]*href\s*=\s*[\"'])(?!http)([^\"'>]+)([\"'>]+)#",'$1$url/$2$3', $result); // single quotes
        $result2 = preg_replace("#(<\s*a\s+[^>]*href\s*=\s*[\"'])(?!http)([^\"'>]+)([\"'>]+)#","$1$url/$2$3", $result); // double quotes
        $url = '$aUrl'; // single quotes
        $result3 = preg_replace("#(<\s*a\s+[^>]*href\s*=\s*[\"'])(?!http)([^\"'>]+)([\"'>]+)#",'$1$url/$2$3', $result); // single quotes
        $result4 = preg_replace("#(<\s*a\s+[^>]*href\s*=\s*[\"'])(?!http)([^\"'>]+)([\"'>]+)#","$1$url/$2$3", $result); // double quotes
        $url = $aUrl; // no quotes
        $result5 = preg_replace("#(<\s*a\s+[^>]*href\s*=\s*[\"'])(?!http)([^\"'>]+)([\"'>]+)#",'$1$url/$2$3', $result); // single quotes
        $result6 = preg_replace("#(<\s*a\s+[^>]*href\s*=\s*[\"'])(?!http)([^\"'>]+)([\"'>]+)#","$1$url/$2$3", $result); // double quotes
        echo "<html><body>" . $result1 . " " . $result2 . " " . $result3 . " " . $result4 . " " . $result5 . " " . $result6 . " " . "</html></body>";
    }
    test('https://www.google.com');
?>

Is it purely a quotes issue you're dealing with? I don't know. I don't have time to mess around with the Regular Expressions except as above. But certainly the type of quotes you use matters.

Dani sounding Aussie. is this an australian forum?

US-based, with a global membership.

And, for the record, there are plenty of coders who are not male. Your excitement levels suggest you are either 15 or live in a cave... ;-)

Mod,

I'd like to bring up HappyGeek's comment about me living in a cave. Am I complaining? No! More like enjoying it. Lol! He is being humorous. Look at the way he made his approach. Not insulting; in fact, humorous. We expect others to make their criticisms mildly like that. Good for business. I don't have any problems with such comments as long as no swearing is involved.
As for female programmers. Nah! I haven't really come across any.
And so, I'd like to see how good they are, out of curiosity.
Any code inputs from females are most welcome. ;)
Is Danielle a programmer? I thought this was Daniel's Web, originally.

AssertNull,

Sorry to keep you waiting, mate. I just read your post now.
This is the result ...
I see 6 "Advertising" links. And they open to:

http://localhost/test/$url//intl/en/ads/?fg=1
https://www.google.com//intl/en/ads/?fg=1
http://localhost/test/$url//intl/en/ads/?fg=1
http://localhost/test/$aUrl//intl/en/ads/?fg=1
http://localhost/test/$url//intl/en/ads/?fg=1
https://www.google.com//intl/en/ads/?fg=1

What is your point?
I know the difference between double and single quotes.
Single quotes spit the string out literally. Double quotes interpolate the value ($variable, etc.) before spitting out the processed result.

commented: You seem to be from a country that has a pretty low opinion of women but hate swearing. Seriously? Or is that just you? +0

Diafol,

Get over it, dude! I'm praising womenfolk and expecting to see some code samples, and you're telling me I come from a country that looks down on women? Just what on earth are you on about? How do you know I'm not from the same country as you, pal? Because where you come from they swear regularly, and the fact that I don't like swearing automatically makes me an alien? Just where do you get these ideas from? You read too much into things, mate. I suggest you stop drinking alcohol altogether. It's clouding your judgment.
Actually, don't bother replying, as I can clearly see you're just another troll who hijacks threads into off-topic discussions! Leave it alone! Not only will you get banned, you'll get me banned too if you keep up your nonsensical assumptions and conclusions.
I know womenfolk are more academically expert than blokes (in the English language subject especially; I had female teachers for Business Studies and Social Environment too. However, I haven't come across any who teach high-level maths or programming). Therefore, I was curious to learn whether they'd beat you guys in programming too.
I have a feeling they would! Good for them! Lol
FYI: Females are better at some subjects than males, and vice versa too.
Now, end of off-topic ramblings. OK?

AssertNull,

Was my result today (which was a reply to your previous post) what you expected last night?

Others,

How have you been experimenting with cURL? How have you made use of it? What did you use it for? Just plain web scraping to gather data, or building web automation tools too?
Usually, what exactly do you scrape? I might as well try it myself, since I'm learning cURL and am fond of trying new experiments with it.

Cheers!

I just banned UI for repeated, unrepentant, irrelevant trolling of valued DaniWeb members.
Maybe that will cost me my Moderator badge, but someone had to do something.
JC

Update: happygeek disagrees. He's an admin and has overridden my ban (as is his right). It's out of my hands now <sigh of relief>.

Here's a contribution. An update.
This code sort of does the job:

<?php
$url = "http://www.ebay.com";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "$url");
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);
curl_setopt($ch, CURLOPT_HEADER, 5);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$result = curl_exec($ch);
curl_close($ch);
$phrase  = "$result";
$pattern = array("localhost","https://", "http://", "www.");
$replace   = array("http://mymydomain.com/tracker.php?", "http://mymydomain.com/tracker.php?", "http://mymydomain.com/tracker.php?", "http://mymydomain.com/tracker.php?");
$newphrase = str_replace($pattern, $replace, $phrase);
echo $newphrase;
?>

Other forums have also become aware that I create duplicate threads for many posts. They don't mind. I have not been banned for that, and the ones with members (like AssertNull, Diafol, etc. here) who do mind a little don't expect me to apologise for not revealing to them that I have other threads open elsewhere, even though they don't like me doing this.
They still contribute and help as much as they can.
And I, too, contribute as much as I can. I will release code that will be a life saver to the world. (I have sort of finished it now. It is a later version of the code you see in my previous post. I'm not releasing it now as it's not completely foolproof.) You lot should have taken part in it, like the disciples of Jesus took part in Jesus' mission. You would have been a party that took part in it, had I not been temporarily banned. You missed out on becoming history. And yes, I'm on a mission to rid the world of poverty, and PHP and programming are just my tools. And no, I'm not dreaming. In fact, some programmers gave me the thumbs-up, even though others did the opposite. One even told me to keep quiet about it and not reveal my ideas. I say, the man has some sense to spot a gold nugget.
Don't send any criticism this way without knowing what's coming up. You will regret it later. WATCH and WAIT. PATIENTLY. The mod who temporarily banned me will regret it.
I always brag and boast a little to see everyone's reaction. To get everyone telling me my plan is nothing but a dream. I get them to laugh a little so I can later wipe the smiles off their faces and have the last laugh, when I get criticised and challenged. I then prove them wrong.
One programmer, on another forum, bet money that it's technically impossible to do what I want to do, even though I told him I had already managed it. Now he's not responding, as he knows he'll lose the bet and his money.
Why am I mentioning all this? So I can attract more betting against me. I want to prove every experienced programmer wrong and tell everyone: I told you so!
I, the newbie, the amateur, the upstart, am the winner. Everyone else (the experienced pros) are the losers.
Why am I saying all this? It's just one way of venting about why I got temporarily banned. The mod will regret it. One day. He and the grudge bearers here had the first laugh. Finally, it will be my turn. My turn will come when you see the top search engines and social networks in the world suddenly choke and start copying my ideas to save themselves from going bankrupt. But they will all be branded as copycats. Yes, they will follow my lead. The scripts I am working on will make money for their members. How many social network or search engine users do you know who earn money by using the search engine/social network? None! It is the search engines and social networks that make money from their users' activities. This is changing very soon. The homeless will find homes and the bankrupt will clear their debts asap. The world is about to change. Many will (probably) earn a living out of my ideas. Thanks to me and my wild ideas. You guys will find it hard to resist engaging in the activities that earn you money from my wild, wild, wild ideas.

Good night!

commented: Troller keeps trolling... -3

In case you still haven't got it yet: I'm claiming that Facebook, Twitter, YouTube, etc. won't be the top SNs anymore, and neither will Google, Yahoo or MSN be the top SEs. Unless, of course, they copy my ideas and follow my lead.
Now, go ahead and laugh. I want you to have the first laugh. I will have the last. I like challenges. They keep me on my toes. On the edge of the cliff.

Good night!

commented: Hahaha, ha ha, a-hahahahaha, ROFL. Bonk +0

HappyGeek,

You humour me! Keep it up! Lol! I'll join you in your: ROFL! Bonk! ;)

Just checked your definition of ROFL:
https://www.google.com/search?q=define%3A+rofl&oq=define%3A+rofl&aqs=chrome..69i57j69i58.3100j0j7&sourceid=chrome&ie=UTF-8

And, Bonk:
https://www.google.com/search?q=define%3A+rofl&oq=define%3A+rofl&aqs=chrome..69i57j69i58.3100j0j7&sourceid=chrome&ie=UTF-8#q=define:+bonk

Do check what Google is showing. Lol!
I thought you meant "bonkers"!

Anyway, have a nice weekend!

The bonk in question was the sound of me hitting my head following so much floor rolling in laughter... :-)
