Hello friend,
I have a text file named url.txt containing many URLs, one URL per line.

Now I want to get the content of each and every page that those URLs point to.

Here is my code:

using System;
using System.IO;
using System.Net;
using System.Text;

class WebFetch
{
    static void Main(string[] args)
    {
        // used to build the entire output
        StringBuilder sb = new StringBuilder();

        // buffer used on each read operation
        byte[] buf = new byte[8192];
        string line;

        // read the URL file line by line
        StreamReader file = new StreamReader("c:/wamp/www/isbn/url.txt");
        while ((line = file.ReadLine()) != null)
        {
            // prepare the web page we will be asking for
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(line);

            // execute the request
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();

            // we will read data via the response stream
            Stream resStream = response.GetResponseStream();

            int count;
            do
            {
                // fill the buffer with data
                count = resStream.Read(buf, 0, buf.Length);

                // make sure we read some data
                if (count != 0)
                {
                    // translate from bytes to ASCII text
                    // and continue building the string
                    sb.Append(Encoding.ASCII.GetString(buf, 0, count));
                }
            } while (count > 0); // any more data to read?

            // print out the page source
            Console.WriteLine(sb.ToString());
        }

        file.Close();
    }
}

It works and shows the content of the pages in the console, but the records are repeated:
first the 1st page, then the 1st and 2nd, then the 1st, 2nd, and 3rd.

But I want the records to be unique.
For example, 5 URLs should give 5 records;
this code gives me 15.

Please help me: what is the problem in this code?

Here is the content of url.txt:

https://isbndb.com/api/books.xml?access_key=RPGYD5PC&index1=isbn&value1=3890071341
https://isbndb.com/api/books.xml?access_key=RPGYD5PC&index1=isbn&value1=8831754750
https://isbndb.com/api/books.xml?access_key=RPGYD5PC&index1=isbn&value1=0941419940
https://isbndb.com/api/books.xml?access_key=RPGYD5PC&index1=isbn&value1=0941419711
https://isbndb.com/api/books.xml?access_key=RPGYD5PC&index1=isbn&value1=3921029570

All 3 Replies

sandipan.rcciit,
Clear the contents of the StringBuilder object after printing it to the console window:

sb.Length = 0;
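To illustrate why this works, here is a minimal, self-contained sketch (the `pages` array is a hypothetical stand-in for content downloaded from each URL): resetting `sb.Length` to 0 after each print means every iteration starts from an empty builder, giving one record per page instead of an accumulating repeat.

```csharp
using System;
using System.Collections.Generic;
using System.Text;

class ClearDemo
{
    // builds one output record per page, clearing the builder between pages
    public static List<string> BuildRecords(string[] pages)
    {
        var records = new List<string>();
        StringBuilder sb = new StringBuilder();
        foreach (string page in pages)
        {
            sb.Append(page);          // accumulate the current page only
            records.Add(sb.ToString());
            sb.Length = 0;            // clear so the next record starts empty
        }
        return records;
    }

    static void Main()
    {
        // hypothetical stand-ins for downloaded page content
        foreach (string r in BuildRecords(new[] { "record1", "record2", "record3" }))
            Console.WriteLine(r);
    }
}
```

Without the `sb.Length = 0;` line, the three records printed would be "record1", "record1record2", and "record1record2record3", which is exactly the repetition described above.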

Thanks, it is working properly now...

My friend,
after writing the HTML of the current URL to the console window, you need to clear what is inside the "sb" StringBuilder.
You can do that with the builder's Remove method, which takes two arguments: a starting index and a count of characters to remove.
Set the starting index to 0 and the count to the current length of the builder, using the "sb.Length" property.
Just after printing, at the end of the loop body, write:

sb.Remove(0, sb.Length);

and you are done.
THANKS
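For what it's worth, a minimal sketch showing that Remove(0, sb.Length) empties the builder in place, with the same effect as setting sb.Length to 0:

```csharp
using System;
using System.Text;

class RemoveDemo
{
    static void Main()
    {
        StringBuilder sb = new StringBuilder("first page");
        Console.WriteLine(sb.Length);   // length of "first page"

        // Remove takes a starting index and a count (a length, not an end index)
        sb.Remove(0, sb.Length);        // same effect as sb.Length = 0

        Console.WriteLine(sb.Length);   // now 0: the builder is empty
    }
}
```

Note that the second argument is a character count, so passing sb.Length as the count removes everything from index 0 onward.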
