Hi all,
I'm looking to program a web crawler that will search a large website for specific strings. It seems wasteful to store each downloaded page locally and then parse it for the strings, but I don't know how else I could do it.
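To make the in-memory idea concrete, here is a minimal sketch (my own, not a definitive design) using only the Python standard library: each page is fetched into a string, searched immediately, and then discarded, so nothing is ever written to disk. The function and variable names (`scan_page`, `crawl`, `needle`, `max_pages`) are just illustrative choices.

```python
import urllib.request
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags while the page is in memory."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def scan_page(html, needle):
    """Search one page held in memory; return (found, outgoing links)."""
    parser = LinkExtractor()
    parser.feed(html)
    return needle in html, parser.links

def crawl(start_url, needle, max_pages=100):
    """Breadth-first crawl confined to start_url's host.

    Each page lives only as a string in memory; after scanning,
    it is dropped and only the URLs of matches are kept.
    """
    host = urlparse(start_url).netloc
    queue = deque([start_url])
    seen = {start_url}
    hits = []
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        try:
            with urllib.request.urlopen(url) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue  # skip pages that fail to download or decode
        found, links = scan_page(html, needle)
        if found:
            hits.append(url)
        for link in links:
            absolute = urljoin(url, link)
            if urlparse(absolute).netloc == host and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return hits
```

The key point is that `scan_page` works on a string, so the download never needs to touch the filesystem. In practice you'd also want to respect robots.txt, rate-limit requests, and maybe restrict the search to the page's visible text rather than the raw HTML, but the structure above shows that local storage isn't required.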
Any advice would be much appreciated!