I want to download a series of files, say 150 of them, whose download links are identical except for a number. That is, the general download link is:

http://www.example.com/content/download_content.php?content_id=#
where # is a number between 1 and 150.

Which of the following works best? Suggestions are welcome too.. :) Feel free to suggest code in other languages, as I'm not that good with Linux.. but if Linux is well suited for this kind of thing, that's alright.. Please give me a brief explanation of how to run these bash snippets. Thanks...

for i in $(seq 1 150); do wget "http://www.example.com/content/download_content.php?content_id=$i" -O "$i.html"; done
curl "http://www.example.com/content/download_content.php?content_id=[1-150]" -o "#1.html"
#!/bin/sh
i=1
while [ "$i" -le 150 ]; do
  wget -O "$i.out" "http://www.example.com/content/download_content.php?content_id=$i"
  i=$((i + 1))
done
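Since the question invites code in other languages, here is a minimal Python sketch of the same loop, using only the standard library's urllib.request. The URL pattern and the 1..150 range are taken from the question; the function names and the <id>.html output naming are assumptions for illustration:

```python
import urllib.request

# URL template from the question; {} is replaced by the content_id number.
BASE = "http://www.example.com/content/download_content.php?content_id={}"

def build_urls(first=1, last=150):
    """Return the list of download URLs for content_id values first..last."""
    return [BASE.format(i) for i in range(first, last + 1)]

def download_all(first=1, last=150):
    """Fetch each URL and save the response body as <id>.html."""
    for i in range(first, last + 1):
        url = BASE.format(i)
        with urllib.request.urlopen(url) as resp, open(f"{i}.html", "wb") as out:
            out.write(resp.read())
```

Calling download_all() performs the actual network requests; build_urls() is split out so you can inspect the URLs first.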

Hi mirasravi!

I'm not *exactly* sure what you're looking for, but personally I like to use as few commands as possible for something like this. With that in mind, your curl example seems the most efficient, though the syntax is a bit off. I think it should be something like this:

curl "http://www.example.com/content/download_content.php?content_id=[1-150]" -o "file#1.html"

That being said, there are endless ways to do loops in all kinds of scripting...

An exercise that I put to some of my tech support team was to come up with a way to use nested loops to print the words to weebl's badger song in as many different scripting/programming languages as possible :) You can do the same thing with your script here, if you're just looking for a learning tool. Otherwise, I think your curl line is probably the most efficient and straightforward.

I hope this helps!
-G
