I am developing a module that downloads PDF files to a location. For the download I use streams to read from the network and write to the file. Initially each download ran on its own thread, so for pausing and resuming I would simply pause and resume that thread. But now the number of download threads is limited, so I obviously cannot pause a thread and hold on to it while some other file's download waits. So my question is: how exactly can I implement pause and resume within the confines of C#?


I don't see why the number of download threads is limited, unless you have users consuming the threads or something, in which case it would be more a matter of limiting them to n threads in the API. Just curious: why the limitation? Is it a user thing? Are you downloading hundreds of files or something?

My initial thought is that if the number of threads is limited, you could serialize the paused tasks to a file — CSV or, my preference, XML. Serialize the task object itself, not the stream: just the metadata needed to restart the download. Long-running tasks could then be demoted, or paused and persisted to disk, provided they aren't time-sensitive, since the longer a file takes to download, the more it ties up your obviously limited bandwidth. Store the byte index within the file, plus some other info such as the file size, to verify the file hasn't changed since you last started downloading it. If the machine performing the downloads never needs to be rebooted, you can simply keep the tasks in an in-memory collection. Perhaps a circular buffer or similar structure would keep the old tasks from sitting perpetually at the bottom of the list when there are bandwidth problems.
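To make the serialization idea concrete, here is a minimal sketch of what the persisted task metadata might look like. The class and property names are hypothetical — adapt them to whatever your task object actually holds:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// Hypothetical metadata for a paused download -- just enough to resume later.
public class DownloadTask
{
    public string Url { get; set; }
    public string LocalPath { get; set; }      // e.g. "report.pdf.part"
    public long BytesReceived { get; set; }    // byte offset to resume from
    public long ExpectedLength { get; set; }   // to detect a changed file

    public void Save(string path)
    {
        var serializer = new XmlSerializer(typeof(DownloadTask));
        using (var stream = File.Create(path))
            serializer.Serialize(stream, this);
    }

    public static DownloadTask Load(string path)
    {
        var serializer = new XmlSerializer(typeof(DownloadTask));
        using (var stream = File.OpenRead(path))
            return (DownloadTask)serializer.Deserialize(stream);
    }
}
```

On restart you would compare `ExpectedLength` against the server's reported length before resuming; if they differ, start the download over.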

If a file is clearly more important than the others, you can use an enum as a priority flag. I am not even going to lie: the following code has been conveniently lifted from Dot Net Perls:

    enum Importance
    {
        None,
        Trivial,
        Regular,
        Important,
        Critical
    }

Restarting the download at the right location in the file requires some HTTP header work: the server has to support range requests. I found some source code, but am unsure whether the site is reputable, so you should probably search for a better example. I would also name the file you download with a ".part" suffix, or something like what browsers do, until the file is actually complete, then rename it when done so you don't accidentally open a malformed file.
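Here is a rough sketch of resuming via the HTTP `Range` header, using `HttpWebRequest.AddRange`. This is illustrative, not production code — in particular, a real implementation should also validate the file against a stored length or ETag before appending:

```csharp
using System;
using System.IO;
using System.Net;

class ResumeDownloader
{
    // Resumes a download from the current length of the ".part" file,
    // assuming the server supports HTTP Range requests.
    public static void Resume(string url, string partPath)
    {
        long offset = File.Exists(partPath) ? new FileInfo(partPath).Length : 0;

        var request = (HttpWebRequest)WebRequest.Create(url);
        if (offset > 0)
            request.AddRange(offset);   // sends "Range: bytes=<offset>-"

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // 206 PartialContent means the server honored the range;
            // 200 OK means it ignored it, so we must start over.
            bool partial = response.StatusCode == HttpStatusCode.PartialContent;

            using (var input = response.GetResponseStream())
            using (var output = new FileStream(partPath,
                partial ? FileMode.Append : FileMode.Create))
            {
                input.CopyTo(output);
            }
        }

        // Rename only once the file is complete.
        File.Move(partPath, partPath.Replace(".part", ""));
    }
}
```

Pausing then becomes trivial: stop reading, close the streams, persist the byte offset, and resume later from that offset.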

You could have an additional thread or timer event keep track of when to move on to the next tasks and reassign. Everybody moves down by n indexes!
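As a sketch of that rotation idea (the scheduler shape here is entirely hypothetical), a `System.Threading.Timer` can periodically cycle the front of a queue to the back so waiting downloads get a turn:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical round-robin scheduler: a timer periodically rotates the
// queue so downloads waiting at the back eventually reach the front.
class RoundRobinScheduler
{
    private readonly Queue<string> tasks = new Queue<string>();
    private readonly object gate = new object();
    private Timer timer;   // kept as a field so it isn't garbage-collected

    public void Add(string taskId)
    {
        lock (gate) tasks.Enqueue(taskId);
    }

    public void Start(int activeSlots, TimeSpan interval)
    {
        timer = new Timer(_ =>
        {
            lock (gate)
            {
                // Move the first `activeSlots` tasks to the back of the queue.
                // (Here you would pause the demoted downloads, persisting
                // their byte offsets, and resume the tasks now at the front.)
                for (int i = 0; i < activeSlots && tasks.Count > 1; i++)
                    tasks.Enqueue(tasks.Dequeue());
            }
        }, null, interval, interval);
    }
}
```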

I hope this helps. If you go into more detail about the requirements/code, people could be of more help.

Like I said, initially each download would happen on its own thread, so if 1000 files were downloading at the same time, 1000 threads would be working. This would slow down the entire system. I guess this should answer your curiosity? Let me know if my understanding is wrong here.

For the pausing and resuming you described, is there any library I can use?