I have a simple web server built, and I created a status window that runs on a separate thread. If I have a list of structs being updated by the main thread, is it OK to read (only read) from those structs from the second thread for the purpose of getting the status of the transfers? Can I do it without using a critical section, as long as I don't write to it?

Thanks for any help =)

First, a critical section is not really the synchronization primitive you need for this; you want a mutex (may I recommend the one from Boost.Thread).

Second, you say you have a _list_ of structs. Is that loose terminology, or is it an actual std::list? If it is a std::list or another kind of linked list, then it is marginally OK to read without protecting with a mutex, because existing nodes never move in memory. If it is a std::vector, any other dynamic array, a std::map, or anything like that, then NO, it is not OK to read without a mutex. The reason is that in the middle of your read operation (even a very short one), the scheduler might switch to your main thread, which could reallocate the memory (say, to insert elements); when control comes back to the read operation, the addresses it is using will be stale and your application will go haywire.
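To make that concrete, here is a minimal single-threaded sketch of the reallocation hazard; imagine a second thread still holding the old pointer while the main thread does the push_backs (the function name is made up for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Returns true if appending n elements moved the vector's buffer,
// i.e. any pointer taken before the appends would now be dangling.
bool buffer_moved_after_append(std::size_t n) {
    std::vector<int> v;
    v.push_back(0);
    const int* before = v.data();          // a "reader" thread would hold this address
    for (std::size_t i = 0; i < n; ++i)
        v.push_back(static_cast<int>(i));  // the "writer" keeps inserting
    return v.data() != before;             // true => the old address points to freed memory
}
```

In the threaded case, the reader would dereference the old address after the reallocation, which is exactly the corruption described above.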

So, as a pretty general rule, if you share any sort of dynamic memory (anything involving new/delete) between threads, shield all read-only, write-only, and read-write operations on that memory by locking a mutex first, especially if the memory is highly dynamic (e.g. many elements added and erased per second). There are cases where that rule is a bit over-the-top and could be relaxed, but you always need to be careful, and the fact that you are posting this thread shows that you are careful, which is good!

Anyways, if you are still not sure whether you need it, give more details about the nature of the shared memory (like the actual code of that shared list or array or whatever) and some idea of how you are using it.

OK, thanks for the advice. It looks like I'm going to have to do things the other way, though. It isn't a std::list. What I have basically done is allocate an array of these:

struct SOCKET_EX
{
	SOCKET socket;
	string remoteIp;
	int index; //index of this struct in the array
	
	ifstream file;
	string fileName;
	DWORD fileSize;
	DWORD totalBytes;
};

Then I created a std::stack<int> that contains as many indices as there are SOCKET_EX structs in the array. When a user connects, the server takes the index from the top of the stack, pops the stack, and uses that as the index into the array. This way, I don't have to iterate through the array every time I need to find a free socket to connect on or close a socket. It also guarantees that each socket struct has a fixed memory location, which lets me pass it around by address rather than by index.
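Roughly, the scheme is this (a cut-down sketch; `SocketSlot`, `SlotPool`, and the method names are made up, and the real struct of course holds the SOCKET, ifstream, and so on):

```cpp
#include <cassert>
#include <stack>
#include <vector>

// hypothetical cut-down stand-in for SOCKET_EX
struct SocketSlot { int index; };

class SlotPool {
    std::vector<SocketSlot> slots;   // fixed array; addresses never change
    std::stack<int> free_indices;    // indices of currently unused slots
public:
    explicit SlotPool(int n) : slots(n) {
        for (int i = n - 1; i >= 0; --i) free_indices.push(i);
    }
    // called when a client connects: O(1), no search through the array
    SocketSlot* acquire() {
        if (free_indices.empty()) return nullptr;  // pool exhausted
        int i = free_indices.top();
        free_indices.pop();
        slots[i].index = i;
        return &slots[i];   // stable address for the connection's lifetime
    }
    // called when the socket closes: O(1)
    void release(SocketSlot* s) { free_indices.push(s->index); }
};
```

Both acquire and release are O(1), and each slot's address stays fixed for the lifetime of the connection.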

Anyways, for simplicity, what I was thinking of doing was to have my second window just poll the structures of any active transfers and retrieve fileSize and totalBytes... but it's starting to sound like a bad idea, because the majority of the files sent will be very small, so in most cases a socket will only be active for as long as it takes to send a simple webpage.

I'm trying to monitor these transfers with as little locking and communication between threads as possible. I am using I/O completion ports for this project, so if I start locking all kinds of things it will kind of defeat the purpose =/ I'm not really sure of the best way to do this.

Thanks again for your help


Well, from what I see, if you don't need to use the std::stack<int> in the second thread and you are only using the struct array, then you don't need a mutex. For example, you have a redundant variable in the struct: the "index" variable. You could set that index to -1 for sockets that are not in use; then your second thread can simply traverse the array and check the index, and if it is greater than or equal to zero, display whatever you like (fileSize, totalBytes, ...). This way the mutex is not necessary.
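A sketch of that idea (the cut-down `Slot` struct and `count_active` are invented for illustration; it assumes the main thread sets `index = -1` before recycling a slot):

```cpp
#include <cassert>

// hypothetical cut-down SOCKET_EX for illustration
struct Slot {
    int index;               // -1 means the slot is not in use
    unsigned long fileSize;
    unsigned long totalBytes;
};

// status thread: walk the fixed-size array and count only active slots
int count_active(const Slot* slots, int n) {
    int active = 0;
    for (int i = 0; i < n; ++i)
        if (slots[i].index >= 0)   // in use: OK to display fileSize/totalBytes
            ++active;
    return active;
}
```

Because the status thread only ever walks a fixed-size array, no memory is reallocated underneath it; this is one of the relaxed cases of the general rule above.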

However, the overhead associated with locking a mutex, and, as you say, the communication between threads, is ridiculously small compared to the network bottleneck. So don't worry about that: the network communication is going to be the main bottleneck, no doubt, regardless of how small the files are.

Finally, your strategy seems quite alright to me, so far.

All things considered, I think the best option may be to do things the other way. I was kind of thinking about this the wrong way.

>>network communication is going to be the main bottleneck

I think I understand now that the real lag will come from the worker threads sitting there waiting for the client to respond or for the data to finish sending. That means that if I do things the other way around and just create a mutex for the status listview, it won't really interfere with the running of the server. I can have the worker threads lock it and add an item or update the status after each chunk, or whatever needs to be done.

Assuming I have understood this correctly, thanks for your help =D

The only thing you have to make sure of is that the second thread is not just constantly checking the state; you have to leave the mutex unlocked long enough for the worker threads to make progress. So:

//This would be really bad:
void StatusCheckThreadFunction() {
  boost::mutex::scoped_lock lock(status_mutex); // mutex held for the whole loop!
  while (still_running) {
    CheckSocketStatus();   // check the status of the sockets
    UpdateListView();      // print the status in the listview
  }
}                          // mutex only released here, so the workers starve

//This is what it should be:
void StatusCheckThreadFunction() {
  while (still_running) {
    {
      boost::mutex::scoped_lock lock(status_mutex);
      CheckSocketStatus();
      UpdateListView();
    } // mutex released at the end of this scope...
    // ...then sleep for, say, 100 ms, leaving the mutex unlocked so the
    // worker threads can work:
    boost::this_thread::sleep(boost::posix_time::milliseconds(100));
  }
}

>>The efficient (and elegant) way to do this is to use a condition_variable.

First, can you explain how condition variables can actually help in the OP's context?

Second, you say efficient? I tried to use condition variables from Boost in a massively multi-threaded application I was developing, and they were so slow (the wake-up latencies are ridiculously long) and inefficient that they were completely inapplicable for CPU-bound MMT apps. I had to resort to a simpler method with a yielding loop and double-buffered variables as synchros (and a mutex lock, of course). So maybe they are applicable to a network-bound or user-bound application, where the efficiency requirements are extremely low (given the network or user-event bottleneck) and they are elegant and simple to use, but they are certainly not efficient.

Thanks for that article, very informative! I didn't know much about condition variables beyond the big performance problem described in that article; I had experienced it myself, but never looked into it. In any case, as I said earlier, condition variables are best suited for a task queue (as in the article), but in CPU-bound MMT apps it would be nightmarish to use them, so I'll keep my fast yielding loop.

For the OP's problem, though, condition variables could be used, although I still believe a simple mutex will do, because it is not a master/worker-queue implementation but just a master thread and a status-checker thread.

I didn't read that whole article, but I got a few good points from it.

I originally wanted a separate window on a separate thread to display the status of any transfers, and to have my server skip the updates when the window was closed. Implementing that turned out to be much more complicated than it was worth. Conditionally updating the list also created a lot of complications, involving a lot of pointless iterations of the list.

I settled for a tab strip with a traffic tab for the UI. For the updates, I just defined a minimum file size for the status list. My internet connection only has 1 Mbit of upstream, so I figured a minimum of 131072 bytes (1 second at top speed) would work OK for now.

The two main things I picked up from what I read in that article were dealing with mutex contention and waiting for threads to wake up. I finally decided to create a critical section with a spin count for the list view. When a qualifying transfer starts or finishes, it is added to or removed from the list, and it is updated once per chunk sent.

I have a question, though, for Mike. You said that I should not use a critical section, but I don't see how a mutex would work any better than what I have done. Also, MSDN says that a critical section IS a mutex, just one that "can be used only by the threads of a single process".

Anyways, back to coding.
Nick

You are right, you can use both for either purpose; they are both mechanisms for mutual exclusion of threads. So I was wrong in saying that the critical section was not applicable, because, frankly, I have never used them; I understood them as a way to just lock a piece of code (which is what they were years ago when I was still doing pure Win32 applications). Now, for cross-platform coding, I work exclusively with Boost.Thread for all sync functionality, and it does not include critical sections, only mutexes.

Your question and the MSDN page on critical sections piqued my curiosity about the efficiency of mutexes vs critical sections. It is indeed expected that critical sections are faster, because they try to avoid kernel calls with a spin count. However, looking at the source code of Boost.Thread, I saw that the win32 mutex version is implemented with a spin count to avoid kernel calls, just like critical sections. Read this nice article about it. So my recommendation: use the Boost.Thread library; this way you are pretty much guaranteed the best performance, flexibility, and platform independence.
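To illustrate what a spin count buys you, here is a portable toy lock that spins in user mode before yielding. This is not how Boost.Thread or CRITICAL_SECTION is actually implemented (their fallback is a proper kernel wait, not a yield, and all names here are invented); it is only a sketch of the idea:

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// Toy spin-count lock: try to take the lock in a user-mode busy loop a
// bounded number of times before giving up the time slice.
class SpinCountLock {
    std::atomic<bool> locked{false};
    int spin_count;
public:
    explicit SpinCountLock(int spins) : spin_count(spins) {}
    void lock() {
        int spins = 0;
        // exchange returns the previous value: false means we got the lock
        while (locked.exchange(true, std::memory_order_acquire)) {
            if (++spins >= spin_count) {    // spin budget exhausted:
                std::this_thread::yield();  // let another thread run
                spins = 0;
            }
        }
    }
    void unlock() { locked.store(false, std::memory_order_release); }
};
```

Under low contention the lock is usually acquired during the spin phase, so the expensive transition to the kernel is avoided entirely; that is the whole point of the spin count.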

Okay.

>>you have a redundant variable in the struct, that is, the "index" variable.

I finally see what you mean. I came across a Winsock article on MSDN that I had missed somehow, and finally learned that what I was looking for is called a "lookaside list".

some quick googling produced this nice article from codeproject:
http://www.codeproject.com/KB/trace/lookaside.aspx

After quickly looking it over, it occurred to me that the entire array I was using was redundant. All I had to do was push the memory address of each struct onto the linked list at initialization, instead of maintaining a list of indices into an array. This is also advantageous, of course, because the linked list can easily be expanded to accommodate more elements while maintaining fixed memory locations, which the array cannot.
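In code, the simplification amounts to something like this (made-up names; a cut-down `Conn` stands in for SOCKET_EX, and a real lookaside list would also cap its depth and synchronize access):

```cpp
#include <cassert>
#include <stack>

// cut-down stand-in for SOCKET_EX
struct Conn { unsigned long fileSize; unsigned long totalBytes; };

// A user-mode "lookaside list" is essentially a free list of
// pre-allocated nodes, handed out and taken back by address.
class LookasideList {
    std::stack<Conn*> free_list;   // addresses of ready-to-use nodes
public:
    explicit LookasideList(int n) {
        for (int i = 0; i < n; ++i) free_list.push(new Conn());
    }
    Conn* acquire() {
        if (free_list.empty())
            return new Conn();      // grow on demand; a fixed array can't do this
        Conn* c = free_list.top();
        free_list.pop();
        return c;                   // address stays valid for the node's lifetime
    }
    void release(Conn* c) { free_list.push(c); }
    ~LookasideList() {
        while (!free_list.empty()) { delete free_list.top(); free_list.pop(); }
    }
};
```

Because nodes are allocated individually, their addresses never move, and the pool can grow past its initial size, which is exactly what the fixed array could not do.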

>>The only thing you have to make sure is that the second thread is not just constantly checking the state, you have to leave enough time of unlocked mutex for the worker thread.

I originally thought that it would be easier to just update the status listview after sending each chunk, but even with one file going at 120 kB/s (15 updates per second with 8192-byte chunks) performance was suffering. So I decided to do it your way, which even allowed me to omit the if statement for the update after each chunk and just lock the transfers' mutex every 100 ms to check the status. I just used SetTimer() in the main thread with a TimerProc to do the checking.

so thanks again for the advice =)

Now, onward to my next problem...
When I started this project, VS was reporting 5 threads, which is how many I started based on the number of cores in my machine, plus 1 for the main window. After adding some extra features, my output window now reads this when I close the app :'(

The thread 'Win32 Thread' (0x16d8) has exited with code 0 (0x0).
The thread 'Win32 Thread' (0xa50) has exited with code 0 (0x0).
The thread 'Win32 Thread' (0xa34) has exited with code 0 (0x0).
The thread 'Win32 Thread' (0x101c) has exited with code 0 (0x0).
The thread 'Win32 Thread' (0x11a8) has exited with code 0 (0x0).
First-chance exception at 0x75469617 in WebServer.exe: 0x80010108: The object invoked has disconnected from its clients.
First-chance exception at 0x75469617 in WebServer.exe: 0x80010012: 0x80010012.
First-chance exception at 0x75469617 in WebServer.exe: 0x80010012: 0x80010012.
First-chance exception at 0x75469617 in WebServer.exe: 0x80010108: The object invoked has disconnected from its clients.
First-chance exception at 0x75469617 in WebServer.exe: 0x80010012: 0x80010012.
The thread 'Win32 Thread' (0x13b4) has exited with code 0 (0x0).
'WebServer.exe': Unloaded 'C:\Windows\System32\upnp.dll'
'WebServer.exe': Unloaded 'C:\Windows\System32\ssdpapi.dll'
The thread 'Win32 Thread' (0x10ac) has exited with code 0 (0x0).
The thread 'Win32 Thread' (0x126c) has exited with code 0 (0x0).
The thread 'Win32 Thread' (0x1564) has exited with code 0 (0x0).
The thread 'Win32 Thread' (0xf40) has exited with code 0 (0x0).
'WebServer.exe': Unloaded 'C:\Windows\System32\msxml3.dll'
'WebServer.exe': Unloaded 'C:\Windows\System32\npmproxy.dll'
The thread 'Win32 Thread' (0x11a0) has exited with code 0 (0x0).
The thread 'Win32 Thread' (0x170c) has exited with code 18 (0x12).
The thread 'Win32 Thread' (0x1748) has exited with code 18 (0x12).
The thread 'Win32 Thread' (0x120c) has exited with code 18 (0x12).
The thread 'Win32 Thread' (0xaec) has exited with code 18 (0x12).
The thread 'Win32 Thread' (0x1050) has exited with code 18 (0x12).
The thread 'Win32 Thread' (0x10b8) has exited with code 18 (0x12).
The thread 'Win32 Thread' (0xb08) has exited with code 18 (0x12).
The thread 'Win32 Thread' (0x13f8) has exited with code 18 (0x12).
The thread 'Win32 Thread' (0x16b8) has exited with code 18 (0x12).
The thread 'Win32 Thread' (0x15b4) has exited with code 18 (0x12).

I have no idea where all these extra threads are coming from...I guess I have some work to do :icon_rolleyes:

All these extra threads are there because a GUI is built from several components that are supposed to appear to run simultaneously, so it is not uncommon to see a GUI with tons of threads running. The underlying mechanisms of many GUI elements involve passing window messages between semi-sleeping threads. That's why, for example, when some applications freeze because of an intense computation on the main thread, some items in their windows still seem to work (the display updates correctly, buttons push down when you click them, etc.); those components are built to run on their own threads. In fact, real-time operation and apparent simultaneity were the original purpose of multi-threading, and they still are, even on multi-core computers (where parallel computing is much more relevant to increasing performance).
