Hi all,
I need advice on writing a function for a multi-threaded environment.

Let's say I have a function that squares an integer.

int Square(int num)
{
    // May do other work that takes a long time to complete.
    return num*num;
}

This function is called from multiple threads, e.g. ThreadA and ThreadB, and sometimes both threads may call Square at the same time.
My concern is: how do I ensure that Square behaves correctly when both threads call it simultaneously?

Btw, I'm using C++/CLI.

Please advise. Thanks.

If the function does not read or modify data shared with other threads (in memory or on disk), then you don't need to worry about thread safety. Race conditions and concurrent access/modification only matter when threads share data, and this function has none: it only does computation on its argument.

If multiple threads do need to read and modify shared data concurrently, then you need a locking mechanism. Thread-safe modules provide this out of the box, or you can implement it yourself; there is plenty of material about it online.

Unless Square is modifying a global variable, you don't really have to worry about such issues. But as a general answer, you can use a lock to ensure that only one thread executes a critical section at a time.
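To illustrate the lock idea, here is a minimal sketch in standard C++11 (the names g_counter, Increment, and RunTwoThreads are made up for the example; in C++/CLI you could get the same effect with System::Threading::Monitor or msclr::lock):

```cpp
#include <mutex>
#include <thread>

// Hypothetical shared counter guarded by a mutex: only one thread at a
// time may execute the critical section inside Increment().
static int g_counter = 0;
static std::mutex g_counterMutex;

void Increment(int times) {
    for (int i = 0; i < times; ++i) {
        std::lock_guard<std::mutex> lock(g_counterMutex); // unlocks at scope exit
        ++g_counter;
    }
}

// Runs two threads that each increment the counter `times` times and
// returns the final count; with the lock it is always exactly 2 * times.
int RunTwoThreads(int times) {
    g_counter = 0;
    std::thread a(Increment, times);
    std::thread b(Increment, times);
    a.join();
    b.join();
    return g_counter;
}
```

Without the lock_guard line, the two threads could interleave their read-increment-write steps and lose updates; with it, the result is deterministic.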

If you write functions that are re-entrant (or almost purely functional), then you don't have to worry about anything. The main characteristic of a re-entrant function is that it has no dependence whatsoever on data from outside the function (excluding, of course, its input parameters and read-only access to global constants), and no side effects: it generates some outputs from the given inputs, and that's it; it doesn't modify any global variables or anything of the sort. In that case, once the function is given its inputs, it can, by definition, run from start to finish without affecting anything else in the application and without being affected by any changes in the application's state. This means you don't have to worry one bit about using such a function in a multi-threaded environment.

The fact that two or more threads are executing the same function at the same time makes no difference; each thread has its own execution stack for the function, so each execution is entirely separate from the others.

What you have to worry about is SHARED DATA.

This innocent-looking function that caches the Fibonacci sequence is a typical example of a hidden side effect:

#include <vector>

int GetFibonacciNumber(int i) {
  static std::vector<int> seq{1, 1};
  while ((int)seq.size() <= i)  // grow the cache up to index i
    seq.push_back(seq[seq.size() - 1] + seq[seq.size() - 2]);
  return seq[i];
}

Here, the Fibonacci sequence is cached in a static vector that is unique across all invocations of the function, and is thus a data element shared between any threads executing this function. If you are unlucky enough that two threads execute this function at the same time and both start adding elements to the sequence, you can end up with a thoroughly corrupted cache. Because most of the time this won't happen, this is a silent bug: it can go unnoticed for a long time and then cause sporadic, extremely weird effects or crashes. This is typical of bugs related to concurrent, unprotected access to data shared between threads.
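One straightforward way to make this cache thread-safe is to guard the shared vector with a mutex; a sketch in standard C++11:

```cpp
#include <mutex>
#include <vector>

// Thread-safe version of the cached-Fibonacci function: a mutex serializes
// all access to the shared static vector, so two threads can never grow
// (or read) the cache at the same time.
int GetFibonacciNumber(int i) {
    static std::vector<int> seq{1, 1};
    static std::mutex seqMutex;
    std::lock_guard<std::mutex> lock(seqMutex); // held until return
    while ((int)seq.size() <= i)
        seq.push_back(seq[seq.size() - 1] + seq[seq.size() - 2]);
    return seq[i];
}
```

Note that this serializes the whole function; if lookups vastly outnumber cache growth, a reader-writer lock could reduce contention, but the plain mutex is the simplest correct fix.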

Any data that is global, or otherwise shared between more than one thread of execution, is susceptible to being accessed simultaneously by multiple threads. That can quickly become a huge issue. The basic solution is to protect all accesses to shared data with mutexes (short for "mutual exclusions"), a safe mechanism that stops (or locks out) all other threads from accessing certain data (or doing certain operations) while one thread is accessing the data (and thus the name "mutual exclusion"). Of course, that's far from being all there is to it, because there are many things to worry about, like the performance problems that occur when too much access to shared data is required and all the threads end up spending way too much time just waiting for one another. Then you have race conditions (weird interactions between threads via the shared data) and deadlocks (when two or more threads are each waiting for the other and are thus stuck forever). To solve these problems there are many different techniques that are much fancier than simple mutexes (like spin-locks, atomics, critical sections, non-blocking algorithms, etc., in addition to a number of OS-specific features for concurrent programming, like programmable interrupts).
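As a taste of the atomics mentioned above, here is a sketch in standard C++11 (the helper names Hit and CountConcurrentHits are made up) of a shared counter that needs no mutex at all:

```cpp
#include <atomic>
#include <thread>

// A lock-free shared counter: std::atomic makes each increment an
// indivisible read-modify-write, so no updates can be lost.
std::atomic<int> g_hits{0};

void Hit(int times) {
    for (int i = 0; i < times; ++i)
        g_hits.fetch_add(1, std::memory_order_relaxed); // atomic increment
}

// Runs two threads hammering the counter and returns the final value,
// which is always exactly 2 * times.
int CountConcurrentHits(int times) {
    g_hits = 0;
    std::thread a(Hit, times);
    std::thread b(Hit, times);
    a.join();
    b.join();
    return g_hits.load();
}
```

With a plain `int` instead of `std::atomic<int>`, the same code would be a data race and could report fewer hits than were made.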

Ideally, of course, you want to avoid having to worry about any of this, and the best way to minimize concerns about concurrent access to shared data is to minimize the amount of shared data (or the number of threads, but that often defeats the purpose of your software). This is something to always keep in mind when creating software designs: minimize data interdependence between components, which is a good guideline whether you are working on a multi-threaded program or not.

To make this simple: if your function doesn't read or modify a global or local static variable, then you can have as many instances of it running at once as you want. All non-static local data, including the incoming arguments, live on the stack, and each thread runs with its own stack. So, in effect, your example is thread-safe.
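A small sketch in standard C++ (the helper SumOfConcurrentSquares is made up for the example) showing two threads calling Square concurrently, each getting its own correct result because all of Square's state lives on the calling thread's stack:

```cpp
#include <thread>

// Square() from the question, unchanged: a pure function with no
// shared state, so concurrent calls cannot interfere.
int Square(int num) {
    return num * num;
}

// Two threads call Square() at the same time; each result lands in its
// own variable and both are always correct.
int SumOfConcurrentSquares(int x, int y) {
    int rx = 0, ry = 0;
    std::thread a([&rx, x] { rx = Square(x); });
    std::thread b([&ry, y] { ry = Square(y); });
    a.join();
    b.join();
    return rx + ry;
}
```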
