Overview

This page describes common multithreading problems and their solutions, and how those problems are handled in MediaPortal 2 (MP2).

You can read the whole page or jump directly to the MP2 Threading Policy section.

Details

When only a single thread is used, all jobs in an application need to be serialized one after another. This leads to a) convoluted code, because no job may keep the single thread for long without stalling other jobs, and b) "stuttering" execution of individual jobs, because other jobs might take too long to execute. So we will use multiple threads to execute different jobs independently of each other.

On the other hand, multithreading is a complicated subject and leads to many problems when used without following some rules. This page will focus on deadlocks and race conditions and how they are avoided in the context of MediaPortal 2. We will present rules to accomplish that. Those rules absolutely need to be observed throughout the system, except in rare cases where correctness can be proved.

General problems when using multiple threads

Simple, naive coding

When code isn't written to be thread-safe, race conditions can occur:

/// <summary>
/// Example class which can produce race conditions when used from multiple threads.
/// </summary>
public class Even_Unsafe
{
  protected int even = 0; // This variable should always be even

  void Add()
  {
    even = even + 1;
    even = even + 1;
  }

  void Multiply()
  {
    even = even * 2;
  }
}

Executed in a single-threaded system, this class will always meet its invariant that the "even" variable is always even. But when both methods are executed by different threads at the same time, "even" can become odd after both methods have finished their execution. This is called a "race condition". There are many more examples of this on the web.
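One possible bad interleaving can be spelled out step by step (a deterministic re-enactment on a single thread, not actual concurrent execution):

```csharp
using System;

// Re-enactment of one interleaving of Add() and Multiply(): thread B's
// Multiply() statement executes between thread A's two increments.
int even = 0;     // invariant: should always be even

even = even + 1;  // thread A: first half of Add()  -> even == 1 (odd!)
even = even * 2;  // thread B: Multiply() runs now  -> even == 2
even = even + 1;  // thread A: second half of Add() -> even == 3

Console.WriteLine(even); // prints 3 - both methods "finished", invariant broken
```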

Protecting critical code sections

So we use locks (mutexes) to prevent other threads from executing the same code (or other code which is protected by the same lock) at the same time:

/// <summary>
/// Correctly synchronized class which doesn't produce race conditions when used
/// from multiple threads.
/// </summary>
public class Even
{
  protected object _syncObj = new object(); // Synchronization object ("lock")
  protected int even = 0; // This variable should always be even

  void Add()
  {
    lock (_syncObj)
    {
      even = even + 1;
      even = even + 1;
    }
  }

  void Multiply()
  {
    lock (_syncObj)
    {
      even = even * 2;
    }
  }
}

Problems with locks in multithreading environments

If every critical code section is protected by a lock, race conditions are prevented in that part of the code. But using too many locks can also cause problems. First of all, the lock statement is expensive; strictly speaking, it is the entrance into such a protected region that is expensive. So you should generally prefer one big protected region over multiple small regions. BUT when calling other code which also contains protected regions, you can produce a lock cascade. A lock cascade occurs when the runtime environment is instructed to acquire a lock while another lock is already held by the same thread, either in the same stack frame or in any of the calling stack frames.
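A lock cascade can be sketched like this (ComponentA and ComponentB are invented names for illustration):

```csharp
using System;

// ComponentA.DoWork() acquires _lockA and, while still holding it, calls into
// ComponentB, which acquires its own _lockB: the same thread now asks for a
// second lock while the first one is held - a lock cascade.
public class ComponentB
{
    private readonly object _lockB = new object();

    public void Update()
    {
        lock (_lockB) // second lock of the cascade
        {
            // ... update B's internal state ...
        }
    }
}

public class ComponentA
{
    private readonly object _lockA = new object();
    private readonly ComponentB _b = new ComponentB();

    public void DoWork()
    {
        lock (_lockA) // first lock
        {
            _b.Update(); // call into foreign code while _lockA is held
        }
    }
}
```

A single thread running `DoWork()` is harmless, but as soon as another thread acquires the same two locks in the opposite order, the deadlock situation described in the next section becomes possible.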

Deadlocks

If two lock cascades are requested by two different threads at the same time and both threads request the same locks but in a different order, a deadlock can occur. In the simplest deadlock situation, thread T1 already holds lock L1 and is about to request lock L2, while thread T2 already holds lock L2 and is about to request lock L1.

There are several solutions you can read about on the internet to avoid that problem. One of them suggests marking all locks with unique numbers and writing all code to request its desired locks in ascending order of lock number. In the example above, thread T2 would violate that guideline, so its code would need to be rewritten to first request L1 and then L2, which prevents the deadlock situation.
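Under that guideline, the fixed version could be sketched like this (lock numbering per the convention above; both threads take L1 before L2):

```csharp
using System;
using System.Threading;

object l1 = new object(); // lock number 1
object l2 = new object(); // lock number 2

// Both threads follow the ascending order L1 -> L2, so the circular wait
// T1(L1 -> L2) vs. T2(L2 -> L1) can no longer occur.
Thread t1 = new Thread(() => { lock (l1) lock (l2) { /* T1's work */ } });
Thread t2 = new Thread(() => { lock (l1) lock (l2) { /* T2's work */ } });
t1.Start(); t2.Start();
t1.Join(); t2.Join();
Console.WriteLine("both threads finished without deadlock");
```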

But the problem is that this solution contradicts our main goal of creating a modular system. In a modular system, we don't want callers to know how the callee achieves its tasks; in particular, we don't want to know which locks are requested by our callees.

 

On 14 Aug 2011, Albert suggested describing how the .NET runtime handles memory/thread cache synchronization.

Threading in MediaPortal 2

MP2 Threading Policy

There are some general guidelines to follow in MP2:

  • Locks are only held inside a single component. Don't call foreign code (for example, other system services) or raise events to the outside while holding a multithreading lock which might block other threads calling the current component. It can be OK to do so if only locks are held which can never block the outside world.
  • Exception 1: ServiceRegistration. It is safe to call all public methods on class ServiceRegistration
  • Exception 2: Messages. It is safe to send messages while holding arbitrary locks. It is safe to call all public methods on the IMessageBroker service.
  • Exception 3: Services, which are marked to be callable while holding locks.
  • When receiving messages, locks may only be requested in the asynchronous message handlers (not in synchronous message handlers).
  • For truly synchronous calls from one component to another, synchronous message queue receive events can be used. But here, you need to make sure that your called code doesn't lock out any other threads while being executed in the synchronous message thread, because the sender might hold locks. It's the same situation as described under the first guideline.
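The first guideline - never raise events to the outside while holding a lock - can be followed by taking a snapshot of the state and the handler inside the lock and invoking the handler only after the lock has been released (class and member names below are invented for illustration):

```csharp
using System;

public class PlaylistComponent
{
    protected object _syncObj = new object();
    protected int _currentIndex = 0;

    public event Action<int> CurrentIndexChanged;

    public void Next()
    {
        Action<int> handler;
        int newIndex;
        lock (_syncObj)
        {
            _currentIndex++;
            newIndex = _currentIndex;      // snapshot the state...
            handler = CurrentIndexChanged; // ...and the handler list
        }
        // The lock is released here, so foreign code in the handlers cannot
        // block other threads that are waiting to enter this component.
        if (handler != null)
            handler(newIndex);
    }
}
```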

Coding suggestions to meet the threading policy

To make sure that the first requirement is always met, it can make sense to use a combined naming/locking pattern in methods:

  • Don't include code that calls methods from another module inside the execution block of lock (MyLockObject) { ... } statements (as stated in the first guideline above). Other code next to the calling code is allowed to use the lock statement, of course.
  • Append the suffix _NoLock to the names of all methods which contain such calls to external code.
  • Also append the suffix _NoLock to the names of all methods which call other methods with the suffix _NoLock.
  • Never include calls of _NoLock methods inside a lock statement execution block.
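Put together, the naming pattern could look like this (ImporterComponent and its members are invented for illustration):

```csharp
using System;

public class ImporterComponent
{
    protected object _syncObj = new object();
    protected int _importCount = 0;

    // Touches only our own state - safe to call with or without _syncObj held.
    protected void IncrementCount()
    {
        lock (_syncObj)
            _importCount++;
    }

    // Calls external code, so it carries the _NoLock suffix and must never be
    // invoked from inside a lock block.
    protected void NotifyListeners_NoLock()
    {
        // e.g. a call into another system service would go here
    }

    // Calls a _NoLock method, so it is a _NoLock method itself.
    public void FinishImport_NoLock()
    {
        IncrementCount();         // OK: the lock is taken and released inside
        NotifyListeners_NoLock(); // OK: no lock is held at this point
    }

    public int ImportCount
    {
        get { lock (_syncObj) return _importCount; }
    }
}
```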

Asynchronous execution contexts in MP2

In other applications, "multithreading" is often motivated by the need to have multiple threads for time-consuming calculation tasks. But in an application like MP2, we typically don't have many calculation tasks. We need multithreading to have multiple independent execution contexts: to request data from the internet, to update data structures in the background, to render the UI, to process user input, etc.

So typically, we have multiple different ways to introduce a new execution context:

  • Create a new thread. That's the right way if you have a really heavy-weight, active execution to do, like calculation tasks or progressive rendering of something. A dedicated thread is used for the SkinEngine's rendering, for example.
  • Use a thread from the MP2 thread pool. That's the right way if you have a more light-weight task which only takes some time, like calling a method asynchronously.
  • Set up a task in the task scheduler. That's the right way if you have a persistent task which is scheduled for a defined time, with or without repetition. A task could be used to trigger a recording at a specified time, for example.
  • Use an asynchronous message queue. That's the right way if you want to handle system messages. Basically, system messages could also be handled synchronously, but according to the MP2 threading policy, asynchronous message handlers are preferred.
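As an illustration of the first option, a dedicated worker thread might be set up like this (RenderHost and its loop body are invented; the real SkinEngine code differs):

```csharp
using System;
using System.Threading;

// Invented sketch of a heavy-weight job running on its own dedicated thread.
public class RenderHost
{
    private volatile bool _running = true;
    private readonly Thread _thread;

    public RenderHost()
    {
        _thread = new Thread(RenderLoop) { Name = "Renderer", IsBackground = true };
    }

    private void RenderLoop()
    {
        while (_running)
        {
            // ... render one frame ...
            Thread.Sleep(10); // purely illustrative pacing
        }
    }

    public void Start() { _thread.Start(); }

    public void Stop()
    {
        _running = false; // ask the loop to finish...
        _thread.Join();   // ...and wait until it has
    }
}
```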

Using asynchronous message queues

See the description about the Messaging concept and the Message Broker service in MP2 to learn how asynchronous message queues are used in MP2.

Asynchronous method calls

For just calling methods asynchronously, use the thread pool:

ServiceRegistration.Get<IThreadPool>().Add(MethodToBeCalled);

Handshake pattern between two or more threads accessing resources

There is often the need to synchronize access to some shared resources where one of the threads is a worker thread using the resources while other threads call resource management methods (maybe to do some disposal or to exchange the resources). During the management operations, the worker thread must not access the resource collection, of course. I'll show a common pattern to solve that problem.

Let's say we have, for example, these two methods:

/// <summary>
/// Accesses some shared resources, either periodically or in a neverending loop.
/// This method is executed by a special worker thread.
/// </summary>
public void DoWorkWithResources();

/// <summary>
/// Disposes the shared resources or exchanges them. During the work of this method, the worker thread
/// should be prevented from using the resources.
/// This method is executed by an arbitrary thread, maybe the main/input thread.
/// </summary>
public void DisposeOrExchangeWorkResources();

We could solve that by using some state variables and a mutex (lock) to synchronize multithreaded access. But using locks here is often problematic because it can violate the multithreading guidelines formulated above.

A pattern that achieves this without violating the guidelines is to use synchronization event variables.

I'll first describe the basic solution, then show a real-world example. First, we use two variables of type ManualResetEvent. One of them controls access to the shared resources; let's call it _resourcePresent:

// If this event is set, the shared resource is present
protected ManualResetEvent _resourcePresent = new ManualResetEvent(true);

That variable will be checked in the worker method and its state will be set in the resource management method.

Another variable is used to wait for the worker thread to finish one execution cycle:

// If this event is set, the worker has finished one execution cycle
protected ManualResetEvent _workerFinished = new ManualResetEvent(true);

The method implementations could basically look like this:

protected ManualResetEvent _resourcePresent = new ManualResetEvent(true);
protected ManualResetEvent _workerFinished = new ManualResetEvent(true);
protected volatile bool _terminated = false; // Set to true to stop the worker loop

/// <summary>
/// Accesses some shared resources, either periodically or in a neverending loop.
/// This method is executed by a special worker thread.
/// </summary>
public void DoWorkWithResources()
{
  while (!_terminated)
  {
    _workerFinished.Reset();
    try
    {
      _resourcePresent.WaitOne();
      // ... access the shared resources ...
    }
    finally
    {
      _workerFinished.Set();
    }
  }
}

/// <summary>
/// Disposes the shared resources or exchanges them. During the work of this method, the worker thread
/// should be prevented from using the resources.
/// This method is executed by an arbitrary thread, maybe the main/input thread.
/// </summary>
public void DisposeOrExchangeWorkResources()
{
  _resourcePresent.Reset();
  try
  {
    _workerFinished.WaitOne();
    // ... exchange resources ...
  }
  finally
  {
    // If we just exchanged the resources, we can set the _resourcePresent flag again.
    // Of course, if we disposed the resources, that should not happen.
    _resourcePresent.Set();
  }
}

That shows the main idea of the pattern quite well.

But in real-world code, a problem remains: the check for termination. In the code above, if DisposeOrExchangeWorkResources() is called before the _terminated variable is set to true, the worker won't stop waiting for the resource event.

To avoid blocking the worker thread after the resources were disposed, we can also use an event to signal the termination of the object:

protected ManualResetEvent _resourcePresent = new ManualResetEvent(true);
protected ManualResetEvent _workerFinished = new ManualResetEvent(true);
protected ManualResetEvent _terminatedEvent = new ManualResetEvent(false);

/// <summary>
/// Accesses some shared resources, either periodically or in a neverending loop.
/// This method is executed by a special worker thread.
/// </summary>
public void DoWorkWithResources()
{
  while (!_terminatedEvent.WaitOne(0))
  {
    _workerFinished.Reset();
    try
    {
      WaitHandle.WaitAny(new WaitHandle[] {_terminatedEvent, _resourcePresent});
      if (_terminatedEvent.WaitOne(0))
        // System terminated, need to exit this work cycle/this method
        return;
      // ... access the shared resources ...
    }
    finally
    {
      _workerFinished.Set();
    }
  }
}

/// <summary>
/// Exchanges the shared resources. During the work of this method, the worker thread
/// should be prevented from using the resources.
/// This method is executed by an arbitrary thread, maybe the main/input thread.
/// </summary>
public void ExchangeWorkResources()
{
  _resourcePresent.Reset();
  try
  {
    _workerFinished.WaitOne();
    // ... exchange resources ...
  }
  finally
  {
    _resourcePresent.Set();
  }
}

/// <summary>
/// Disposes the shared resources and ends the worker thread.
/// This method is executed by an arbitrary thread, maybe the main/input thread.
/// </summary>
public void Dispose()
{
  _terminatedEvent.Set();
  _workerFinished.WaitOne();
}

   

 
