Achieving Named Lock / Locker functionality in C# 4.0

I was recently writing some code to implement a file system cache for streams and came across an interesting dilemma: how do I lock around a file that is being created by ThreadA until it is ready to be accessed by ThreadB? This is not a classic producer/consumer problem as you might be thinking: Any number of threads within a single app domain could be producing or consuming different parts of the cache (files) simultaneously.

The .NET framework doesn’t appear to provide a way to block multiple threads from accessing the same file, short of locking the file itself and dealing with the exceptions that may occur. I prefer to avoid causing unnecessary exceptions, and perhaps have a little fun at the same time, so I looked for another solution and actually came up with many…
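
For reference, the “lock the file itself and deal with exceptions” approach might look roughly like the sketch below. This is illustrative only, not code from the cache; the OpenExclusiveWithRetry helper is hypothetical.

Stream OpenExclusiveWithRetry(string path)
{
	while (true)
	{
		try
		{
			//FileShare.None means any concurrent open attempt on this file will fail
			return new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.None);
		}
		catch (IOException)
		{
			//another thread or process currently has the file open, so back off and retry
			Thread.Sleep(50);
		}
	}
}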

Full blown lock on the entire resource

Everyone has probably seen the code below more times than they’d like: lock on an object representing a resource so that only one thread can use it at a time. The advantage of this code is that it is very straightforward and easy to understand. The problem is that the lock is on the entire resource. This means that anyone wanting to interact with the cache (in my example, any file in the cache) is blocked until the entire resource frees up.

  • Pros
    • Easy to understand
  • Cons
    • Slow: locking the entire resource causes a lot of unnecessary thread blocking

Usage

static object _resourceLocker = new object();
void Write(string key, Stream s)
{
	lock (_resourceLocker)
	{
		//write file
	}
}

Stream Read(string key)
{
	lock (_resourceLocker)
	{
		//read file
	}
}

Named Locking?

From the above example and the performance considerations it’s quite apparent that there must be a better solution. In the file system cache example we don’t care if another file in the cache is being accessed; there is no problem with multiple threads working in the cache at the same time, only with multiple threads working on the same file. What we need is the ability to lock a single file within the larger cache resource.

Named Mutex Lock

A mutex is similar to the above C# lock with two important distinctions for this use case:

  • A mutex can be named with a string. This means that for this example I could actually lock on an individual file, rather than the resource as a whole, by keying on the file path.
  • A named mutex is visible to the entire operating system, so it is suitable for both intra- and inter-process synchronization. If I were running multiple processes or app domains against this cache, this would be a very viable solution as long as each one referenced the same key string in the same way.

The major problem with a mutex, however, is its speed: since it is visible to the entire operating system it is much slower than something that can exist entirely in .NET. In fact the named Mutex was the slowest of all the solutions to this problem, clocking in at 5.9 times slower than the fastest. Let’s also admit it looks pretty ugly too!

  • Pros
    • Suitable for interprocess communication (ex: across app domains)
    • Provides named locker functionality
  • Cons
    • Slow: an OS-level synchronization object is much slower than staying inside the app domain
    • Overly verbose and possibly confusing code

Usage

void Write(string key, Stream s)
{
	//note: backslashes are reserved in mutex names, so a raw file path may need to be sanitized or hashed first
	using (var mutex = new Mutex(false, key))
	{
		mutex.WaitOne();
		try
		{
			//write file
		}
		finally
		{
			mutex.ReleaseMutex();
		}
	}
}

Stream Read(string key)
{
	using (var mutex = new Mutex(false, key))
	{
		mutex.WaitOne();
		try
		{
			//read file
		}
		finally
		{
			mutex.ReleaseMutex();
		}
	}
}

Locking on an Interned String

String interning is essentially a way for the .NET CLR to save memory on strings. As the application’s code is loaded and JIT-compiled, the CLR stores each distinct string literal in an internal hash table known as the intern pool. At run time these ‘different’ strings actually point to the same entry in that table, so they are not just equal but reference-equal: they point to the exact same object in memory.

What makes string interning interesting for the purposes of named locking is that any developer can add to this pool using the String.Intern() method, which takes a string and returns the pooled reference for that value. This means that any other time the exact same string value is passed to String.Intern(), the same reference in memory is returned, which makes it a suitable object to lock on.
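
As a quick illustration (a minimal example, not from the original post), String.Intern always hands back the single pooled instance for a given value:

string a = new string(new[] { 'c', 'a', 't' });   //a freshly allocated string, not the pooled literal
string b = "cat";                                  //string literals are interned automatically

Console.WriteLine(ReferenceEquals(a, b));                                //False: different objects
Console.WriteLine(ReferenceEquals(string.Intern(a), b));                 //True: Intern returns the pooled "cat"
Console.WriteLine(ReferenceEquals(string.Intern(a), string.Intern(b)));  //True: always the same reference, so it can be locked on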

Before you get too excited, there are a few possible issues with this approach:

If any other code decides to lock on the same string you will likely cause unexpected contention, or even a deadlock, at some point in your application, because a lock you don’t know about could be held indefinitely, blocking one or more of your threads from running.
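
To make that first issue concrete, here is a hypothetical example: string literals are interned automatically, so completely unrelated code that locks on an equal string literal ends up locking on the very same object the cache is using.

//in the cache:
lock (string.Intern(@"c:\cache\user.dat"))
{
	//write file
}

//in unrelated code, possibly a third-party library:
lock (@"c:\cache\user.dat")  //the literal is the same interned instance, so this contends with the cache
{
	//long-running work here stalls the cache; a deadlock is possible if this code also waits on the cache
}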

The other issue is that the underlying implementation of String.Intern() is very much outside of your control, and it isn’t really designed for the purposes of locking. Who’s to say that at some future date the CLR won’t be changed to clean up strings from the intern pool to save memory? In fact I couldn’t find much documentation guaranteeing that a string can never leave the intern pool. If that were to happen, it’s possible that a lock could be violated, compromising the thread safety of our application.

  • Pros
    • Provides named locker functionality and is very fast
    • Simple code and relatively easy to understand
  • Cons
    • Can cause deadlocks if any other code happens to lock on the same string instance
    • Relies on an implementation that is outside of developer control (the CLR) and whose behavior could change at any time

Usage

void Write(string key, Stream s)
{
	lock (string.Intern(key))
	{
		//write file
	}
}

Stream Read(string key)
{
	lock (string.Intern(key))
	{
		//read file
	}
}

The NamedLocker class

What if we could have something similar to String.Intern() in performance and readability, but with none of the cons? This is what led me to the NamedLocker class, and it’s only about 30 lines of code. As you can see below, the core of the NamedLocker class relies on a ConcurrentDictionary (.NET 4.0+ is required). While it’s certainly feasible to roll your own thread-safe dictionary, Microsoft has kindly provided us with one that is better tested (I hope) and better performing than anything I could readily muster.

With the NamedLocker class you have explicit control over the scope of the internal locks, and the implementation (the parts that matter) is within your control, unlike with String.Intern(). I have also added some code to make quick locks a breeze, using lambda expressions to specify the scope of the locks.

The only issue with the NamedLocker class is that reading and writing are dealt with in the same way. In the file system cache example I have no problem with multiple readers; I only want to block readers while there is a writer, and prevent multiple concurrent writers. Not having a distinction between readers and writers should be expected to cost a little performance in this situation.

  • Pros
    • Provides named locker functionality and is very fast
    • Simple code and easy to understand
  • Cons
    • Handles readers and writers under the same lock which is a little less than ideal for performance

Usage

static readonly NamedLocker _namedlocker = new NamedLocker();
void Write(string key, Stream s)
{
	lock (_namedlocker.GetLock(key))
	{
		//write file
	}

	//OR

	_namedlocker.RunWithLock(key, () => /*write file*/);
}

Stream Read(string key)
{
	lock (_namedlocker.GetLock(key))
	{
		//read file
	}

	//OR

	return _namedlocker.RunWithLock(key, () => /*read file*/);
}

NamedLocker Implementation

public class NamedLocker
{
	private readonly ConcurrentDictionary<string, object> _lockDict = new ConcurrentDictionary<string, object>();

	//get a lock for use with a lock(){} block
	public object GetLock(string name)
	{
		return _lockDict.GetOrAdd(name, s => new object());
	}

	//run a short lock inline using a lambda
	public TResult RunWithLock<TResult>(string name, Func<TResult> body)
	{
		lock (_lockDict.GetOrAdd(name, s => new object()))
			return body();
	}

	//run a short lock inline using a lambda
	public void RunWithLock(string name, Action body)
	{
		lock (_lockDict.GetOrAdd(name, s => new object()))
			body();
	}

	//remove an old lock object that is no longer needed
	public void RemoveLock(string name)
	{
		object o;
		_lockDict.TryRemove(name, out o);
	}
}

The NamedReaderWriterLocker class

Well, here you have it: the best performing class I could muster for a situation where readers and writers should be treated differently. It does everything that NamedLocker does but provides separate channels of access for readers and writers, allowing for slightly better performance. The downsides are that it can be uglier to use and has a bigger memory footprint than any of the other methods. I have also provided some lambda access patterns that abstract away most of the ugliness of using a ReaderWriterLockSlim. Worth it?

  • Pros
    • Provides named locker functionality and is the fastest solution
    • Provides different channels of access for readers and writers reducing contention
  • Cons
    • The code required to use a ReaderWriterLockSlim directly is uglier and more error-prone than most of the other options

Usage

static readonly NamedReaderWriterLocker _namedRwlocker = new NamedReaderWriterLocker();
void Write(string key, Stream s)
{
	var rwLock = _namedRwlocker.GetLock(key);
	rwLock.EnterWriteLock();
	try
	{
		//write file
	}
	finally
	{
		rwLock.ExitWriteLock();
	}

	//OR

	_namedRwlocker.RunWithWriteLock(key, () => /*write file*/);
}

Stream Read(string key)
{
	var rwLock = _namedRwlocker.GetLock(key);
	rwLock.EnterReadLock();
	try
	{
		//read file
	}
	finally
	{
		rwLock.ExitReadLock();
	}

	//OR

	return _namedRwlocker.RunWithReadLock(key, () => /*read file*/);
}

NamedReaderWriterLocker Implementation

public class NamedReaderWriterLocker
{
	private readonly ConcurrentDictionary<string, ReaderWriterLockSlim> _lockDict = new ConcurrentDictionary<string, ReaderWriterLockSlim>();

	public ReaderWriterLockSlim GetLock(string name)
	{
		return _lockDict.GetOrAdd(name, s => new ReaderWriterLockSlim());
	}

	//run a short read-locked operation inline using a lambda
	public TResult RunWithReadLock<TResult>(string name, Func<TResult> body)
	{
		var rwLock = GetLock(name);
		rwLock.EnterReadLock();
		try
		{
			return body();
		}
		finally
		{
			rwLock.ExitReadLock();
		}
	}

	public void RunWithReadLock(string name, Action body)
	{
		var rwLock = GetLock(name);
		rwLock.EnterReadLock();
		try
		{
			body();
		}
		finally
		{
			rwLock.ExitReadLock();
		}
	}

	//run a short write-locked operation inline using a lambda
	public TResult RunWithWriteLock<TResult>(string name, Func<TResult> body)
	{
		var rwLock = GetLock(name);
		rwLock.EnterWriteLock();
		try
		{
			return body();
		}
		finally
		{
			rwLock.ExitWriteLock();
		}
	}

	public void RunWithWriteLock(string name, Action body)
	{
		var rwLock = GetLock(name);
		rwLock.EnterWriteLock();
		try
		{
			body();
		}
		finally
		{
			rwLock.ExitWriteLock();
		}
	}

	public void RemoveLock(string name)
	{
		ReaderWriterLockSlim o;
		_lockDict.TryRemove(name, out o);
	}
}

Results and Demo Project

Feel free to download the demo project used to generate these results. Each result was calculated from an average of 5 runs of each lock method from a cold start, using the constraints in the test project. Hopefully this helped; let us all know in the comments below if you have found a better way!
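
If you just want a rough comparison without the demo project, a minimal harness along these lines is enough to see the relative differences. This is an illustrative sketch only; the Benchmark helper is hypothetical and not the actual test code:

//hammer a single locking strategy from several threads across a set of keys and time it
static TimeSpan Benchmark(Action<string> lockedOperation, int threads = 8, int iterationsPerThread = 100000)
{
	var keys = Enumerable.Range(0, 50).Select(i => "file" + i).ToArray();
	var sw = Stopwatch.StartNew();
	Parallel.For(0, threads, t =>
	{
		//one Random per thread: creating a new Random per call is itself a bottleneck, as a commenter notes below
		var rand = new Random(t);
		for (int i = 0; i < iterationsPerThread; i++)
			lockedOperation(keys[rand.Next(keys.Length)]);
	});
	sw.Stop();
	return sw.Elapsed;
}

//example: Benchmark(key => _namedlocker.RunWithLock(key, () => { /*simulated work*/ }));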

21 thoughts on “Achieving Named Lock / Locker functionality in C# 4.0”

  1. I tried writing an INamedLocker that uses ConcurrentDictionary but that does not have memory pressure problems from the dictionary growing forever. Thought I was there…

    … but ran into some trouble with AddOrUpdate not being atomic. Here is the code (+ a description of the problem):
    http://stackoverflow.com/questions/17287235/atomic-addorupdate-trying-to-write-named-locker-using-concurrent-dictionary

    Anyways, take a look John if that is interesting to you. Let me know if you have some good ideas 🙂

  2. Hey,
    I want to warn users of this code that the last two lockers are not thread safe and may lead to incorrect behavior if you use the RemoveLock methods. That’s because lock (_lockDict.GetOrAdd(name, s => new object())) will be translated to something similar to:

    1: try
    {
    2:     var obj = _lockDict.GetOrAdd(name, s => new object());
    3:     Monitor.Enter(obj);
           .....
    }
    finally
    {
        Monitor.Exit(obj);
    }

    And there is no critical section between lines 2 and 3: between those two calls some other thread could easily call RemoveLock and remove the lock object, and all subsequent calls with the same lock name would then use a different lock object.

    A similar issue applies to the last one.

  3. Hey John,

    Thanks for the NamedReaderWriterLocker, it is working great in my project.

    What would be great is to create an attribute for RunWithWriteLock and RunWithReadLock, so we could use them as follows:

    [RunWithWriteLock("key")]
    void Write(string key, Stream s)
    {
        //any code here
    }

    [RunWithReadLock("key")]
    Stream Read(string key)
    {
    }

  4. Hi John,

    Thanks for the samples.

    However I ran into one issue with NamedLocker. You’re using a delegate to create the new object (s => new object()), which causes the ConcurrentDictionary not to be thread safe anymore (link). But if you replace the ‘s => new object()’ with ‘new object()’ it works as expected.

  5. It’s unbelievable, but most of the time is spent in the GetInteractionType() method due to new Random instance creation. If you place it in a static field, the whole process speeds up.
    Will test further!

  6. There is a slight issue with the last one: ReaderWriterLockSlim is IDisposable. Thus you should make NamedReaderWriterLocker implement IDisposable and dispose of all the ReaderWriterLockSlim objects.

  7. This code is dangerous and could lead to multi-threading issues that are very hard to debug. Consider the scenario where 2 threads are calling NamedReaderWriterLocker.GetLock at the same time and with the same name: the valueFactory s => new ReaderWriterLockSlim() will be executed twice, and those 2 threads eventually get a different instance of `ReaderWriterLockSlim`, allowing both threads to execute the same code block.

    1. The scenario you described is not a problem. ConcurrentDictionary.GetOrAdd(key, valueFactory) may indeed invoke the value factory on every calling thread, but all attempts to set a key’s value are protected by a double-checked lock. Multiple threads may approach the critical section, each with its own value for the key, but the first thread to enter wins the race and all other threads return that value. See https://referencesource.microsoft.com/ for the actual source code of ConcurrentDictionary.

  8. One that you’re missing is an approach that uses ConditionalWeakTable, which Microsoft ensures is safe to use.

    public static class KeyedLocker
    {
        public static object GetLockable<T>(T key) where T : class => LockerHelper<T>.Table.GetOrCreateValue(key);

        private static class LockerHelper<T> where T : class
        {
            public static readonly ConditionalWeakTable<T, object> Table = new ConditionalWeakTable<T, object>();
        }
    }

    With this you can use any kind of key.

    lock (KeyedLocker.GetLockable(MyKey))
    {
        // do your stuff
    }

  9. (My code should be as follows; forgot to wrap it in code tags ;-))

    public static class LockUtils
    {
        public static object GetLockable<T>(T key) where T : class => LockHelper<T>.Table.GetOrCreateValue(key);

        private static class LockHelper<T> where T : class
        {
            public static readonly ConditionalWeakTable<T, object> Table = new ConditionalWeakTable<T, object>();
        }
    }

  10. Why can’t I add generics?? The ConditionalWeakTable should be ConditionalWeakTable<T, object> (but with the correct angle brackets).

  11. Hi,

    Nice article. I found the NamedLocker useful; the action/function delegates take some of the drudgery out of writing lock code.

    One issue with it is that there’s no easy/safe way to clean up the dictionary, which was important in my scenario.

    Below is an implementation that performs the dictionary clean-up. I have tested it quite a bit and it seems to work well.

    public class NamedLocker
    {
        private class LockObject
        {
            internal long LockCount;

            internal LockObject LockCountAdd(long a)
            {
                Interlocked.Add(ref LockCount, a);
                return this;
            }
        }

        private readonly ConcurrentDictionary<string, LockObject> lockers = new ConcurrentDictionary<string, LockObject>();

        public void RunWithLock(string key, Action action)
        {
            LockObject obj = GetOrAddLock(key);
            lock (obj)
            {
                try { action(); }
                finally { DecrementOrRemoveLock(key, obj); }
            }
        }

        public TResult RunWithLock<TResult>(string key, Func<TResult> func)
        {
            LockObject obj = GetOrAddLock(key);
            lock (obj)
            {
                try { return func(); }
                finally { DecrementOrRemoveLock(key, obj); }
            }
        }

        private LockObject GetOrAddLock(string key)
        {
            return lockers.GetOrAdd(key, new LockObject()).LockCountAdd(1);
        }

        private bool DecrementOrRemoveLock(string key, LockObject obj)
        {
            return obj.LockCountAdd(-1).LockCount == 0 && lockers.TryRemove(key, out _);
        }

        internal int KeyCount
        {
            get { return lockers.Keys.Count; }
        }
    }

    1. Forgot to add code blocks, hopefully this is readable..


      public class NamedLocker
      {
          private class LockObject
          {
              internal long LockCount;

              internal LockObject LockCountAdd(long i)
              {
                  Interlocked.Add(ref LockCount, i);
                  return this;
              }
          }

          private readonly ConcurrentDictionary<string, LockObject> lockers = new ConcurrentDictionary<string, LockObject>();

          public void RunWithLock(string key, Action action)
          {
              LockObject obj = GetOrAddLock(key);
              lock (obj)
              {
                  try { action(); }
                  finally { DecrementOrRemoveLock(key, obj); }
              }
          }

          public TResult RunWithLock<TResult>(string key, Func<TResult> func)
          {
              LockObject obj = GetOrAddLock(key);
              lock (obj)
              {
                  try { return func(); }
                  finally { DecrementOrRemoveLock(key, obj); }
              }
          }

          private LockObject GetOrAddLock(string key)
          {
              return lockers.GetOrAdd(key, new LockObject()).LockCountAdd(1);
          }

          private bool DecrementOrRemoveLock(string key, LockObject obj)
          {
              return obj.LockCountAdd(-1).LockCount == 0 && lockers.TryRemove(key, out _);
          }

          internal int KeyCount
          {
              get { return lockers.Keys.Count; }
          }
      }

      1. Low-lock code is not that easy. There are two bugs that I can see. The first is in your DecrementOrRemoveLock:

        1) Thread A calls DecrementOrRemoveLock, sees LockCount == 0, decides to TryRemove.
        2) Thread B calls RunWithLock, enters lock block
        3) Thread A continues execution, removes the lock object.
        4) Thread C calls RunWithLock, GetOrAdd gives it a new LockObject.
        5) Thread C enters the lock block and both thread B and C are now executing code simultaneously!

        A simpler bug is in GetOrAddLock itself.

        1) Thread A calls GetOrAddLock, gets a new LockObject, but doesn’t increment the counter yet.
        2) Thread B calls GetOrAddLock, gets that same LockObject whose LockCount is 0. It increments count, enters lock, executes the function, decrements count back down to zero and removes from dictionary.
        3) Thread C calls GetOrAddLock, gets a new LockObject, starts executing function.
        4) Thread A wakes up, increments the counter on the old LockCount object, starts executing function. Both A and C are now executing!

      2. Here’s my attempt. But like I said, low-lock programming is tricky so this might not be correct.

        Usage would be:


        using (keyedLock.Enter(key))
        {
            // some code
        }

        In any case, I doubt this will be useful. Removing the unneeded entries will make sure there aren’t many of them in the dictionary (this is the whole point). However, ConcurrentDictionary locks when writing. This locking is usually only partial, but for such a low number of entries it will probably just lock the whole dictionary for every Remove and AddOrUpdate call.

        Taking all of this into account, I’d rather just use a regular Dictionary and use one central lock for managing entries. This is a simple operation and Dictionary is fast so this lock should introduce less overhead than a small ConcurrentDictionary which gets modified all the time.


        public sealed class KeyedLock<TKey> : IKeyedLock<TKey>
        {
            public IDisposable Enter(TKey key)
            {
                var entry = GetEntry(key);
                return new Unlocker(cache, new KeyValuePair<TKey, Entry>(key, entry));
            }

            private Entry GetEntry(TKey key)
                => cache.AddOrUpdate(
                    key,
                    k => new Entry(0, new object()),
                    (k, w) => new Entry(w.Id + 1, w.Locker));

            private readonly ConcurrentDictionary<TKey, Entry> cache = new ConcurrentDictionary<TKey, Entry>();

            private struct Entry : IEquatable<Entry>
            {
                public Entry(int id, object locker)
                {
                    Id = id;
                    Locker = locker;
                }

                public readonly int Id;
                public readonly object Locker;

                public bool Equals(Entry other) => Id == other.Id && Locker == other.Locker;
                public override int GetHashCode() => Locker.GetHashCode() ^ Id.GetHashCode();

                public override bool Equals(object obj) => obj is Entry other && Equals(other);
                public static bool operator ==(Entry left, Entry right) => left.Equals(right);
                public static bool operator !=(Entry left, Entry right) => !(left == right);
            }

            private sealed class Unlocker : IDisposable
            {
                public Unlocker(ConcurrentDictionary<TKey, Entry> cache, KeyValuePair<TKey, Entry> keyValuePair)
                {
                    this.cache = cache;
                    this.keyValuePair = keyValuePair;

                    Monitor.Enter(keyValuePair.Value.Locker);
                }

                private ConcurrentDictionary<TKey, Entry> cache;
                private readonly KeyValuePair<TKey, Entry> keyValuePair;

                public void Dispose()
                {
                    if (cache == null)
                        return;

                    Monitor.Exit(keyValuePair.Value.Locker);

                    // ConcurrentDictionary's implementation of ICollection.Remove removes conditionally,
                    // only if both key and value are as expected.
                    // From: https://devblogs.microsoft.com/pfxteam/little-known-gems-atomic-conditional-removals-from-concurrentdictionary/
                    // It is guaranteed that all dictionary operations are atomic so it must work.
                    ICollection<KeyValuePair<TKey, Entry>> asCollection = cache;
                    asCollection.Remove(keyValuePair);

                    cache = null;
                }
            }
        }

  12. Hello, I was trying to use the NamedReaderWriterLocker class with Entity Framework Core. I am essentially creating a named lock and trying to await some database updates. It seems my continuation resumes on a different thread after the await, so the lock is no longer valid. Is there a way to do something like NamedReaderWriterLocker but in a way that works with async/await code?
