The Singleton design pattern is one of the best known patterns from the famous Gang of Four book, but it's widely overused and often poorly implemented. Let's discuss why that is and how the singleton pattern can be properly implemented (hint: it's easier than you think). In a follow-up post, we'll discuss why you almost never want to do this.
The starting point for many when implementing the Singleton pattern is to hide the instance creation behind an accessor property or function like this:
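A minimal sketch of that approach, using a hypothetical DatabaseConnection class standing in for whatever singleton you have in mind:

```csharp
using System;

// Hypothetical singleton class; the name DatabaseConnection is illustrative only.
public sealed class DatabaseConnection
{
    private static DatabaseConnection _instance;

    // Private constructor: the only place construction can occur is below.
    private DatabaseConnection() { }

    public static DatabaseConnection Instance
    {
        get
        {
            // Create the instance on first access, then reuse it.
            if (_instance == null)
            {
                _instance = new DatabaseConnection();
            }

            return _instance;
        }
    }
}
```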
Often this will be coupled with making the constructor of the target class private, ensuring that the only place construction occurs is within the Instance property (or a similar function).
This works well enough in a single-threaded application (though it has a number of limitations - keep reading to find out more). But none of us are writing code for single-threaded computers anymore - even our mobile phones have multiple processors - and it's naive to assume that any piece of code will remain single-threaded for very long. (In some cases, the promotion to multi-threading happens through a configuration change to the host process, making it completely invisible to your code.)
The most natural way to make this thread safe is to add a lock around access:
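Sketched against the same hypothetical DatabaseConnection class, with the lock taken on every access:

```csharp
using System;

// Hypothetical singleton class; the name DatabaseConnection is illustrative only.
public sealed class DatabaseConnection
{
    private static DatabaseConnection _instance;

    // A dedicated, private object used solely for locking.
    private static readonly object _padlock = new object();

    private DatabaseConnection() { }

    public static DatabaseConnection Instance
    {
        get
        {
            // Every access takes the lock, even long after the instance exists.
            lock (_padlock)
            {
                if (_instance == null)
                {
                    _instance = new DatabaseConnection();
                }

                return _instance;
            }
        }
    }
}
```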
Note the use of a separate object (_padlock) to control the locks - if you lock on an object that is visible outside the class (as would happen if you used this for your locks), it becomes spectacularly easy to cause deadlocks, not something you want to happen in production.
Unfortunately, this is where the bad advice often starts - with complaints that the lock is expensive and needs to be avoided at all costs.
Yes, locks are relatively expensive, so you certainly don’t want to have them in the middle of a tight loop. But they’re not so expensive that you need to worry about their overhead very much - your Operating System (whether Windows, Linux, Android, iOS or MacOS) does this kind of thing very efficiently precisely because locks must work effectively in performance sensitive areas.
It’s pretty common to attempt to address the performance concerns by wrapping the lock with another condition. The idea is to try and avoid the cost of the lock if you’ve already created the instance.
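The double-checked version looks something like this (again a sketch using the hypothetical DatabaseConnection class):

```csharp
using System;

// Hypothetical singleton class; the name DatabaseConnection is illustrative only.
public sealed class DatabaseConnection
{
    private static DatabaseConnection _instance;
    private static readonly object _padlock = new object();

    private DatabaseConnection() { }

    public static DatabaseConnection Instance
    {
        get
        {
            // First check: skip the lock entirely once the instance exists ...
            if (_instance == null)
            {
                lock (_padlock)
                {
                    // ... second check: another thread may have created the
                    // instance while we were waiting for the lock.
                    if (_instance == null)
                    {
                        _instance = new DatabaseConnection();
                    }
                }
            }

            return _instance;
        }
    }
}
```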
This looks safe - and is a really common approach. The problem is that this only mostly works. It can - and probably will - lead to your application blowing up on rare occasions that are very hard to reproduce.
Why? Because multi-threading means that different execution paths can be interrupted at any time - including in the middle of operations that you might otherwise assume are atomic.
On some processors, the assignment to _instance might be split into two parts - one writing the top 32 bits of the object reference (assuming a 64-bit environment), the other writing the bottom 32 bits. These might occur in either order. (For the nitpickers amongst you: yes, the .NET Framework guarantees that writes of object references are atomic - but it's my understanding that the looser specifications for other .NET platforms, such as .NET Core and Xamarin, permit these writes to be non-atomic when running on lower-end processors such as the Atom and ARM ranges.)
All this means the thread creating our connection might be interrupted after writing half of the object reference into _instance. Along comes another thread trying to access the singleton, blithely reading a non-null reference that points somewhere random in memory. Shortly thereafter, your application goes BANG in a nasty way - and you're very unlikely to be able to reproduce that on demand, precisely because the error requires a task switch to occur exactly between those two halves of the assignment.
Thus the misplaced fear of the cost of using lock leads to applications with a nasty latent bug that is likely to cause pain, especially once your system is working under high load.
Add to this the actions of the optimizers - both the C# compiler and the runtime JIT engine - who can (and will) aggressively reorder your code to achieve better performance, and the opportunities for things to go wrong just escalate. The optimizers are carefully written to preserve the semantics of your code, even while executing things out of order - but since this code isn’t actually thread safe to start with, they’re free to make further changes.
You can mitigate this by using the volatile keyword, which tells the optimizers not to cache the value or reorder accesses to it, forcing a genuine read from memory each time.
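Applied to the double-checked sketch above, the only change is marking the field volatile (the surrounding class is the same hypothetical DatabaseConnection):

```csharp
using System;

// Hypothetical singleton class; the name DatabaseConnection is illustrative only.
public sealed class DatabaseConnection
{
    // volatile prevents the compiler and JIT from caching the reference
    // or reordering the reads and writes around the double check.
    private static volatile DatabaseConnection _instance;
    private static readonly object _padlock = new object();

    private DatabaseConnection() { }

    public static DatabaseConnection Instance
    {
        get
        {
            if (_instance == null)
            {
                lock (_padlock)
                {
                    if (_instance == null)
                    {
                        _instance = new DatabaseConnection();
                    }
                }
            }

            return _instance;
        }
    }
}
```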
There's a better way, one that avoids all of these problems - the Lazy&lt;T&gt; class. Where available (.NET Framework >= 4.0, .NET Core >= 1.0, .NET Standard >= 1.0, as well as Xamarin), Lazy&lt;T&gt; provides a safe alternative to implementing the pattern ourselves, allowing us to write this code instead:
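A sketch of the Lazy&lt;T&gt; version, again with the hypothetical DatabaseConnection class (the constructor used here defaults to full thread safety):

```csharp
using System;

// Hypothetical singleton class; the name DatabaseConnection is illustrative only.
public sealed class DatabaseConnection
{
    // Lazy<T> handles deferred creation and thread safety for us.
    private static readonly Lazy<DatabaseConnection> _lazy =
        new Lazy<DatabaseConnection>(() => new DatabaseConnection());

    private DatabaseConnection() { }

    public static DatabaseConnection Instance => _lazy.Value;
}
```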
All of the mechanics of doing the lazy instantiation - and of ensuring only a single instance is created - are handled by Lazy&lt;T&gt;. Code that you never need to write or debug yourself is perhaps the best kind.
If you're concerned about the performance of locking (which means, of course, that you've measured the performance of your code with a profiler and found that there's an issue to address), you can pass a different value for the LazyThreadSafetyMode parameter to avoid the lock overhead, at the cost of perhaps creating additional instances that are thrown away without being used.
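For example, a sketch using PublicationOnly mode: racing threads may each run the factory, but only the first result is published and the rest are discarded.

```csharp
using System;
using System.Threading;

// Hypothetical singleton class; the name DatabaseConnection is illustrative only.
public sealed class DatabaseConnection
{
    // PublicationOnly avoids locking: concurrent callers may each invoke
    // the factory, but only one instance is ever published via Value.
    private static readonly Lazy<DatabaseConnection> _lazy =
        new Lazy<DatabaseConnection>(
            () => new DatabaseConnection(),
            LazyThreadSafetyMode.PublicationOnly);

    private DatabaseConnection() { }

    public static DatabaseConnection Instance => _lazy.Value;
}
```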
Next time we’ll discuss why even a correct implementation of the singleton pattern is probably not what you really want.