Thread-safe singleton in Swift
[GCD]
[Swift barrier flag for thread safety]
You can implement Swift's singleton pattern for a concurrent environment using GCD and three main things:
- Custom concurrent queue - a local queue, used for better performance, on which multiple reads can happen at the same time.
- sync - customQueue.sync for reading the shared resource, which keeps the API clear and callback-free.
- barrier flag - customQueue.async(flags: .barrier) for write operations: wait until the currently running operations are done -> execute the write task -> proceed executing tasks.
import Foundation

public class MySingleton {
    public static let shared = MySingleton()

    // 1. Custom concurrent queue
    private let customQueue = DispatchQueue(label: "com.mysingleton.queue", qos: .default, attributes: .concurrent)

    // Shared resource
    private var sharedResource: String = "Hello World"

    // A computed property can replace explicit getter/setter methods
    var computedProperty: String {
        get {
            // 2. Synchronous read
            return customQueue.sync {
                sharedResource
            }
        }
        set {
            // 3. Asynchronous write with a barrier
            customQueue.async(flags: .barrier) {
                self.sharedResource = newValue
            }
        }
    }

    private init() {
    }
}
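A minimal usage sketch (not part of the original snippet, and assuming the MySingleton class above) that exercises the computed property from many threads at once; DispatchQueue.concurrentPerform is just one convenient way to simulate concurrent readers and writers:

import Dispatch

// Simulate many concurrent readers with an occasional writer.
DispatchQueue.concurrentPerform(iterations: 100) { index in
    if index % 10 == 0 {
        // Writes are funneled through the barrier block, so they never overlap with reads.
        MySingleton.shared.computedProperty = "Value \(index)"
    } else {
        // Reads run concurrently with each other on the custom queue.
        _ = MySingleton.shared.computedProperty
    }
}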
Thanks to @rmaddy's comments, which pointed me in the right direction, I was able to solve the problem.
In order to make the property foo of the Singleton thread safe, it needs to be modified as follows:
class Singleton {
    static let shared = Singleton()

    private init() {}

    private let internalQueue = DispatchQueue(label: "com.singletioninternal.queue",
                                              qos: .default,
                                              attributes: .concurrent)

    private var _foo: String = "aaa"

    var foo: String {
        get {
            return internalQueue.sync {
                _foo
            }
        }
        set (newState) {
            internalQueue.async(flags: .barrier) {
                self._foo = newState
            }
        }
    }

    func setup(string: String) {
        foo = string
    }
}
Thread safety is accomplished by having a computed property foo which uses an internalQueue to access the "real" _foo property.
Also, to get better read performance, internalQueue is created as concurrent, which means the barrier flag is needed when writing to the property.
What the barrier flag does is ensure that the work item is executed only after all previously scheduled work items on the queue have finished.
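As a quick sanity check (my own example, not from the original answer), you can hammer Singleton.shared.foo from several threads; with the barrier in place, a read always sees a fully written value:

import Dispatch

let group = DispatchGroup()
let worker = DispatchQueue.global()

// Concurrent writers: each write is funneled through a barrier block on internalQueue.
for i in 0..<10 {
    worker.async(group: group) {
        Singleton.shared.setup(string: "state-\(i)")
    }
}

// Concurrent readers: sync reads on the concurrent internalQueue can overlap freely.
for _ in 0..<10 {
    worker.async(group: group) {
        _ = Singleton.shared.foo
    }
}

group.wait()
// The final read is enqueued after the already-submitted barrier writes,
// so it prints one of the "state-N" values, never a torn or half-updated string.
print(Singleton.shared.foo)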