Append not thread-safe?
In Go, no value is safe for concurrent reads and writes; slices (or rather their slice headers) are no exception.
Yes, your code has data races. Run it with the -race option to verify.
Example:
package main

import "sync"

type myClass struct {
    AttributeName string
}

func main() {
    sourceSlice := make([]myClass, 100)
    destSlice := make([]myClass, 0)
    var wg sync.WaitGroup
    for _, myObject := range sourceSlice {
        wg.Add(1)
        go func(closureMyObject myClass) {
            defer wg.Done()
            var tmpObj myClass
            tmpObj.AttributeName = closureMyObject.AttributeName
            // Unsynchronized append: every goroutine reads and writes destSlice's header.
            destSlice = append(destSlice, tmpObj)
        }(myObject)
    }
    wg.Wait()
}
Running it with
go run -race play.go
the output is:
==================
WARNING: DATA RACE
Read at 0x00c420074000 by goroutine 6:
  main.main.func1()
      /home/icza/gows/src/play/play.go:20 +0x69
Previous write at 0x00c420074000 by goroutine 5:
  main.main.func1()
      /home/icza/gows/src/play/play.go:20 +0x106
Goroutine 6 (running) created at:
  main.main()
      /home/icza/gows/src/play/play.go:21 +0x1cb
Goroutine 5 (running) created at:
  main.main()
      /home/icza/gows/src/play/play.go:21 +0x1cb
==================
==================
WARNING: DATA RACE
Read at 0x00c42007e000 by goroutine 6:
  runtime.growslice()
      /usr/local/go/src/runtime/slice.go:82 +0x0
  main.main.func1()
      /home/icza/gows/src/play/play.go:20 +0x1a7
Previous write at 0x00c42007e000 by goroutine 5:
  main.main.func1()
      /home/icza/gows/src/play/play.go:20 +0xc4
Goroutine 6 (running) created at:
  main.main()
      /home/icza/gows/src/play/play.go:21 +0x1cb
Goroutine 5 (running) created at:
  main.main()
      /home/icza/gows/src/play/play.go:21 +0x1cb
==================
==================
WARNING: DATA RACE
Write at 0x00c420098120 by goroutine 80:
  main.main.func1()
      /home/icza/gows/src/play/play.go:20 +0xc4
Previous write at 0x00c420098120 by goroutine 70:
  main.main.func1()
      /home/icza/gows/src/play/play.go:20 +0xc4
Goroutine 80 (running) created at:
  main.main()
      /home/icza/gows/src/play/play.go:21 +0x1cb
Goroutine 70 (running) created at:
  main.main()
      /home/icza/gows/src/play/play.go:21 +0x1cb
==================
Found 3 data race(s)
exit status 66
The solution is simple: use a sync.Mutex to protect writes to the destSlice value:
var (
    mu        = &sync.Mutex{}
    destSlice = make([]myClass, 0)
)
var wg sync.WaitGroup
for _, myObject := range sourceSlice {
    wg.Add(1)
    go func(closureMyObject myClass) {
        defer wg.Done()
        var tmpObj myClass
        tmpObj.AttributeName = closureMyObject.AttributeName
        // Only one goroutine may append at a time.
        mu.Lock()
        destSlice = append(destSlice, tmpObj)
        mu.Unlock()
    }(myObject)
}
wg.Wait()
You could also solve it in other ways, e.g. you could use a channel on which you send the values to be appended, and have a designated goroutine receive from this channel and do the appending.
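For illustration, here is a minimal sketch of that channel variant, reusing the myClass type and sourceSlice from the example above (the results channel name is just chosen for this sketch); only the receiving goroutine ever touches destSlice, so no mutex is needed:

results := make(chan myClass)

var wg sync.WaitGroup
for _, myObject := range sourceSlice {
    wg.Add(1)
    go func(closureMyObject myClass) {
        defer wg.Done()
        var tmpObj myClass
        tmpObj.AttributeName = closureMyObject.AttributeName
        // Send the value instead of appending directly.
        results <- tmpObj
    }(myObject)
}

// Close the channel once all senders have finished.
go func() {
    wg.Wait()
    close(results)
}()

// Only this (the receiving) goroutine appends to destSlice.
destSlice := make([]myClass, 0, len(sourceSlice))
for obj := range results {
    destSlice = append(destSlice, obj)
}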
Also note that while concurrent writes to the slice header are not safe, slice elements act as distinct variables, so different slice elements can be written concurrently without synchronization. See Can I concurrently write different slice elements?
It's quite an old question, but there is another small improvement that lets you get rid of the mutex: write each result to its own index in a pre-allocated slice. Since each goroutine uses its own index, no synchronization is necessary.
destSlice := make([]myClass, len(sourceSlice))
var wg sync.WaitGroup
for i, myObject := range sourceSlice {
wg.Add(1)
go func(idx int, closureMyObject myClass) {
defer wg.Done()
var tmpObj myClass
tmpObj.AttributeName = closureMyObject.AttributeName
destSlice[idx] = tmpObj
}(i, myObject)
}
wg.Wait()
The question has been answered, but my favorite way to solve this problem is with errgroup. One of the examples in its docs is this exact problem, plus one nice addition: the handling of errors.
Below is the meat of the example from the docs:
g, ctx := errgroup.WithContext(ctx)
searches := []Search{Web, Image, Video}
results := make([]Result, len(searches))
for i, search := range searches {
    i, search := i, search // https://golang.org/doc/faq#closures_and_goroutines
    g.Go(func() error {
        result, err := search(ctx, query)
        if err == nil {
            results[i] = result
        }
        return err
    })
}
if err := g.Wait(); err != nil {
    return nil, err
}
return results, nil
Hope this is helpful to those who are not aware of the errgroup package.
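For completeness, here is a sketch of the same pattern applied to the original myClass example, assuming golang.org/x/sync/errgroup is imported; since copying a struct cannot fail, the closures simply return nil:

var g errgroup.Group
destSlice := make([]myClass, len(sourceSlice))
for i, myObject := range sourceSlice {
    i, myObject := i, myObject // capture loop variables for the closure
    g.Go(func() error {
        // Each goroutine writes only its own element; no locking needed.
        destSlice[i] = myClass{AttributeName: myObject.AttributeName}
        return nil
    })
}
// Wait returns the first non-nil error (none in this example).
if err := g.Wait(); err != nil {
    panic(err)
}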
To give a more recent option for this kind of problem: Go has released a map type designed for concurrent use:
https://godoc.org/golang.org/x/sync/syncmap
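Note that this is a concurrent map (a key/value store), not a slice, so it is not a drop-in replacement for append; its Map type was later promoted into the standard library as sync.Map in Go 1.9. Below is a minimal sketch of how it could collect the results keyed by index, again reusing the myClass type and sourceSlice from the first answer:

var m sync.Map

var wg sync.WaitGroup
for i, myObject := range sourceSlice {
    wg.Add(1)
    go func(idx int, closureMyObject myClass) {
        defer wg.Done()
        // Store is safe for concurrent use by multiple goroutines.
        m.Store(idx, myClass{AttributeName: closureMyObject.AttributeName})
    }(i, myObject)
}
wg.Wait()

// Collect the stored values back into a slice after all goroutines finish.
destSlice := make([]myClass, len(sourceSlice))
m.Range(func(key, value interface{}) bool {
    destSlice[key.(int)] = value.(myClass)
    return true
})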