Go's sync.RWMutex can deadlock when the same goroutine tries to acquire RLock() multiple times while a writer is waiting. This is by design: Go's writer-preferring behavior prevents writer starvation.
Key insight: RWMutex has a dual personality. Readers can delay writers indefinitely, but once a writer starts waiting, no new readers are allowed.
From the official Go documentation:

> If any goroutine calls RWMutex.Lock while the lock is already held by one or more readers, concurrent calls to RWMutex.RLock will block until the writer has acquired (and released) the lock, to ensure that the lock eventually becomes available to the writer.

Key insight: "concurrent calls" includes multiple calls from the same goroutine.
RWMutex exhibits different bias depending on the situation:

```go
// Continuous readers can starve writers indefinitely
for i := 0; i < 10; i++ {
	go func() {
		for {
			mu.RLock()
			time.Sleep(10 * time.Millisecond) // Hold lock briefly
			mu.RUnlock()
			time.Sleep(1 * time.Millisecond) // Brief gap
		}
	}()
}
// A writer trying Lock() may wait a VERY long time!
```

Rule: Active readers delay writers; any RLock can postpone a waiting writer.

```go
mu.RLock() // Reader 1 gets in
mu.Lock()  // Writer starts waiting -> blocks NEW readers
mu.RLock() // Reader 2 is now BLOCKED (even from the same goroutine!)
```

Rule: Waiting writers block new readers to prevent writer starvation.
This pattern appears frequently in concurrent Go code and can cause deadlocks:

```go
type DataStore struct {
	mu   sync.RWMutex
	data map[string]interface{}
}

// ❌ DEADLOCK RISK
func (ds *DataStore) GetFormattedData() string {
	ds.mu.RLock() // First RLock
	defer ds.mu.RUnlock()
	keys := ds.GetKeys() // GetKeys() also calls RLock() - POTENTIAL DEADLOCK!
	var result strings.Builder
	for _, key := range keys {
		result.WriteString(fmt.Sprintf("%s: %v\n", key, ds.data[key]))
	}
	return result.String()
}

func (ds *DataStore) GetKeys() []string {
	ds.mu.RLock() // Second RLock from same goroutine
	defer ds.mu.RUnlock()
	keys := make([]string, 0, len(ds.data))
	for k := range ds.data {
		keys = append(keys, k)
	}
	return keys
}
```

The deadlock sequence:
1. `GetFormattedData()` gets the first `RLock()` ✅
2. An `UpdateData()` method tries to get `Lock()` and starts waiting ⏳
3. Phase shift: the RWMutex switches to writer-preferring mode
4. `GetFormattedData()` calls `GetKeys()`, which tries a second `RLock()` and is BLOCKED (it counts as a "new reader") ❌
5. DEADLOCK: the writer can't proceed (Reader 1 still holds the lock), and the reader can't proceed (the writer is waiting)
```go
// deadlock_demo.go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	fmt.Println("Testing RWMutex nested RLock deadlock...")
	var mu sync.RWMutex
	var wg sync.WaitGroup
	wg.Add(2)

	// Goroutine 1: Simulates Sprint() -> Keys() pattern
	go func() {
		defer wg.Done()
		mu.RLock()
		fmt.Println("Reader: Got first RLock")
		// Let writer queue up
		time.Sleep(100 * time.Millisecond)
		fmt.Println("Reader: Trying second RLock...")
		mu.RLock()                               // ← DEADLOCK happens here
		fmt.Println("Reader: Got second RLock") // Never prints
		mu.RUnlock()
		mu.RUnlock()
	}()

	// Goroutine 2: Simulates Delete() method
	go func() {
		defer wg.Done()
		time.Sleep(50 * time.Millisecond) // Let reader get first lock
		fmt.Println("Writer: Requesting write lock...")
		mu.Lock() // This queues up and blocks new RLocks
		fmt.Println("Writer: Got write lock")
		mu.Unlock()
	}()

	// Detect deadlock
	done := make(chan bool)
	go func() {
		wg.Wait()
		done <- true
	}()

	select {
	case <-done:
		fmt.Println("✅ No deadlock")
	case <-time.After(2 * time.Second):
		fmt.Println("❌ DEADLOCK DETECTED!")
		fmt.Println("Explanation:")
		fmt.Println("1. Reader holds RLock")
		fmt.Println("2. Writer requests Lock (gets queued)")
		fmt.Println("3. RWMutex switches to writer-preferring mode")
		fmt.Println("4. Reader tries second RLock (blocked - counts as 'new reader')")
		fmt.Println("5. Writer can't proceed (reader still holds first RLock)")
		fmt.Println("6. DEADLOCK!")
	}
}
```

Run it:
```
$ go run deadlock_demo.go
Testing RWMutex nested RLock deadlock...
Reader: Got first RLock
Writer: Requesting write lock...
Reader: Trying second RLock...
❌ DEADLOCK DETECTED!
```

For contrast, the same nested RLock succeeds when no writer is waiting:

```go
// no_deadlock_demo.go
package main

import (
	"fmt"
	"sync"
)

func main() {
	fmt.Println("Testing nested RLock without writer...")
	var mu sync.RWMutex

	fmt.Println("Getting first RLock...")
	mu.RLock()
	fmt.Println("Getting second RLock...")
	mu.RLock() // ← Works fine when no writer is waiting
	fmt.Println("✅ Got both RLocks successfully!")

	mu.RUnlock()
	mu.RUnlock()
	fmt.Println("✅ Released both locks")
}
```

Run it:
```
$ go run no_deadlock_demo.go
Testing nested RLock without writer...
Getting first RLock...
Getting second RLock...
✅ Got both RLocks successfully!
✅ Released both locks
```

While Go prevents writer starvation, it allows reader-induced writer delays:
```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

func demonstrateWriterStarvation() {
	var mu sync.RWMutex
	var readerCount int32
	var writerDone atomic.Bool // atomic to avoid a data race on the stop flag

	// Start 10 continuous readers
	for i := 0; i < 10; i++ {
		go func(id int) {
			for !writerDone.Load() {
				mu.RLock()
				current := atomic.AddInt32(&readerCount, 1)
				if current == 1 {
					fmt.Printf("👥 Reader %d in (active: %d)\n", id, current)
				}
				time.Sleep(20 * time.Millisecond) // Hold briefly
				atomic.AddInt32(&readerCount, -1)
				mu.RUnlock()
				time.Sleep(5 * time.Millisecond) // Small gap
			}
		}(i)
	}

	// Writer attempts to get lock after 100ms
	time.Sleep(100 * time.Millisecond)
	fmt.Println("\n🔴 Writer: Attempting to acquire lock...")
	start := time.Now()
	mu.Lock()
	writerDone.Store(true)
	elapsed := time.Since(start)
	fmt.Printf("✅ Writer: Finally got lock after %v!\n", elapsed)
	if elapsed > 100*time.Millisecond {
		fmt.Printf("⚠️ Writer was delayed by reader activity!\n")
	}
	mu.Unlock()
}

func main() {
	demonstrateWriterStarvation()
}
```

Key insight: Even though no single reader blocks for long, the collective reader activity can significantly delay writers.
In read-heavy systems (95% reads, 5% writes):
- Read latency: ~0.1ms (concurrent access)
- Write latency: 10-100ms+ (waiting for reader gaps)
This is why some systems use alternatives like:

- `atomic.Value` for read-heavy data
- Copy-on-write patterns
- Versioned data structures (MVCC)
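The first two alternatives combine naturally: a copy-on-write map published through `atomic.Value` gives readers lock-free access, while writers copy, modify, and swap in a new map. Below is a minimal sketch of that pattern; the `Store` type and its methods are illustrative names, not from the original text.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Store publishes an immutable map snapshot via atomic.Value.
// Readers never take a lock; only writers are serialized.
type Store struct {
	v  atomic.Value // always holds a map[string]int
	mu sync.Mutex   // serializes writers only
}

func NewStore() *Store {
	s := &Store{}
	s.v.Store(map[string]int{})
	return s
}

// Get is lock-free: it reads the current snapshot.
func (s *Store) Get(key string) (int, bool) {
	m := s.v.Load().(map[string]int)
	val, ok := m[key]
	return val, ok
}

// Set copies the current map, applies the change, and publishes the copy.
func (s *Store) Set(key string, val int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	old := s.v.Load().(map[string]int)
	next := make(map[string]int, len(old)+1)
	for k, v := range old {
		next[k] = v
	}
	next[key] = val
	s.v.Store(next)
}

func main() {
	s := NewStore()
	s.Set("a", 1)
	if v, ok := s.Get("a"); ok {
		fmt.Println("a =", v) // prints: a = 1
	}
}
```

The trade-off: reads stay at nanosecond cost regardless of writer pressure, but each write copies the whole map, so this only pays off when writes are rare relative to reads.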
### ❌ The Problem: Nested Locking

```go
func (ds *DataStore) GetFormattedData() string {
	ds.mu.RLock()
	defer ds.mu.RUnlock()
	keys := ds.GetKeys() // GetKeys() also needs RLock = deadlock risk
	// ...
}
```

### ✅ Solution 1: Let Each Call Manage Its Own Lock

```go
func (ds *DataStore) GetFormattedData() string {
	var result strings.Builder
	for _, k := range ds.GetKeys() { // GetKeys() manages its own lock
		ds.mu.RLock()
		v := ds.data[k]
		ds.mu.RUnlock()
		result.WriteString(fmt.Sprintf("%s: %v\n", k, v))
	}
	return result.String()
}
```

### ✅ Solution 2: Inline the Logic

```go
func (ds *DataStore) GetFormattedData() string {
	ds.mu.RLock()
	defer ds.mu.RUnlock()

	// Extract keys inline (avoid nested call)
	keys := make([]string, 0, len(ds.data))
	for k := range ds.data {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	// Use data directly
	var result strings.Builder
	for _, k := range keys {
		result.WriteString(fmt.Sprintf("%s: %v\n", k, ds.data[k]))
	}
	return result.String()
}
```
### ✅ Solution 3: Separate Locked/Unlocked APIs
```go
// Public API - handles locking
func (ds *DataStore) GetFormattedData() string {
	ds.mu.RLock()
	defer ds.mu.RUnlock()
	return ds.getFormattedDataUnsafe()
}

// Private implementation - caller must hold the lock
func (ds *DataStore) getFormattedDataUnsafe() string {
	keys := ds.getKeysUnsafe() // No locking needed
	var result strings.Builder
	for _, k := range keys {
		result.WriteString(fmt.Sprintf("%s: %v\n", k, ds.data[k]))
	}
	return result.String()
}

func (ds *DataStore) getKeysUnsafe() []string {
	keys := make([]string, 0, len(ds.data))
	for k := range ds.data {
		keys = append(keys, k)
	}
	return keys
}
```

Key takeaways:

- RWMutex has a dual personality: reader-friendly until a writer arrives, then writer-preferring
- Active readers can delay writers indefinitely until writer starts waiting
- Waiting writers block ALL new readers including nested RLock from same goroutine
- "New reader" = any RLock attempt after writer starts waiting, even from same thread
- Nested RLock is never safe in concurrent code: it produces timing-dependent deadlocks
- Writer starvation vs reader delays: Go prevents the former, allows the latter
- Design principle: Avoid nested mutex calls or separate locked/unlocked APIs
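The separate locked/unlocked API principle also pays off for write methods: once the exclusive lock is held at the API boundary, any unlocked helper can be reused without nested-locking risk. Here is a self-contained sketch; the DeleteAll method is a hypothetical example, not from the original text, while DataStore and getKeysUnsafe mirror the earlier snippets.

```go
package main

import (
	"fmt"
	"sync"
)

type DataStore struct {
	mu   sync.RWMutex
	data map[string]interface{}
}

// getKeysUnsafe assumes the caller already holds ds.mu (read or write).
func (ds *DataStore) getKeysUnsafe() []string {
	keys := make([]string, 0, len(ds.data))
	for k := range ds.data {
		keys = append(keys, k)
	}
	return keys
}

// DeleteAll takes the exclusive lock once, then reuses the unlocked
// helper - no nested RLock, so no deadlock even with waiting writers.
func (ds *DataStore) DeleteAll() []string {
	ds.mu.Lock()
	defer ds.mu.Unlock()
	removed := ds.getKeysUnsafe()
	for _, k := range removed {
		delete(ds.data, k)
	}
	return removed
}

func main() {
	ds := &DataStore{data: map[string]interface{}{"a": 1, "b": 2}}
	removed := ds.DeleteAll()
	fmt.Println("removed", len(removed), "keys; remaining:", len(ds.data))
	// prints: removed 2 keys; remaining: 0
}
```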
- Go sync.RWMutex Documentation
- GitHub Issue: golang/go#7576 - "document RWMutex.RLock shouldn't be used recursively"
Test the code yourself! Copy the examples above into .go files and run them to see the behavior firsthand.
Note: This analysis and code examples were generated with assistance from AI to explore and demonstrate Go's RWMutex behavior.