Both `Mutex<T>` and `RwLock<T>` protect shared data across threads, but they differ in who can access the data simultaneously.
- `Mutex<T>`: exclusive access only. One lock, one holder at a time, regardless of intent.
- `RwLock<T>`: distinguishes reads from writes:
  - Any number of threads can read simultaneously
  - Only one thread can write, and only when no readers exist
```
Mutex:   Thread A (read)   ──────────┤
         Thread B (read)             ├──────────┤  waits
         Thread C (write)                       ├──────────┤

RwLock:  Thread A (read)   ──────────┤
         Thread B (read)   ──────────┤             concurrent!
         Thread C (write)            ├──────────┤  waits for readers
```
## When to Use Mutex
Use it when writes are as frequent as reads, or when access patterns are mixed enough that the distinction doesn't matter.
Real-world example: a job queue
```rust
use std::collections::VecDeque;
use std::sync::Mutex;

struct Job; // payload elided

struct JobQueue {
    inner: Mutex<VecDeque<Job>>,
}

impl JobQueue {
    fn push(&self, job: Job) {
        self.inner.lock().unwrap().push_back(job);
    }

    fn pop(&self) -> Option<Job> {
        self.inner.lock().unwrap().pop_front()
    }
}
```
Every operation mutates the queue: push and pop both need exclusive access. `RwLock` would buy you nothing here, since you can never safely let two threads pop simultaneously anyway.
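A sketch of how such a queue might be drained by a pool of worker threads — the worker count and the integer "jobs" are stand-ins for illustration:

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::thread;

// Drain the queue with `workers` threads; returns how many jobs were processed.
fn run_workers(jobs: VecDeque<i32>, workers: usize) -> usize {
    let queue = Arc::new(Mutex::new(jobs));
    let processed = Arc::new(Mutex::new(0usize));

    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let queue = Arc::clone(&queue);
            let processed = Arc::clone(&processed);
            thread::spawn(move || loop {
                // Pop in its own statement so the queue guard is dropped
                // before any further work happens.
                let job = queue.lock().unwrap().pop_front();
                match job {
                    Some(_job) => *processed.lock().unwrap() += 1,
                    None => break,
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    let n = *processed.lock().unwrap();
    n
}
```

Because every `pop_front` holds the lock exclusively, no two workers can ever remove the same job.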
Other good fits for Mutex:
- Connection pools (checkout/return both mutate state)
- Write-heavy counters or accumulators
- Any data where reads and writes are equally frequent
When in doubt, `Mutex` is simpler and harder to misuse.
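The write-heavy counter case above is the simplest possible illustration: every access is a write, so there is nothing for `RwLock` to parallelize. A minimal sketch (thread and iteration counts are arbitrary):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Each thread increments the shared counter `per_thread` times.
fn parallel_count(threads: usize, per_thread: u64) -> u64 {
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    // Every access is a write, so RwLock would add
                    // overhead without ever allowing concurrency.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    let n = *counter.lock().unwrap();
    n
}
```

For a plain integer like this, `AtomicU64` would avoid the lock entirely; `Mutex` becomes the right shape once the accumulator is a struct with multiple fields.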
## When to Use RwLock
Use it when reads heavily outnumber writes and reads are non-trivial in duration (i.e., they hold the lock long enough for contention to matter).
**Key insight:** `RwLock` shines only when reads outnumber writes by a large margin *and* the lock is held long enough for concurrent readers to actually overlap. If either condition is missing, you won't see a benefit over `Mutex`.
Real-world example: in-memory config / feature flags
```rust
use std::collections::HashMap;
use std::sync::RwLock;

struct FeatureFlags {
    flags: RwLock<HashMap<String, bool>>,
}

impl FeatureFlags {
    // Called by every request handler — thousands of times per second.
    fn is_enabled(&self, name: &str) -> bool {
        self.flags.read().unwrap().get(name).copied().unwrap_or(false)
    }

    // Called rarely — admin panel, config reload, etc.
    fn set(&self, name: &str, value: bool) {
        self.flags.write().unwrap().insert(name.to_string(), value);
    }
}
```
Hundreds of request handlers read the flags concurrently with zero contention. A config update is rare — it briefly blocks new readers while it writes, then releases.
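The read side under load can be sketched directly on the raw `RwLock<HashMap<..>>`; the flag name and reader count here are illustrative:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::thread;

// Simulate `readers` request handlers all checking the same flag at once.
// Read guards overlap freely; only a writer would make them wait.
fn check_on_many_threads(flags: Arc<RwLock<HashMap<String, bool>>>, readers: usize) -> bool {
    let handles: Vec<_> = (0..readers)
        .map(|_| {
            let flags = Arc::clone(&flags);
            thread::spawn(move || {
                // Any number of these read guards can be held simultaneously.
                flags.read().unwrap().get("new_checkout").copied().unwrap_or(false)
            })
        })
        .collect();

    handles.into_iter().all(|h| h.join().unwrap())
}
```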
Real-world example: DNS / routing table cache
```rust
use std::collections::HashMap;
use std::net::IpAddr;
use std::sync::RwLock;

struct RouteEntry; // next hop, metric, etc. elided

struct RouteTable {
    routes: RwLock<HashMap<IpAddr, RouteEntry>>,
}
```
Routing lookups happen on every packet — potentially millions per second — and are pure reads. Route updates happen when BGP routes change, which is comparatively rare.
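The split between the per-packet read path and the rare write path might look like this, with the entry type simplified to just a next-hop address for brevity:

```rust
use std::collections::HashMap;
use std::net::IpAddr;
use std::sync::RwLock;

// Simplified table: destination -> next hop.
type Routes = RwLock<HashMap<IpAddr, IpAddr>>;

// Hot path: pure read, taken on every packet. Lookups from many
// threads proceed concurrently.
fn lookup(routes: &Routes, dst: IpAddr) -> Option<IpAddr> {
    routes.read().unwrap().get(&dst).copied()
}

// Cold path: applied only when a route changes. Briefly excludes
// all readers while the map is updated.
fn update(routes: &Routes, dst: IpAddr, next_hop: IpAddr) {
    routes.write().unwrap().insert(dst, next_hop);
}
```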
Other good fits for RwLock:
- In-memory caches (read always, invalidate occasionally)
- Application configuration loaded at startup, reloaded rarely
- Shared lookup tables or registries
## Pitfalls

### Mutex Pitfalls

**1. Deadlock from double-locking**
```rust
let lock = Arc::new(Mutex::new(0));
let _guard = lock.lock().unwrap();

// Later in the same thread...
let _guard2 = lock.lock().unwrap(); // Deadlocks! Already holding the lock.
```
⚠ **Watch out:** `Mutex` in Rust's stdlib is not reentrant. If you try to lock a mutex you already hold, the thread blocks waiting for itself forever.
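The usual fix is structural: lock once at the outer level and pass the data down, so helpers never re-lock. A sketch (function names are illustrative):

```rust
use std::sync::Mutex;

// Helpers take the data, not the lock, so they can never re-lock it.
fn add_item(items: &mut Vec<i32>, item: i32) {
    items.push(item);
}

fn locked_len_after_add(lock: &Mutex<Vec<i32>>) -> usize {
    let mut guard = lock.lock().unwrap();
    // Deref coercion: &mut MutexGuard<Vec<i32>> -> &mut Vec<i32>.
    add_item(&mut guard, 42);
    guard.len()
} // guard dropped here; the lock is free again
```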
**2. Holding a lock across an `.await`**
```rust
// DON'T do this with std::sync::Mutex in async code.
async fn bad(state: Arc<Mutex<State>>) {
    let guard = state.lock().unwrap();
    some_async_operation().await; // Lock held across await point!
    drop(guard);
}
```
⚠ **Watch out:** the guard is held while the future is suspended, potentially blocking other Tokio tasks on the same thread. Use `tokio::sync::Mutex` in async contexts instead.
**3. Poisoning on panic**

If a thread panics while holding a `Mutex`, the lock becomes poisoned. Subsequent `.lock()` calls return `Err`. Most code blindly calls `.unwrap()` here; make sure that's intentional.
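If unwrapping is not what you want, you can recover deliberately: `PoisonError::into_inner` hands back the guard anyway. A sketch that poisons a lock on purpose and then reads through it:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Poison the lock by panicking while holding it, then recover the data.
fn recover_after_panic(lock: Arc<Mutex<Vec<i32>>>) -> usize {
    let lock2 = Arc::clone(&lock);
    // Ignore the Err from join(): the panic is deliberate.
    let _ = thread::spawn(move || {
        let _guard = lock2.lock().unwrap();
        panic!("worker died mid-update");
    })
    .join();

    // .unwrap() here would propagate the poison as a panic; into_inner()
    // returns the guard so the (possibly half-updated) data stays usable.
    let guard = lock.lock().unwrap_or_else(|poisoned| poisoned.into_inner());
    guard.len()
}
```

Poisoning only signals that an invariant *might* be broken; whether the data is still valid is up to your code to decide.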
### RwLock Pitfalls

**1. Write starvation**
On some platforms (notably Linux with `pthread_rwlock`), a continuous stream of readers can indefinitely block a writer. If your read rate is high enough, writes may never get through.
```rust
// If many threads do this in a tight loop...
loop {
    let _r = rwlock.read().unwrap();
    // ...a writer may never acquire the write lock.
}
```
⚠ **Watch out:** write starvation is platform-dependent and hard to reproduce in testing. Consider whether your workload truly benefits from `RwLock` before committing to it.
**2. Often slower than `Mutex` for short-held locks**
`RwLock` has more bookkeeping overhead than `Mutex`. For short-held locks on small data (e.g., a single integer), `Mutex` often wins on benchmarks. Only reach for `RwLock` when locks are held long enough for concurrent reads to actually overlap.
**3. Deadlock from trying to upgrade read → write**
```rust
let r = rwlock.read().unwrap();
// ... decide you need to write ...
let w = rwlock.write().unwrap(); // Deadlock! Still holding `r`.
```
⚠ **Watch out:** there's no "upgrade" operation in Rust's stdlib `RwLock`. You must drop the read guard before acquiring a write lock.
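The working pattern is read, drop, then write — and because another thread may intervene between the two locks, re-check the condition under the write lock. A sketch (the length check is an arbitrary stand-in for your condition):

```rust
use std::sync::RwLock;

// Read to decide, drop the read guard, then write.
fn push_if_short(lock: &RwLock<Vec<i32>>, max_len: usize, value: i32) {
    let needs_push = {
        let r = lock.read().unwrap();
        r.len() < max_len
    }; // read guard dropped here, so write() below cannot self-deadlock

    if needs_push {
        let mut w = lock.write().unwrap();
        // Another thread may have written between our read and this write,
        // so re-check the condition under the write lock.
        if w.len() < max_len {
            w.push(value);
        }
    }
}
```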
**4. Same `.await` problem as `Mutex`**

Use `tokio::sync::RwLock` in async code.
## Quick Decision Guide
| Situation | Use |
|---|---|
| Writes as common as reads | `Mutex` |
| Reads vastly outnumber writes | `RwLock` |
| You're in async code | `tokio::sync::Mutex` / `tokio::sync::RwLock` |
| Simple, small data, short lock duration | `Mutex` (lower overhead) |
| Long-held read locks, rare writes | `RwLock` |
| You need reentrancy | Neither — restructure your code |
**Key insight:** the practical rule of thumb is to start with `Mutex`. Profile, observe contention, then consider `RwLock` if reads truly dominate and you can measure the improvement.