-rw-r--r--  README.md                    | 10
-rw-r--r--  src/collection/boxed.rs      |  6
-rw-r--r--  src/collection/owned.rs      |  4
-rw-r--r--  src/collection/ref.rs        |  4
-rw-r--r--  src/collection/retry.rs      |  4
-rw-r--r--  src/collection/utils.rs      |  4
-rw-r--r--  src/lockable.rs              |  2
-rw-r--r--  src/mutex/mutex.rs           |  8
-rw-r--r--  src/poisonable/poisonable.rs |  4
-rw-r--r--  src/rwlock/rwlock.rs         | 14
10 files changed, 33 insertions(+), 27 deletions(-)
diff --git a/README.md b/README.md
index 9dea5c4..bac3024 100644
--- a/README.md
+++ b/README.md
@@ -123,16 +123,20 @@ println!("{}", data[1]);
## Future Work
-Are the ergonomics here any good? This is completely uncharted territory. Maybe there are some useful helper methods we don't have here yet. Maybe `try_lock` should return a `Result`. Maybe `lock_api` or `spin` implements some useful methods that I kept out for this proof of concept. Maybe there are some lock-specific methods that could be added to `LockCollection`. More types might be lockable using a lock collection.
+I want to have another go at `RefLockCollection` and `BoxedLockCollection`. I understand pinning better now than I did when I first wrote this, so I might be able to coalesce them now. `Pin` is not a very good API, so I'd need to implement a workaround for `Unpin` types.
+
+I'd like some way to mutate the contents of a `BoxedLockCollection`. Currently, this can be done by taking the child, mutating it, and creating a new `BoxedLockCollection`. The reason I haven't built this in yet is that the set of sorted locks would need to be recalculated afterwards.
It'd be nice to be able to use the mutexes built into the operating system, saving on binary size. Using `std::sync::Mutex` sounds promising, but it doesn't implement `RawMutex`, and implementing that is very difficult, if not impossible. Maybe I could implement my own abstraction over the OS mutexes. I could also simply implement `Lockable` for the standard library mutex.
-I've been thinking about adding `Condvar` and `Barrier`, but I've been stopped by two things. I don't use either of those very often, so I'm probably not the right person to try to implement either of them. They're also weird, and harder to prevent deadlocking for. They're sort of the opposite of a mutex, since a mutex guarantees that at least one thread can always access each resource.
+I've been thinking about adding types like `Condvar` and `Barrier`, but I've been stopped by two things. I don't use either of those very often, so I'm probably not the right person to try to implement either of them. They're also weird, and harder to prevent deadlocks with. They're sort of the opposite of a mutex, since a mutex guarantees that at least one thread can always access each resource. I think I can at least implement a deadlock-free `Once`, but it doesn't fit well into the existing lock collection API. There are other types that can deadlock too, like `JoinHandle` and `Stdio`, but I'm hesitant to try those.
-Is upgrading an `RwLock` even possible here? I don't know, but I'll probably look into it at some point. Downgrading is definitely possible in at least some cases.
+It's becoming clearer to me that the main blocker for people adopting this is async support. `ThreadKey` doesn't work well in async contexts because multiple tasks can run on a single thread, and they can move between threads over time. I think the future might hold an `async-happylock` trait which uses a `TaskKey`. Special care will need to be taken to make sure that blocking calls to `lock` don't cause a deadlock.
It'd be interesting to add some methods such as `lock_clone` or `lock_swap`. This would still require a thread key, in case the mutex is already locked. The only way this could be done without a thread key is with a `&mut Mutex<T>`, but we already have `as_mut`. A `try_lock_clone` or `try_lock_swap` might not need a `ThreadKey` though. A special lock that looks like `Cell` but implements `Sync` could be shared without a thread key, because the lock would be dropped immediately (preventing non-preemptive allocation). It might make some common operations easier.
+Maybe `lock_api` or `spin` implements some useful methods that I kept out. Maybe there are some lock-specific methods that could be added to `LockCollection`. More types might be lockable using a lock collection. Is upgrading an `RwLock` even possible here? I don't know, but I'll probably look into it at some point. Downgrading is definitely possible in at least some cases.
+
We could implement a `Readonly` wrapper around the collections that doesn't allow access to `lock` and `try_lock`. The idea would be that if you're not exclusively locking the collection, then you don't need to check for duplicates in the collection. Calling `.read()` twice on a recursive `RwLock` does not cause a deadlock. This would also require a `Recursive` trait.
I want to try to get this working without the standard library. There are a few problems with this though. For instance, this crate uses `thread_local` to allow other threads to have their own keys. Also, the only practical type of mutex that would work is a spinlock, although more could be implemented using the `RawMutex` trait. The `Lockable` trait requires memory allocation at this time in order to check for duplicate locks.
diff --git a/src/collection/boxed.rs b/src/collection/boxed.rs
index 2397bd3..a048d2b 100644
--- a/src/collection/boxed.rs
+++ b/src/collection/boxed.rs
@@ -21,9 +21,9 @@ fn contains_duplicates(l: &[&dyn RawLock]) -> bool {
}
unsafe impl<L: Lockable> RawLock for BoxedLockCollection<L> {
- fn kill(&self) {
+ fn poison(&self) {
for lock in &self.locks {
- lock.kill();
+ lock.poison();
}
}
@@ -196,6 +196,8 @@ impl<L> BoxedLockCollection<L> {
self.locks.clear();
// safety: this was allocated using a box, and is now unique
let boxed: Box<UnsafeCell<L>> = Box::from_raw(self.data.cast_mut());
+ // to prevent a double free
+ std::mem::forget(self);
boxed.into_inner()
}
diff --git a/src/collection/owned.rs b/src/collection/owned.rs
index e4cfe46..59e1ff8 100644
--- a/src/collection/owned.rs
+++ b/src/collection/owned.rs
@@ -14,10 +14,10 @@ fn get_locks<L: Lockable>(data: &L) -> Vec<&dyn RawLock> {
}
unsafe impl<L: Lockable> RawLock for OwnedLockCollection<L> {
- fn kill(&self) {
+ fn poison(&self) {
let locks = get_locks(&self.data);
for lock in locks {
- lock.kill();
+ lock.poison();
}
}
diff --git a/src/collection/ref.rs b/src/collection/ref.rs
index 4fa5485..a9c3579 100644
--- a/src/collection/ref.rs
+++ b/src/collection/ref.rs
@@ -40,9 +40,9 @@ where
}
unsafe impl<L: Lockable> RawLock for RefLockCollection<'_, L> {
- fn kill(&self) {
+ fn poison(&self) {
for lock in &self.locks {
- lock.kill();
+ lock.poison();
}
}
diff --git a/src/collection/retry.rs b/src/collection/retry.rs
index 687c5ec..cb6a1fb 100644
--- a/src/collection/retry.rs
+++ b/src/collection/retry.rs
@@ -36,10 +36,10 @@ fn contains_duplicates<L: Lockable>(data: L) -> bool {
}
unsafe impl<L: Lockable> RawLock for RetryingLockCollection<L> {
- fn kill(&self) {
+ fn poison(&self) {
let locks = get_locks(&self.data);
for lock in locks {
- lock.kill();
+ lock.poison();
}
}
diff --git a/src/collection/utils.rs b/src/collection/utils.rs
index f418386..36b19be 100644
--- a/src/collection/utils.rs
+++ b/src/collection/utils.rs
@@ -96,7 +96,7 @@ pub unsafe fn attempt_to_recover_locks_from_panic(locked: &RefCell<Vec<&dyn RawL
locked_lock.raw_unlock();
}
},
- || locked.borrow().iter().for_each(|l| l.kill()),
+ || locked.borrow().iter().for_each(|l| l.poison()),
)
}
@@ -108,6 +108,6 @@ pub unsafe fn attempt_to_recover_reads_from_panic(locked: &RefCell<Vec<&dyn RawL
locked_lock.raw_unlock_read();
}
},
- || locked.borrow().iter().for_each(|l| l.kill()),
+ || locked.borrow().iter().for_each(|l| l.poison()),
)
}
diff --git a/src/lockable.rs b/src/lockable.rs
index 1154d16..d599820 100644
--- a/src/lockable.rs
+++ b/src/lockable.rs
@@ -17,7 +17,7 @@ use std::mem::MaybeUninit;
pub unsafe trait RawLock {
/// Causes all subsequent calls to the `lock` function on this lock to
/// panic. This does not affect anything currently holding the lock.
- fn kill(&self);
+ fn poison(&self);
/// Blocks until the lock is acquired
///
diff --git a/src/mutex/mutex.rs b/src/mutex/mutex.rs
index 4671b4f..2cf6bbf 100644
--- a/src/mutex/mutex.rs
+++ b/src/mutex/mutex.rs
@@ -13,7 +13,7 @@ use crate::poisonable::PoisonFlag;
use super::{Mutex, MutexGuard, MutexRef};
unsafe impl<T: ?Sized, R: RawMutex> RawLock for Mutex<T, R> {
- fn kill(&self) {
+ fn poison(&self) {
self.poison.poison();
}
@@ -22,7 +22,7 @@ unsafe impl<T: ?Sized, R: RawMutex> RawLock for Mutex<T, R> {
// if the closure unwraps, then the mutex will be killed
let this = AssertUnwindSafe(self);
- handle_unwind(|| this.raw.lock(), || self.kill())
+ handle_unwind(|| this.raw.lock(), || self.poison())
}
unsafe fn raw_try_lock(&self) -> bool {
@@ -32,13 +32,13 @@ unsafe impl<T: ?Sized, R: RawMutex> RawLock for Mutex<T, R> {
// if the closure unwraps, then the mutex will be killed
let this = AssertUnwindSafe(self);
- handle_unwind(|| this.raw.try_lock(), || self.kill())
+ handle_unwind(|| this.raw.try_lock(), || self.poison())
}
unsafe fn raw_unlock(&self) {
// if the closure unwraps, then the mutex will be killed
let this = AssertUnwindSafe(self);
- handle_unwind(|| this.raw.unlock(), || self.kill())
+ handle_unwind(|| this.raw.unlock(), || self.poison())
}
// this is the closest thing to a read we can get, but Sharable isn't
diff --git a/src/poisonable/poisonable.rs b/src/poisonable/poisonable.rs
index 79f90d9..3ef1cdd 100644
--- a/src/poisonable/poisonable.rs
+++ b/src/poisonable/poisonable.rs
@@ -12,8 +12,8 @@ use super::{
};
unsafe impl<L: Lockable + RawLock> RawLock for Poisonable<L> {
- fn kill(&self) {
- self.inner.kill()
+ fn poison(&self) {
+ self.inner.poison()
}
unsafe fn raw_lock(&self) {
diff --git a/src/rwlock/rwlock.rs b/src/rwlock/rwlock.rs
index 66c7362..8bb170c 100644
--- a/src/rwlock/rwlock.rs
+++ b/src/rwlock/rwlock.rs
@@ -14,7 +14,7 @@ use crate::lockable::{
use super::{PoisonFlag, RwLock, RwLockReadGuard, RwLockReadRef, RwLockWriteGuard, RwLockWriteRef};
unsafe impl<T: ?Sized, R: RawRwLock> RawLock for RwLock<T, R> {
- fn kill(&self) {
+ fn poison(&self) {
self.poison.poison();
}
@@ -26,7 +26,7 @@ unsafe impl<T: ?Sized, R: RawRwLock> RawLock for RwLock<T, R> {
// if the closure unwraps, then the mutex will be killed
let this = AssertUnwindSafe(self);
- handle_unwind(|| this.raw.lock_exclusive(), || self.kill())
+ handle_unwind(|| this.raw.lock_exclusive(), || self.poison())
}
unsafe fn raw_try_lock(&self) -> bool {
@@ -36,13 +36,13 @@ unsafe impl<T: ?Sized, R: RawRwLock> RawLock for RwLock<T, R> {
// if the closure unwraps, then the mutex will be killed
let this = AssertUnwindSafe(self);
- handle_unwind(|| this.raw.try_lock_exclusive(), || self.kill())
+ handle_unwind(|| this.raw.try_lock_exclusive(), || self.poison())
}
unsafe fn raw_unlock(&self) {
// if the closure unwraps, then the mutex will be killed
let this = AssertUnwindSafe(self);
- handle_unwind(|| this.raw.unlock_exclusive(), || self.kill())
+ handle_unwind(|| this.raw.unlock_exclusive(), || self.poison())
}
unsafe fn raw_read(&self) {
@@ -53,7 +53,7 @@ unsafe impl<T: ?Sized, R: RawRwLock> RawLock for RwLock<T, R> {
// if the closure unwraps, then the mutex will be killed
let this = AssertUnwindSafe(self);
- handle_unwind(|| this.raw.lock_shared(), || self.kill())
+ handle_unwind(|| this.raw.lock_shared(), || self.poison())
}
unsafe fn raw_try_read(&self) -> bool {
@@ -63,13 +63,13 @@ unsafe impl<T: ?Sized, R: RawRwLock> RawLock for RwLock<T, R> {
// if the closure unwraps, then the mutex will be killed
let this = AssertUnwindSafe(self);
- handle_unwind(|| this.raw.try_lock_shared(), || self.kill())
+ handle_unwind(|| this.raw.try_lock_shared(), || self.poison())
}
unsafe fn raw_unlock_read(&self) {
// if the closure unwraps, then the mutex will be killed
let this = AssertUnwindSafe(self);
- handle_unwind(|| this.raw.unlock_shared(), || self.kill())
+ handle_unwind(|| this.raw.unlock_shared(), || self.poison())
}
}