
Conversation


@Thomas-Mewily Thomas-Mewily commented Nov 10, 2025

This RFC introduces DerefInto and DerefMutInto, supertraits of Deref / DerefMut that return their targets by value, enabling ergonomic RAII access for types like RefCell, Mutex, and RwLock.

Rendered


```rust
let cell: RefCell<Foo> = RefCell::new(Foo { bar: 42 });
let bar = &cell.bar;
```
Member


Where is the Ref<'out, T> stored so you can borrow from it? If it's a temporary, like in &cell.borrow().bar, you can't actually store the returned reference with let bar, since the lifetime expires at the end of the statement when the Ref<'out, T> is dropped. If you say its lifetime is extended and it is dropped at the end of the block, imo that makes it ill-suited for RefCell and other locking-style types, since you can't drop the Ref before you need to access the RefCell again for borrow_mut() or similar:

```rust
let cell = RefCell::new(MyStruct { foo: 1, bar: 2 });
let foo = &cell.foo;
let bar = &mut cell.bar; // `borrow_mut()` panics since we have a live `Ref` still
dbg!(foo);
```

Author

Maybe borrowing the .foo field (implying you will use it later) wasn't the best example, but there are also other practical use cases such as:

```rust
let cell = RefCell::new(MyStruct { foo: 1, bar: 2 });

let sum = cell.foo_plus_bar(); // calling method
let x: i32 = cell.foo;         // read  field
cell.foo = 42;                 // write field
```

About the storage behavior

Option 1: The temporary expires at the end of the statement:

Since deref_into() is still a method, I would assume it produces a temporary value in the expression that expires at the end of the statement.

That would make it usable for instantaneous access, where the RAII guard (like Ref<'stmt, T>) lives only for the duration of the statement (as in the code above).
If you tried to store a reference, you'd get a compile error like "cannot borrow temporary value as it does not live long enough".

But it feels weird not being able to borrow it, which is why I prefer option 2:

Option 2: Temporary + lifetime extension (+ different drop semantics):

However, if its lifetime is extended and it lives until the end of the block, you can't manually drop the temporary t1 Ref before the end of the block, as you noted. Therefore, the following code would not work:

```rust
{
    let cell = RefCell::new(MyStruct { foo: 1, bar: 2 });

    let foo = &cell.foo; // create a temporary t1: `Ref<'block, i32>` here
                         // + lifetime-extend it, then deref t1 to take a
                         // reference to foo: `&'block i32` using the Deref trait.

    let bar = &mut cell.bar; // panics since t1 still exists
    // t1 dropped here
}
```

The issue is that the temporary t1: Ref<'block, i32> acts like a reference wrapper with some RAII, but it follows the same lexical drop semantics as any value (like a struct/enum): it is dropped at the end of the block.

Regular references (&T / &mut T) instead follow non-lexical lifetimes. If this temporary t1 behaved the same way, the following code would work:

```rust
{
    let cell = RefCell::new(MyStruct { foo: 1, bar: 2 });

    let foo = &cell.foo; // create a temporary t1: `Ref<'block, i32>` here
                         // + lifetime-extend it, then deref t1 to take a
                         // reference to foo using the Deref trait.

    // t1 dropped here
    let bar = &mut cell.bar; // doesn't panic since t1 is dropped
}
```

So maybe types that behave like temporary references (Ref / RefMut / MutexGuard / RwLockReadGuard / RwLockWriteGuard... i.e. those that need to be dropped early to avoid holding a borrow) should be marked with a special compiler marker trait to follow non-lexical lifetimes?

Something like that:

```rust
trait NonLexicalDrop {}

impl<'a, T> NonLexicalDrop for Ref<'a, T> {}
```

That way this code can compile and run fine:

```rust
let cell = RefCell::new(MyStruct { foo: 1, bar: 2 });
let foo = &cell.foo;     // temporary reference t1 created here
                         // temporary t1 dropped here
let bar = &mut cell.bar; // `borrow_mut()` works fine since t1 is dropped
```

However, your code will indeed compile successfully, but it will panic at runtime due to improper use of the RefCell's borrowed value (a programmer error in that case I guess?).

```rust
let cell = RefCell::new(MyStruct { foo: 1, bar: 2 });
let foo = &cell.foo;
let bar = &mut cell.bar; // `borrow_mut()` panics since we have a live `Ref` still
dbg!(foo);
```

Anyway, the support for the core library is secondary, I'm just using the RefCell as an example, but it could be any custom user defined struct or enum.

Thanks for your feedback :)

Do you think I should update the RFC with this NonLexicalDrop trait for reference-like types now, or keep discussing it a bit more first?

Member

I think a non-lexical drop is too confusing when it actually needs to run code to drop the types (especially for locks); non-lexical lifetimes don't have this problem because dropping a reference doesn't actually do anything significant.

Author

Personally I find non-lexical drop more intuitive to think about than the lexical one, but ok ^^.

The way I understand non-lexical drop is "drop it as soon as possible" (but maybe I'm wrong).
So it is ideal for releasing resources as soon as possible.

```rust
// /!\ The comments indicate the status/name of the `y` borrow's
// lifetime if it was still active at the commented line.
let mut x = 42;

let y: &mut i32 = &mut x; // x borrow starts here: `Active` (lexical + non-lexical)
println!("foo");          // `Active`

dbg!(y);                  // `Active`
                          // `Extended`, because it is no longer used, but still here (lexical, and an implementation detail for non-lexical)
println!("bar");          // `Extended`

let z: &mut i32 = &mut x; // `Over-Extended`: would cause a compile-time error, even though `y` is no longer used (lexical only)
```

P.S.: my NonLexicalDrop trait suggestion could be replaced by an associated constant in the Drop trait that defines when to drop the value (but that makes it unusable as a generic constraint).

```rust
pub trait Drop
{
    const LIFETIME: DropLifetime = DropLifetime::Lexical;
    fn drop(&mut self);
}

pub enum DropLifetime
{
    Lexical,
    NonLexical,
}
```

@ehuss ehuss added the T-lang Relevant to the language team, which will review and decide on the RFC. label Nov 11, 2025
Comment on lines +199 to +203
Implementing them on `Mutex` or `RwLock` may be undesirable because:

- The `.lock()` / `.read()` / `.write()` calls can fail, so the `DerefInto`/`DerefMutInto` implementation may panic. (The `Index`/`IndexMut` traits work in a similar way, panicking on invalid indices via [`index()`](https://doc.rust-lang.org/std/ops/trait.Index.html#tymethod.index).)

- Most notably, if `DerefInto` and `DerefMutInto` are implemented for `Mutex` or `RwLock`, it's not clear how these traits should behave on a poisoned state.


Another disadvantage of Deref(Mut)Into for Mutex and RwLock might be that it's less clear when one of these is locked or unlocked, possibly leading to race conditions when some code that should unlock and lock only once accidentally does so multiple times.

Author

@Thomas-Mewily Thomas-Mewily Nov 11, 2025

That's a good point. It can make the syntax more ergonomic (especially for experienced programmers), but it also makes the code more complex to reason about and can provoke more accidental race conditions.

I wonder if there is a way to detect such things in "trivial" situations with compile-time warnings.
That sounds much more complex and beyond the scope of this RFC. It would probably require introducing a new rule or syntax to tell the compiler:

"Hey, I take a reference &'a T in parameter, but the output type will safely have borrowed mutable reference &'a mut T."

e.g., the + &'a mut in this hypothetical syntax:

```rust
pub fn borrow_mut<'a>(s: &'a RefCell<T>) -> RefMut<'a, T> + &'a mut
{
   ...
}

pub fn borrow<'a>(s: &'a RefCell<T>) -> Ref<'a, T>
{
   ...
}
```

So, situations like this can trigger a borrow error at compile time (for free, thanks to the existing borrow checker?):

```rust
let cell = RefCell::new(MyStruct { foo: 1, bar: 2 });

let x = cell.borrow_mut(); // takes a &self, but the compiler is aware that
                           // this counts as a &mut self borrow (&self -> &mut Self)

let y = cell.borrow(); // error: cannot borrow `cell` as immutable because it is also borrowed as mutable
dbg!(x.bar);
```

And maybe the same kind of error could be detected for:

```rust
let cell = RefCell::new(MyStruct { foo: 1, bar: 2 });
let y = cell.borrow();     // &self borrow
let x = cell.borrow_mut(); // error: cannot borrow `cell` as mutable because it is also borrowed as immutable
dbg!(y.bar);
```

So maybe these problems could be detected at compile time (which would be even better than now, since they are currently detected only at runtime)?

(I haven't dug deeper; there may be more problems, but the code handling the fn (&self) -> &mut Self conversion would be unsafe.)
Thanks for the answer btw!


However, the standard `Deref`/`DerefMut` traits are limited in that their output types are fixed references (`&Self::Target` / `&mut Self::Target`).

This makes them incompatible with interior mutability or synchronization types such as `RefCell`, `Mutex`, or `RwLock`, which return RAII guards by value (`Ref` / `RefMut` for `RefCell`, `MutexGuard` for `Mutex`, and `RwLockReadGuard` / `RwLockWriteGuard` for `RwLock`).
Member

While I think these "streaming deref" traits are useful, making the locks and cells implement them is a very bad idea.

The lifetimes of the guards are significant; the behavior of the following two pieces of code is different:

```rust
let sum_from_different_guards = { mutex.lock().unwrap().a } + { mutex.lock().unwrap().b };

let sum_from_same_guard = {
    let guard = mutex.lock().unwrap();
    guard.a + guard.b
};
```

When you simplify it to mutex.a + mutex.b, it is ambiguous which locking pattern you expect.

Author

@Thomas-Mewily Thomas-Mewily Nov 11, 2025

Indeed, it is very different.

I expect the sum_from_different_guards locking pattern to be used for mutex.a + mutex.b, but that also means it will be locked twice for the same operation (which is undesirable). Perhaps the sum_from_same_guard pattern can be used if the DerefInto trait is pure/idempotent.


> When you simplify it to mutex.a + mutex.b, it is ambiguous which locking pattern do you expect

"Ambiguous" is too kind. It's clear mutex.a + mutex.b always means 'sum_from_different_guards', so that's almost always the wong choic, since it cases a race condition. It's unworkable for locking and guards.

Now, there are advanced algebra packages (for zero-knowledge proofs) that essentially do impl Add<Thunk> for Thunk { type Output = Thunk; .. }, and the Thunk type (aka constraint system) contains some Arc<Mutex<..>> which this Add::add transforms. It's possible these traits help there somehow, but having them for Mutex itself can only cause harm.

Member

I think this is desirable, and in fact we need to go further and combine this with the ability to project to subvalues. I was just working on a proposal like this in the context of the Field Projections initiative: https://hackmd.io/N0sjLdl1S6C58UddR7Po5g .

Author

I agree, this is helpful, and it can help solve some problems with Deref and DerefMut.

Personally I'm more interested in partial borrows or view types, as described in the blog post by Niko Matsakis you cited, because the syntax is closer to destructuring.

I hope that partial borrows/moves and view types will eventually follow a similar syntax to destructuring, and that it will be compositional:

Especially if:

  • &self borrow the whole struct,
  • &mut self borrow the whole struct as mutable,
  • and self moves the struct,

It would be great if we could be more explicit about how each field is referenced or moved.

For example, given struct Foo { bar: i32, buz: i32 }:

(<=> means equivalent)

  • &self <=> self { &bar, &buz } (distribute the reference)
  • &mut self <=> self { &mut bar, &mut buz } <=> &self { mut bar, mut buz }
  • self <=> self { bar, buz }

Combinations like self { &mut bar, &buz } <=> &self { mut bar, buz } should be possible
(borrowing bar mutably, and buz immutably).

Unlisted fields would not be used (neither referenced nor moved):
self { &bar, buz: _ } <=> self { &bar }

And a way to qualify all the unused fields could be to use the struct update syntax:
&self <=> self { &bar, &buz } <=> self { &bar, &.. } <=> self { &.. } <=> &self { .. }

That way, implementations of Deref can be more specific and borrow only a subset of the fields.

```rust
struct Foo
{
    pub bar: Bar,
    pub value_independant_to_bar: i32,
}
struct Bar;

impl Deref for Foo
{
    type Target = Bar;
    // here:
    fn deref(&self { bar }) -> &Self::Target {
        &self.bar
    }
}
```

Even though this design is not generic-compatible, abstract fields could solve that.

Member

Partial borrows/view types are compatible and orthogonal to place-based field projections I believe. Place-based field projections just extend all the borrowck things from references to custom smart pointers.

Comment on lines +23 to +35
```rust
pub struct Foo
{
    pub bar: i32,
}

let cell: RefCell<Foo> = RefCell::new(Foo { bar: 42 });

let _bar = cell.borrow().bar; // Current way to write it

let _bar = &cell.bar; // Error: Impossible to express right now,
                      // but more ergonomic to write
```
Member

I don't find this code more clear or ergonomic. Sure, okay, "ergonomic to write" maybe? But while it may be easier to write, it is much harder to read. We should not aspire to be a write-once language.

@sarah-quinones

adding a use case for this

being able to deref to a non-reference type would allow linalg libraries as an example to have Matrix deref to MatrixRef/MatrixMut, which means i don't have to duplicate my api three times

```rust
pub struct Matrix<T> { ... }
pub struct MatrixRef<'a, T> { ... }
pub struct MatrixMut<'a, T> { ... }

impl<T> Matrix<T> {
    pub fn nrows(&self) -> usize { ... }
    // ...
}

impl<T> MatrixRef<'_, T> {
    pub fn nrows(&self) -> usize { ... }
    // ...
}

impl<T> MatrixMut<'_, T> {
    pub fn nrows(&self) -> usize { ... }
    // ...
}
```

becomes

```rust
impl<T> MatrixRef<'_, T> {
    pub fn nrows(&self) -> usize { ... }
    // ...
}

impl<T> DerefInto for Matrix<T> {
    type Target<'out> = MatrixRef<'out, T>;
    fn deref_into(&self) -> MatrixRef<'_, T> { ... }
}

impl<T> DerefInto for MatrixMut<'_, T> {
    type Target<'out> = MatrixRef<'out, T>;
    fn deref_into(&self) -> MatrixRef<'_, T> { ... }
}
```

the more general example would be non-lang reference types, but i mostly care about matrices for my code (faer).

@blueglyph

My 2 cents:

  • The first link in the "Motivation" section is a reddit post. Moreover, it's been downvoted, so its conclusion is uncertain. I don't think that link should be used to illustrate, let alone prove, anything. Also, what is an "ol' reference"?
  • The code in the "Motivation" and later sections is based on "foo/bar" examples, which are meaningless and sound more theoretical than motivated by a real example. A concrete example would be clearer and easier to discuss (and indeed appear more motivated); it would also be more educational should this RFC pass eventually.
  • You're suggesting to add a supertrait to Deref (and DerefMut) for another behaviour that would be specific to the type. This trait must be implemented where the type is defined, so in the example of RefCell, there should be an implementation of both Deref and DerefInto for RefCell. But in "Backward compatibility", you show what seems to be a blanket implementation for all T: Deref that calls the base implementation. Wouldn't you get conflicts when defining DerefInto and DerefMutInto? You also mention that Deref would delegate to DerefInto, but didn't you mean the opposite? I might have misunderstood the code (by the way, nitpicking: there's a notation for supertraits, and I believe the code could use some reformatting).
  • Deref hides an implicit behaviour, but it preserves the intent of referencing an item contained in a smart pointer. In the examples you've shown for DerefInto (and mut), there is another implicit behaviour for varying actions, like borrowing a reference at run-time, locking a mutex, and so on. It's not the same at all, and makes me frankly uncomfortable, as it goes against Rust's philosophy of making any safety risk very visible in the code.
  • Some of the example types you give, like RefCell, etc., may panic. One of the big conditions to implement Deref is that it mustn't fail unexpectedly, and that's only with the standard Deref trait, so I feel even more uncomfortable with some hidden, possibly fail-prone code.

@kennytm
Member

kennytm commented Dec 3, 2025

For the Matrix case IMO the more proper solution is to support "custom dynamic-sized types" like type Mat<T> = [[T]];[1] so that instead of Matrix<T>, MatrixRef<'a, T>, MatrixMut<'a, T> you use Box<Mat<T>>, &'a Mat<T>, &'a mut Mat<T>.

Footnotes

  1. [[T]] is a rectangular slice of slices of T. Consider it as unsizing of [[T; m]; n], moving m, n into the pointer metadata.

@Thomas-Mewily
Author

Thomas-Mewily commented Dec 3, 2025

Hello @sarah-quinones, thanks for the example! I encountered a similar issue to your Matrix in the past, with a Grid type (which supports subviews and mutable subviews).

My way to solve it was to move the MatrixRef / MatrixMut logic into some traits (MatrixView and MatrixViewMut ).

I still need to impl the traits for Matrix / MatrixRef / MatrixMut when necessary,
but I'm sure the API will stay consistent, and I can easily add new methods with a default implementation that I don't need to impl again (except when an impl can provide better performance).

```rust
pub struct Matrix<T> { ... }
pub struct MatrixRef<'a, T> { ... }
pub struct MatrixMut<'a, T> { ... }

trait MatrixView<T>
{
    fn nrows(&self) -> usize { ... }
    // other methods useful for views
}
trait MatrixViewMut<T> : MatrixView<T> { /* methods for mutable views */ }

impl<T> MatrixView<T> for Matrix<T> { ... }
impl<T> MatrixViewMut<T> for Matrix<T> { ... }

impl<'a, T> MatrixView<T> for MatrixRef<'a, T> { ... }

impl<'a, T> MatrixView<T> for MatrixMut<'a, T> { ... }
impl<'a, T> MatrixViewMut<T> for MatrixMut<'a, T> { ... }
```

It is also possible to introduce another trait for owning matrix types that exposes constructors, if needed.

(For my use case, I have a large number of subtraits, and my trait IGridViewMut looks like this:

```rust
pub trait IGridViewMut<G, T, Idx, const N: usize> :
      IGridView<G, T, Idx, N>
    + GetMut<Vector<Idx, N>, Output = T> + GetManyMut<Vector<Idx, N>, Output = T>
    + IndexMut<Vector<Idx, N>, Output = T>
    where
    G : IGrid<T, Idx, N>,
    Idx : Integer
```

Where GetMut is the same as IndexMut but fallible, and GetManyMut implements logic similar to get_disjoint_mut, enabling operations such as swapping two indices. Note: it needs some renaming; it is still work in progress. (Anyway, doing it for high-performance matrices requires a different API than a grid.)
)

Maybe making some traits more generic can also help in your case:

```rust
pub trait ShapeCore {
    fn nrows(&self) -> usize;
    fn ncols(&self) -> usize;
}
```

=>

```rust
pub trait ShapeCore<Rows = usize, Col = usize> {
    fn nrows(&self) -> Rows;
    fn ncols(&self) -> Col;
}
```

You will still need to duplicate the API three times, but at least it is centralized in one trait, so renaming/updating the documentation is a little bit easier.

If the view is tightly coupled with the shape, the trait can also depend on it:

```rust
trait MatrixView<T, Rows = usize, Col = usize> : ShapeCore<Rows, Col> { ... }
```

Edit: implementing MatrixView (which exposes nrows) for Matrix, MatrixRef and MatrixMut also makes them easier to use in generic contexts. For example, a uniform function such as
fn nrows<T>(value: &T) -> usize where T: ShapeCore { value.nrows() }
is straightforward. However, supporting both T: ShapeCore and T: DerefInto<MatrixRef<'a, T>> requires at least two separate functions to avoid conflicting implementations.

@Thomas-Mewily
Author

Thomas-Mewily commented Dec 4, 2025

Thank you, @blueglyph, for your remarks.
I agree with most of your points, and I will try to respond to each one (sorry for the long post).

1) The first link in the "Motivation" section is a reddit post.

Yeah, sorry, this was my first RFC.

Indeed in the motivation there is a citation to a Reddit post about ergonomics, because it was relevant to the topic.
My goal was to create an ergonomic interface for non-locking and guarded access.

I tried to keep my post short and focused on the core idea: Simple RAII access, ideal for singletons and user-made guarded resources.

2) The code in the "Motivation" and later sections is based on "foo/bar" examples

The main point of it was RAII and singleton access. What the RAII guard or singleton needs to access (a database, a render context) doesn't really matter.
If you want more context, here it is:

I was previously creating a single-threaded singleton for a rendering API, similar to Macroquad. In Macroquad, the singleton is hidden in the following way (simplified example, but the idea is similar):

```rust
struct Context
{
    pub drawer: Drawer,
}

static mut CONTEXT: Option<Context> = None;

fn context_mut() -> &'static mut Context {
    thread_assert::same_thread();
    unsafe { CONTEXT.as_mut().unwrap_or_else(|| panic!()) }
}
```

The context is hidden, and only methods inside the lib that need it can access it.

I will omit the deeper discussion of the underlying unsoundness and the reasons this pattern should be avoided; for simplicity, in this case it made Macroquad way more pleasant to use than passing around a reference to the context everywhere.

Some methods, such as drawing, are exposed as free functions:

```rust
pub mod shapes
{
    fn draw_rectangle(r: Rectangle)
    {
        context_mut().drawer.draw_rectangle(r)
    }

    fn draw_line(l: Line)
    {
        context_mut().drawer.draw_line(l)
    }
}

impl Drawer
{
    fn draw_rectangle(&mut self, r: Rectangle) { ... }
    fn draw_line(&mut self, l: Line) { ... }
}
```

However, I'm conflicted about this approach. It's clean because it doesn't expose the singleton, but:

  • Each function needs to be written twice:

    • One for the singleton access: fn draw_rectangle(r: Rectangle) { ... }
    • Another one inside the drawer: impl Drawer { fn draw_rectangle(&mut self, r: Rectangle) { ... } }
  • From a user's perspective, it's impossible to add new draw functions inside the same shapes module. So drawing methods from the engine and custom user-made methods will be split into different modules (or the user can create a shapes module in their game and re-export the content of macroquad's shapes and add custom drawing logic, but this feels more like wrapping the game framework crate rather than using it).

  • Extending the Drawer functionality in the user's crate correctly requires twice as much work:

```rust
pub trait DrawerExtension
{
    fn draw_rectangle_outline(&mut self, r: Rectangle);
}

impl DrawerExtension for Drawer { ... }

fn draw_rectangle_outline(r: Rectangle)
{
    // Either duplicate the code logic here, or expose the context_mut().drawer publicly
    // to allow the user to externally extend the main singleton drawing mechanism, even
    // if context_mut().drawer should not be used directly for drawing
    context_mut().drawer.draw_rectangle_outline(r);
}
```

My first iteration over that was to remove the global draw functions and call the methods on the singleton directly:

```rust
fn drawer_mut() -> &'static mut Drawer { ... }
fn drawer_ref() -> &'static     Drawer { ... }
```

(Technically maybe I don't need drawer_ref() if drawer_mut() returns a mutable reference, but a part of me still wants it.)
But calling different methods depending on mutability, like drawer_mut().draw_rectangle(...) or drawer_ref().current_color(), feels awkward.

So I experimented with empty structs, abusing deref/deref_mut:

```rust
struct Draw;

impl Deref for Draw
{
    type Target = Drawer;
    fn deref(&self) -> &Self::Target { ... }
}
impl DerefMut for Draw
{
    fn deref_mut(&mut self) -> &mut Self::Target { ... }
}
```

That way I can write Draw.draw_rectangle() or Draw.current_color() without needing to call different methods depending on whether I need mutable or immutable access.

Users can extend the Drawer struct and directly access the Drawer methods from the singleton using the empty struct (Draw.draw_foo()) without writing another function.

Returning a reference like that with deref() and deref_mut() for the singleton is really convenient, but it's also easy to mess up:

```rust
let mut d1 = Draw;
let drawer_mut1: &'static mut Drawer = d1.deref_mut();

let mut d2 = Draw;
let drawer_mut2: &'static mut Drawer = d2.deref_mut(); // Boom, 2 mutable references to the same resource
```

I can use some kind of guard mechanism, like RefCell for single-threaded or RwLock for multithreaded code, to avoid these cases; but either way, those types return some kind of guard that impls deref/deref_mut to the target (e.g. cell::Ref<'a,T> / cell::RefMut<'a,T> for RefCell). Even if the guards generally act like a reference to the target with some custom RAII/drop behavior (and some extra data about how to drop it correctly), they are not references.

```rust
fn drawer_mut() -> RefMut<'static, Drawer> { ... }
```

By making it safer, this removes the convenient deref()/deref_mut() hack with empty structs (maybe it was a bad idea).
So for better safety, I need to sacrifice some convenience when calling the singleton, and the code Draw.draw_rectangle(...) becomes drawer_mut().draw_rectangle(...) (maybe it's better and I'm probably overthinking it).

Now I don't know if I want my singleton to be multithreaded or single-threaded, so I was thinking about how to wrap it in an API that supports both access patterns.

Something like:

```rust
pub trait ReadGuard : Sized
{
    type Target;
    type ReadGuard<'a> : Deref<Target = Self::Target> where Self: 'a;
    /// Can panic on exceptional behavior (e.g. poisoned mutex)
    fn read<'a>(&'a self) -> Self::ReadGuard<'a>;
}
impl<T> ReadGuard for std::sync::RwLock<T>
{
    type Target = T;
    type ReadGuard<'a> = std::sync::RwLockReadGuard<'a, T> where Self: 'a;
    // `self.read()` resolves to the inherent `RwLock::read`, not this trait method
    fn read<'a>(&'a self) -> Self::ReadGuard<'a> { self.read().expect("poisoned") }
}
pub struct ReferenceReadGuard<'a, T> where T: ?Sized
{
    inner: &'a T,
}
impl<'a, T> Deref for ReferenceReadGuard<'a, T> where T: ?Sized { ... }
```

(In case you are wondering, I have another trait for fallible read-guard access:

```rust
pub trait TryReadGuard : ReadGuard
{
    type Error<'a>: Debug where Self: 'a;
    fn try_read<'a>(&'a self) -> Result<Self::ReadGuard<'a>, Self::Error<'a>>;
}
```

)

But I still miss the convenience of the deref()/deref_mut() traits that was possible with the less safe code.
I'm also worried about the distinction between reference and non-reference types in the language, since some types act like references (cell::Ref<'a,T>) but are not actual references, making some traits impossible to use.

(If someone has a different approach than drawer_mut().draw_stuff() for a safe singleton with a guard, I'm interested.)

3) RefCell example

The Deref trait always returns a reference, &Self::Target,
but DerefInto returns a Self::Target<'out> (which can be a reference or a non-reference type).

So Deref implies DerefInto, but DerefInto doesn't necessarily imply Deref, because it doesn't necessarily return a reference (it can return a non-reference type like a guard, e.g., RwLockReadGuard<'a,T>).

So all existing types that impl Deref can automatically impl DerefInto:

```rust
impl<T> DerefInto for T where T: Deref
{
    type Target<'out> = &'out T::Target where Self: 'out;
    fn deref_into(&self) -> Self::Target<'_> {
        self.deref()
    }
}
```

If you have any link to the notation for supertraits, I'm interested :)

4) Deref hides an implicit behaviour.

Yes, you are right. While I'm not sure myself that I want it on RefCell, Mutex, and RwLock, these traits are useful for user-defined types and library authors.

(line 218)

The implementation of these traits for existing types such as RefCell, Mutex, and RwLock could be addressed later, rejected or not, or considered out of scope for this initial RFC.

(line 220)

Since this feature primarily benefits user-defined types and library authors seeking more ergonomic access patterns, native support in the core library can reasonably come at a later stage once the mechanism is stable.

The Index and IndexMut traits with array[1] can panic if the index is invalid, but people use it instead of array.get(1).unwrap() because they expect the index to be valid in most cases (and if not, it safely panics), so it's convenient.

It's the same for DerefInto / DerefMutInto: if I'm using it to access a singleton, I would expect the access to succeed (or panic safely if it fails for my API).

The original Deref and DerefMut traits don't deref to a field; they contain a method to access the target reference, which can contain some logic that I'm trying to abuse for ergonomics. The logic can still contain implicit behavior, but it is very convenient:

```rust
struct DirtyFlag<T>
{
    value: T,
    pub is_dirty: bool,
}

impl<T> Deref for DirtyFlag<T>
{
    type Target = T;
    fn deref(&self) -> &Self::Target { &self.value }
}
impl<T> DerefMut for DirtyFlag<T>
{
    fn deref_mut(&mut self) -> &mut Self::Target { self.is_dirty = true; &mut self.value }
}
```

5) Deref should not fail

Okay, you got me with that one. In my usage, deref can fail, even though it should not fail most of the time.
It's shifting from "deref can never fail" to "deref should not be expected to fail in most cases", which is different.

Other stuff

I'd like to thank everyone for the feedback.

I actually agree with the criticisms because of all the valid points:

  • Value and reference destructor order differences (lexical vs. non-lexical),
  • Hidden behavior (though I'm more hesitant about this one),
  • Potential confusion (don't impl it for std types?),
  • Many traits return references, not just Deref/DerefMut (e.g., AsRef<T>::as_ref(&self) -> &T). I'm not sure if solving this for other traits could be beneficial, or if there a way to make reference and guard type in some way compatible (and I'm not sure if this is desirable).

But sacrificing Draw.draw_rectangle(...) for drawer_mut().draw_rectangle(...) is probably the biggest point about ergonomics for me; still, maybe that's the way to go.

@blueglyph

@Thomas-Mewily

For 1) and 2), I was mostly talking about your motivation intro.

I also have the impression that the feature you're proposing is not so much related to RAII or "singletons" as to the general desire to access some guarded content without writing the access method explicitly. I'd avoid relying so much on OOP terms for a language that isn't really OOP, at least not in the traditional sense.

For 3), what I meant is that your blanket implementation will conflict with any implementation of DerefInto a user will write. Besides, it seems unnecessary, if not even undesirable.

Try to compile this example to see what I mean.

For the supertrait notation, check the reference. However, it seems the compiler doesn't prioritize which Target is meant in the implementation of the subtrait, so it finds it ambiguous and requires <Self as DerefInto>::Target, or another name... Maybe it's not worth it, after all.

Note also the Rust style, which prefers to consistently place the open brackets on the same line (there's a Rustfmt in the playground, if you want to reformat your code and if your editor / IDE doesn't do it). That's just nitpicking.

The other part of the remark was more about the vocabulary:

> The Deref and DerefMut trait implementation of the core library will just delegate to DerefInto and DerefMutInto

A base class/trait can't delegate to a subclass/subtrait, obviously.

It doesn't look like you wrote that text, though; someone else did, right?

For 4), if they're not used for the very traits you're showing in your motivation section, well, not only does it make the RFC confusing and misleading, but it would also be confusing for the users of that feature: why use it in their code if the standard library doesn't? What sort of mixed code would that produce, if a .borrow or similar was sometimes explicit, sometimes not?

And the core problem remains: your intent is to hide some mechanism that is very likely to have secondary effects and that Rust generally prefers to expose.
