async fundamentals initiative

initiative status: active

What is this?

This page tracks the work of the async fundamentals initiative, part of the wg-async-foundations vision process! To learn more about what we are trying to do, and to find out who is doing it, take a look at the charter.

Current status

This is an umbrella initiative and, as such, it covers a number of subprojects.

See the roadmap for a list of individual milestones and their status.

Subproject | Issue | Progress | State | Stage
async fn | #50547 | ▰▰▰▰▰ | | Stabilized
static async fn in trait | RFC#3185 | ▰▰▱▱▱ | 🦀 | Experimental
dyn async fn in trait | | ▰▱▱▱▱ | 🦀 | Proposal
async drop | | ▰▱▱▱▱ | 🦀 | Proposal
async closures | | ▰▱▱▱▱ | 💤 | Proposal

Key:

  • ✅ – phase complete
  • 🦀 – phase in progress
  • 💤 – phase not started yet

How Can I Get Involved?

  • Check for 'help wanted' issues on this repository!
  • If you would like to help with development, please contact the owner to find out if there are things that need doing.
  • If you would like to help with the design, check the list of active design discussions first.
  • If you have questions about the design, you can file an issue, but be sure to check the FAQ or the design discussions first to see if there is already something that covers your topic.
  • If you are using the feature and would like to provide feedback about your experiences, please open an "experience report" issue.
  • If you are using the feature and would like to report a bug, please open a regular issue.

We also participate on Zulip, feel free to introduce yourself over there and ask us any questions you have.

Building Documentation

This repository is also an mdbook project. You can build and view it locally using the following command:

mdbook serve

✏️ Updates

Lang-team initiatives give monthly updates. This section collects the updates from this initiative for posterity.

2021-Oct: Lang team update

  • Owner: tmandry
  • Liaison/author: nikomatsakis, currently (looking for a replacement)

Although the async fundamentals initiative hasn't technically formed yet, I'm going to write an update anyhow as "acting liaison". To start, I would like to find another liaison! I think that I am a bit close to the work here and the group would benefit from a liaison who is a bit more distant.

Our overall charter: Make it possible to write async fn in traits, as well as enabling key language features that bring async more into parity with sync:

  • Async functions in traits
    • in both static and dyn contexts
  • Async drop
  • Async closures

This is a key enabler for most of the async vision doc. For example, the various interop traits (e.g., async iteration, async read, async write, etc) all build on async functions in traits.

We have identified an MVP, which aims to support async fn in traits in static contexts by desugaring to an (anonymous) associated GAT plus (on the impl side) a TAIT. We are preparing an RFC describing this MVP and talking to various folks about doing the implementation work.

We are assembling a group of stakeholders that we will talk to in order to get feedback on the MVP and on future design decisions (in addition to the lang team and so forth).

In addition to the MVP, we are drafting an evaluation doc that identifies further challenges along with possible solutions. Once we feel good about the coverage for a particular challenge, we will create targeted RFCs for that specific item.

One specific direction of interest is creating core enablers that can be used to experiment with the most ergonomic syntax or capabilities. As an example, for dyn async traits, there is a need to return some form of "boxed dyn future", but there are many runtime techniques one might use for this (the most obvious being to return a Box, of course). Supporting those options requires being able to manipulate vtables and the like. It may be an option to make those kinds of "core capabilities" available as simple primitives. This would allow us to experiment with a procedural macro that generates an easy-to-use wrapper built on these primitives; once we have a clear idea what exactly that wrapper should be, we can bring it into the language. (It remains to be seen if this is a better path than trying to build the same thing first and work out the primitives later.)

Niko has also been writing blog posts to walk through the dyn logic in more detail (starting at part 1 and continuing in part 2).

📜 async fn fundamentals Charter

This initiative is part of the wg-async-foundations vision process.

Proposal

async fn exists today, but does not integrate well with many core language features like traits, closures, and destructors. We would like to make it so that you can write async code just like any other Rust code.

Goals

Able to write async fn in traits and trait impls

The goal in general is that async fn can be used in traits as widely as possible:

  • for foundational traits, like reading, writing, and iteration;
  • for async closures;
  • for async drop, which is built in to the language;
  • in dyn values, which introduce some particular complications;
  • in libraries, for all the usual reasons one uses traits;
  • in ordinary programs, using all manner of executors.

Key outcomes

Support async drop

Users should be able to write "async fn drop" to declare that the destructor may await.

Key outcomes

  • Types can perform async operations on cleanup, like closing database connections
  • There's a way to detect and handle async drop types that are dropped synchronously
  • Await points that result from async cleanup can be identified, if needed
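
As a rough sketch of what this could look like in practice (the AsyncDrop trait and the async fn drop syntax shown here are hypothetical and still under design, and DbConnection is an invented example type):


trait AsyncDrop {
    async fn drop(&mut self);
}

struct DbConnection { /* handle to a remote database */ }

impl AsyncDrop for DbConnection {
    async fn drop(&mut self) {
        // Gracefully close the connection, awaiting the server's acknowledgement.
        // (Hypothetical helper, shown only to illustrate awaiting during cleanup.)
        // self.send_close_message().await;
    }
}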

Support async closures

Support async closures and AsyncFn, AsyncFnMut, AsyncFnOnce traits.

Key outcomes

  • Async closures work like ordinary closures but can await values
  • Traits analogous to Fn, FnMut, FnOnce exist for async
  • Reconcile async blocks and async closures
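
A rough sketch of the intended experience, assuming a hypothetical AsyncFnMut trait and an async closure syntax (none of this is settled; all names here are illustrative):


// Hypothetical: `AsyncFnMut` is an async analogue of `FnMut`.
async fn for_each_line<F>(lines: Vec<String>, mut f: F)
where
    F: AsyncFnMut(String),
{
    for line in lines {
        // Calling the async closure yields a future that we can await.
        f(line).await;
    }
}

// Usage might look like:
// for_each_line(lines, async |line| { send_to_server(line).await }).await;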

Membership

Role | Github
Owner | tmandry
Liaison | nikomatsakis

Stakeholders

Stakeholder representatives

The pitch

The async fundamentals initiative is developing designs to bring async Rust "on par" with synchronous Rust in terms of its core capabilities:

  • async functions in traits (our initial focus)
  • async drop (coming later)
  • async closures (coming later)

We need feedback from people using Rust in production to ensure that our designs will meet their needs, but also to help us get some sense of how easy they will be to understand. One of the challenges with async Rust is that it has a lot of possible variations, and getting a sense for what kinds of capabilities are most important will help us to bias the designs.

We also want people to commit to experimenting with these designs while they are on nightly! This doesn't mean that you have to ship production software based on the nightly compiler. But it does mean that you agree to, perhaps, port your code over to use the nightly compiler on a branch and tell us how it goes. Or experiment with the nightly compiler on other codebases you are working on.

Expected time commitment

  • One 90 minute meeting per month + written feedback
    • Structure:
      • ~30 minute presentation covering the latest thoughts
      • ~60 minutes open discussion with Tyler, Niko, other stakeholders
    • Written feedback:
      • answer some simple questions, provide overall perspective
      • expected time: ~30 minutes or less
  • Once features become available (likely early next year), creating branch that uses them
    • We will do our best to make this easy
    • For example, I expect us to offer an alternative to the async-trait procedural macro that generates code requiring the nightly compiler
    • But this will still take some time! How much depends a bit on you.
    • Let's guess-timate 2-3 hours per month

Benefits

  • Shape the design of async fn in traits
  • Help ensure that it works for you
  • A t-shirt!

Goals of the stakeholder program

The goal of the stakeholder program is to make Rust's design process even more inclusive. We have observed that existing mechanisms like the RFC process or issue threads are often not a very good fit for certain categories of users, such as production users or the maintainers of large libraries, as they are not able to keep up with the discussion. As a result, they don't participate, and we wind up depriving ourselves of valuable feedback. The stakeholder program looks to supplement those mechanisms with direct contact.

Another goal is to get more testing: one problem we have observed is that features are often developed and deployed on nightly, but production users don't really want to try them out until they hit stable! We would like to get some commitment from people to give things a try so that we have a better chance of finding problems before stabilization.

We want to emphasize that we welcome design feedback from all Rust users, regardless of whether you are a named stakeholder or not. If you're using async Rust, or have read through the designs and have a question or idea for improvement, please feel free to open an issue and tell us about it!

Number of stakeholder representatives

We are selecting a small number of stakeholders covering various points in the design space, e.g.

  • Web services author
  • Embedded Rust
  • Web framework author
  • Web framework consumer
  • High-performance computing
  • Operating systems

If you have thoughts or suggestions for good stakeholders, or you think that you yourself might be a good fit, please reach out to tmandry or nikomatsakis!

Async fn fundamentals

This initiative is part of the overall async vision roadmap.

Impact

  • Able to write async fn in traits and trait impls
    • Able to easily declare that T: Trait + Send where "every async fn in Trait returns a Send future"
    • Traits that use async fn can still be dyn safe, though some tuning may be required
    • Async functions in traits desugar to impl Trait in traits
  • Able to write "async fn drop" to declare that the destructor may await
  • Support for async closures

Milestones

Milestone | State | Key participants
Author evaluation doc for static async trait | 🦀 | tmandry
Author evaluation doc for dyn async trait | 🦀 | tmandry
Author evaluation doc for async drop | 🦀 | tmandry
Author evaluation doc for impl Trait in traits | 💤 |
Stabilize type alias impl trait | 💤 |
Stabilize generic associated types | 💤 |
Author RFC for async fn in traits | 💤 |
Author evaluation doc for async closures | 💤 |
Feature complete for async fn in traits | 💤 |
Feature complete for impl Trait in traits | 💤 |
Feature complete for async drop | 💤 |
Feature complete for async closures | 💤 |

MVP: Static async fn in traits

This section defines an initial minimum viable product (MVP). This MVP is meant to be a subset of async fns in traits that can be implemented and stabilized quickly.

In a nutshell

  • In traits, async fn foo(&self) desugars to
    • an anonymous associated type type Foo<'me>: Future<Output = ()> (as this type is anonymous, users cannot actually name it; the name Foo here is for demonstrative purposes only)
    • a function fn foo(&self) -> Self::Foo<'_> that returns this future
  • In impls, async fn foo(&self) desugars to
    • a value for the anonymous associated type type Foo<'me> = impl Future<Output = ()>
    • a function fn foo(&self) -> Self::Foo<'_> { async move { ... } }
  • If the trait used async fn, then the impl must use async fn (and vice versa)
  • Traits that use async fn are not dyn safe
    • In the MVP, traits using async fn can only be used with impl Trait or generics
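
Spelled out for a concrete (invented) trait, the desugaring sketched above would look roughly like this; note that in the real MVP the associated type is anonymous, and the name Greet appears here only to make the shape visible:


use std::future::Future;

// What the programmer writes:
//
//     trait Greeter {
//         async fn greet(&self);
//     }
//
// Roughly what the compiler generates for the trait:
trait Greeter {
    type Greet<'me>: Future<Output = ()> + 'me
    where
        Self: 'me;

    fn greet(&self) -> Self::Greet<'_>;
}

struct EnglishGreeter;

// Roughly what the compiler generates for an impl that wrote `async fn greet`:
impl Greeter for EnglishGreeter {
    type Greet<'me> = impl Future<Output = ()> + 'me;

    fn greet(&self) -> Self::Greet<'_> {
        async move { /* ... */ }
    }
}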

What this enables

  • The MVP is sufficient for projects like embassy, which already model async fns in traits in this way.
  • TODO: Once we have a list of stakeholders, try to get a sense for how many uses of async-trait could be replaced

Notable limitations and workarounds

  • No support for dyn
    • This is a fundamental limitation; the only workaround is to use async-trait
  • No ability to name the resulting futures:
    • This means that one cannot build non-generic adapters that reference those futures.
    • Workaround: define a function alongside the impl and use a TAIT for its return type (see the sketch after this list)
  • No ability to bound the resulting futures (e.g., to require that they are Send)
    • This rules out certain use cases when using work-stealing executor styles, such as the background logging scenario. Note that many other uses of async fn in traits will likely work fine even with a work-stealing executor: the only limitation is that one cannot write generic code that invokes spawn.
    • Workaround: do the desugaring manually when required, which would give a name for the relevant future.
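
A rough sketch of the TAIT workaround referenced above, assuming the MVP's async fn in traits plus the nightly type_alias_impl_trait feature; the trait and all names here are invented for illustration:


#![feature(type_alias_impl_trait)]

use std::future::Future;

trait Connection {
    async fn open(&mut self); // MVP-style async fn in trait
}

struct MyConnection;

// A nameable alias for the otherwise-anonymous future, defined by `open_impl`.
type OpenFuture<'a> = impl Future<Output = ()> + 'a;

fn open_impl(conn: &mut MyConnection) -> OpenFuture<'_> {
    async move {
        // ... actually open the connection ...
        let _ = conn;
    }
}

impl Connection for MyConnection {
    async fn open(&mut self) {
        open_impl(self).await
    }
}

Non-generic adapters can now refer to OpenFuture<'_> directly, and callers can require bounds such as "OpenFuture<'_>: Send".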

Implementation plan

  • This MVP relies on having generic associated types and type alias impl trait, but they are making good progress.
  • Otherwise, the implementation is a straightforward desugaring, similar to how inherent async fns are implemented
  • We may wish to also ship a variant of the async-trait macro that lets people easily experiment with this feature

Forward compatibility

The MVP sidesteps a number of the more challenging design problems. It should be forwards compatible with:

  • Adding support for dyn traits later
  • Adding a mechanism to bound the resulting futures
  • Adding a mechanism to name the resulting futures
    • The futures added here are anonymous, and we can always add explicit names later.
    • If we were to name the resulting futures after the methods, and users had existing traits that used those same names already, this could present a conflict, but one that could be resolved.
  • Supporting and bounding async drop
    • This trait will not exist yet with the MVP, and supporting async fn doesn't enable anything fundamental that we don't have to solve anyway.

🔬 Evaluation

The evaluation surveys the various design approaches that are under consideration. It is not required for all initiatives, only those that begin with a problem statement but without a clear picture of the best solution. Often the evaluation will refer to topics in the design-discussions for more detailed consideration.

Goals

Write async fn in traits, impls

The overall goal is to enable users to write async fn in traits and impls in a natural way. As a simple example, we would like to support the ability to write an async fn in any trait:


#![allow(unused)]
fn main() {
trait Connection {
    async fn open(&mut self);
    async fn send(&mut self);
    async fn close(&mut self);
}
}

Along with the corresponding impl:


#![allow(unused)]
fn main() {
impl Connection for MyConnection {
    async fn open(&mut self) {
        ...
    }

    async fn send(&mut self) {
        ...
    }

    async fn close(&mut self) {
        ...
    }
}
}

The goal in general is that async fn can be used in traits as widely as possible:

  • for foundational traits, like reading, writing, and iteration;
  • for async closures;
  • for async drop, which is built in to the language;
  • in dyn values, which introduce some particular complications;
  • in libraries, for all the usual reasons one uses traits;
  • in ordinary programs, using all manner of executors.

Support async drop

One particular trait worth discussing is the Drop trait. We would like to support "async drop", which means the ability to await things during drop:


#![allow(unused)]
fn main() {
trait AsyncDrop {
    async fn drop(&mut self);
}
}

Like Drop, the AsyncDrop trait would be built into the language and invoked automatically when a value goes out of scope.

Executor styles and Send bounds

One key aspect of async fn in traits is how to communicate the Send bounds needed to spawn tasks. The answer to the question "what Send bounds are required to safely spawn a task?" depends on the executor style (a sketch of the corresponding spawn signatures follows the list below):

  • A single threaded executor runs all tasks on a single thread.
  • A thread per core executor selects a thread to run a task when the task is spawned, but never migrates tasks between threads. This can be very efficient because the runtimes never need to communicate across threads except to spawn new tasks.
    • In this scenario, the "initial state" must be Send but not the future once it begins executing.
    • Example: glommio::spawn
  • A work-stealing executor can move tasks between threads even mid-execution.
    • In this scenario, the future must be Send at all times (or we have to rule out the ability to have leaks of data out from the future, which we don't have yet).
    • Example: tokio::spawn
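
Here is the sketch mentioned above: hypothetical spawn signatures (not the actual glommio or tokio APIs) that capture the differing Send requirements of each executor style:


use std::future::Future;

// Single threaded: nothing needs to be Send.
fn spawn_local<F: Future + 'static>(fut: F) { /* ... */ }

// Thread per core: the closure that *creates* the future may run on another
// thread, so it must be Send, but the future it returns need not be.
fn spawn_pinned<C, F>(make_fut: C)
where
    C: FnOnce() -> F + Send + 'static,
    F: Future + 'static,
{ /* ... */ }

// Work stealing: the future itself must be Send, since it can migrate between
// threads at any await point.
fn spawn<F>(fut: F)
where
    F: Future + Send + 'static,
{ /* ... */ }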

Reference scenarios

Background logging

In this scenario, the start_writing_logs function takes an async iterable and spawns out a new task. This task will pull items from the iterator and send them to some server:


#![allow(unused)]
fn main() {
trait AsyncIterator {
    type Item;
    async fn next(&mut self) -> Option<Self::Item>;
}

// Starts a task that will write out the logs in the background
async fn start_writing_logs(
    logs: impl AsyncIterator<Item = String> + 'static
) {
    spawn(async move || {
        while let Some(log) = logs.next().await {
            send_to_server(log).await;
        }
    });
}
}

The precise signature and requirements for the spawn function here will depend on what kind of executor you are using, so let's consider each case separately.

One note: in tokio and other existing executors, the spawn function takes a future, not an async closure. We are using a closure here because that is more analogous to the synchronous signature, but also because it enables a distinction between the initial state and the future that runs.

Thread-local executor

This is the easy case. Nothing has to be Send.

Work-stealing executor

In this case, the spawn function will require both that the initial closure itself is Send and that the future it returns is Send (so that it can be moved from place to place as code executes).

We don't have a good way to express this today! The problem is that there is a future that results from calling logs.next(), let's call it F. The future to be spawned has to be sure that F: Send. There isn't a good way to do this today, and even explaining the problem is surprisingly hard. Here is a "desugared version" of the program that shows what is needed:


#![allow(unused)]
fn main() {
trait AsyncIterator {
    type Item;
    type NextFuture: Future<Output = Option<Self::Item>>;

    fn next(&mut self) -> Self::NextFuture;
}

// Starts a task that will write out the logs in the background
async fn start_writing_logs<I>(
    logs: I
) 
where
    I: AsyncIterator<Item = String> + 'static + Send,
    I::NextFuture: Send,
{
    spawn(async move || {
        while let Some(log) = logs.next().await {
            send_to_server(log).await;
        }
    });
}
}

(With RFC 2289, you could write logs: impl AsyncIterator<Item = String, NextFuture: Send> + Send, which is more compact, but still awkward.)

Implementing AsyncRead

AsyncRead is being used here as a "stand-in" for some widely used trait that appears in the standard library. The details of the trait are not important.

Self is Send

In this scenario, the Self type being used to implement is sendable, but the actual future that is created is not.


#![allow(unused)]
fn main() {
struct MySocketBuddy {
    x: u32
}

impl AsyncRead for MySocketBuddy {
    async fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let ssh_key = Rc::new(vec![....]);
        do_some_stuff(ssh_key.clone());
        something_else(self.x).await;
    }
    // ERROR: `ssh_key` is live over an await;
    //        Self implements Send
    //        therefore resulting future must implement Send
}
}

Self is not Send

In this scenario, the Self type being used to implement is not sendable.


#![allow(unused)]
fn main() {
struct MySocketBuddy {
    x: u32,
    ssh_key: Rc<Vec<u8>>,
}

impl AsyncRead for MySocketBuddy {
    async fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        do_some_stuff(self.ssh_key.clone());
        something_else(self.x).await;
    }
    // OK
}
}

Challenges

This section describes the challenges that we need to find solutions for.

Bounding futures

The challenge is to be able to concisely and intuitively bound the futures resulting from async fn calls, as described in the Background Logging scenario.

Naming futures

Status

Seems unlikely to be adopted. Doesn't feel right, and doesn't give any expanded abilities, such as the ability to name the future type.

Summary

The challenge is to be able to name the future that results from a particular async function. This can be used, for example, when naming future types, or perhaps as part of bounding futures.

It is likely that this problem is best solved in the context of the impl trait initiative.

Dyn traits

Supporting dyn Trait when Trait contains an async fn is challenging:


#![allow(unused)]
fn main() {
trait Trait {
    async fn foo(&self);
}

impl Trait for TypeA {
    async fn foo(&self) { ... }
}

impl Trait for TypeB { ... }
}

Consider the desugared form of this trait:


#![allow(unused)]
fn main() {
trait Trait {
    type Foo<'s>: Future<Output = ()> + 's;

    fn foo(&self) -> Self::Foo<'_>;
}

impl Trait for TypeA {
    type Foo<'s> = impl Future<Output = ()> + 's;

    fn foo(&self) -> Self::Foo<'_> {
        async move { ... } // has some unique future type F_A
    }
}

impl Trait for TypeB { ... }
}

The primary challenge to using dyn Trait in today's Rust is that dyn Trait today must list the values of all associated types. This means you would have to write dyn for<'s> Trait<Foo<'s> = XXX> where XXX is the future type defined by the impl, such as F_A. This is not only verbose (or impossible), it also uniquely ties the dyn Trait to a particular impl, defeating the whole point of dyn Trait.

For this reason, the async_trait crate models all futures as Box<dyn Future<...>>:


#![allow(unused)]
fn main() {
#[async_trait]
trait Trait {
    async fn foo(&self);
}

// desugars to

trait Trait {
    fn foo(&self) -> Box<dyn Future<Output = ()> + Send + '_>;
}
}

This compiles, but it has downsides:

  • Allocation is required, even when not using dyn Trait.
  • The user must state up front whether Box<dyn Future...> is Send or not.
    • In async_trait, this is declared by writing #[async_trait(?Send)] if desired.

Desiderata

Here are some of the general constraints:

  • The ability to use async fn in a trait without allocation
  • When using a dyn Trait, the type of the future must be the same for all impls
    • This implies a Box or other pointer indirection, or something like inline async fn.
  • It would be nice if it were possible to use dyn Trait in an embedded context (without access to Box)
    • This will not be possible "in general", but it could be possible for particular traits, such as AsyncIterator

Bounding async drop

As a special case of the bounding futures problem, we must consider AsyncDrop.


#![allow(unused)]
fn main() {
async fn foo<T>(t: T) {
    runtime::sleep(22).await;
}
}

The type of foo(t) is going to be a future type like FooFuture<T>. This type will also include the types of all futures that get awaited (e.g., the return value of runtime::sleep(22) in this case). But in the case of T, we don't yet know what T is, and if it should happen to implement AsyncDrop, then there is an "implicit await" of that future. We have to ensure that the contents of that future are taken into account when we determine if FooFuture<T>: Send.

Guaranteeing async drop

One challenge with AsyncDrop is that we have no guarantee that it will be used. For any type MyStruct that implements AsyncDrop, it is always possible in Rust today to drop an instance of MyStruct in synchronous code. In that case, we cannot run the async drop. What should we do?

Obvious alternatives:

  • Panic or abort
  • Use some form of "block on" or other default executor to execute the async drop
  • Extend Rust in some way to prevent this condition.

We can also mitigate this danger through lints (e.g., dropping value which implements AsyncDrop).

Some types may implement both synchronous and asynchronous drop.

Implicit await with async drop

Consider this code:


#![allow(unused)]
fn main() {
async fn foo(input: &QueryInput) -> anyhow::Result<()> {
    let db = DatabaseHandle::connect().await;
    let query = assemble_query(&input)?;
    let results = db.perform_query(query).await;
    while let Some(result) = results.next().await? {
        ...
    }
}
}

Now let us assume that DatabaseHandle implements AsyncDrop to close the connection. There are numerous points here where db could be dropped (e.g., each use of ?). At each of those points, there is effectively an implicit await similar to AsyncDrop::async_drop(db).await. It seems clear that users should not be required to manually write those things, but it is also a weakening of the existing .await contract (that all blocking points are visible).

Design documents

This section contains detailed design documents aimed at various challenges.

Document | Challenges addressed | Status
Implied Send | Bounding futures | ❌
Trait multiplication | Bounding futures | 🤔
Inline async fn | Bounding futures, Dyn traits (partially) | 🤔
Custom dyn impls | Dyn traits | 🤔
Auto traits consider AsyncDrop | Bounding async drop | 🤔

Implied Send

Status

❌ Rejected. This idea can be quite productive, but it is not versatile (it rules out important use cases) and it is not supportive (it is confusing).

(FIXME: I think the principles aren't quite capturing the constraints here! We should adjust.)

Summary

Targets the bounding futures challenge.

The core idea of "implied Send" is to say that, by default at least, the future that results from an async fn must be Send if the Self type that implements the trait is Send.

In Chalk terms, you can think of this as a bound like

if (Implemented(Self: Send)) { 
    Implemented(Future: Send)
}

Mathematically, this can be read as Implemented(Self: Send) => Implemented(Future: Send). In other words, if you assume that Self: Send, then you can show that Future: Send.

Desugared semantics

If we extended the language with if bounds a la Chalk, then the desugared semantics of "implied send" would be something like this:


#![allow(unused)]
fn main() {
trait AsyncIterator {
    type Item;
    type NextFuture: Future<Output = Self::Item>
                   + if (Self: Send) { Send };

    fn next(&mut self) -> Self::NextFuture;
}
}

As a result, when you implement AsyncIterator, the compiler will check that your futures are Send if your input type is assumed to be Send.

What's great about this

The cool thing about this is that if you have a bound like T: AsyncIterator + Send, that automatically implies that any futures that may result from calling AsyncIterator methods will also be Send. Therefore, the background logging scenario works like this, which is perfect for a work stealing executor style.


#![allow(unused)]
fn main() {
async fn start_writing_logs(
    logs: impl AsyncIterator<Item = String> + Send + 'static
) {
    ...
}
}

What's not so great

Negative reasoning: Semver interactions, how to prove

In this proposal, when one implements an async trait for some concrete type, the compiler would presumably have to first check whether that type implements Send. If not, then it is ok if your futures do not implement Send. That kind of negative reasoning is actually quite tricky -- it has potential semver implications, for example -- although auto traits are more amenable to it than other things, since they already interact with semver in complex ways.

In fact, if we use the standard approach for proving implication goals, the setup would not work at all. The typical approach to proving an implication goal like P => Q is to assume P is true and then try to prove Q. But that would mean that we would just wind up assuming that the Self type is Send and trying to use that to prove the resulting Future is Send, not checking whether Self is Send to decide.

Not analogous to async fn outside of traits

With inherent async functions, we don't check whether the resulting future is Send right away. Instead, we remember what state it has access to, and then if there is some part of the code that requires a future to be Send, we check then.

But this "implied send" approach is different: the trait is effectively declaring up front that async functions must be send (if the Self is send, at least), and so you wind up with errors at the impl. This is true regardless of whether the future ever winds up being used in a spawn.

The concern here is not precisely that the result is too strict (that's covered in the next bullet), but rather that it will be surprising behavior for people. They'll have a hard time understanding why they get errors about Send in some cases but not others.

Stricter than is required for non-work-stealing executor styles

Building on the previous point, this approach can be stricter than what is required when not using a work stealing executor style.

As an example, consider a case where you are coding in a thread-local setting, and you have a struct like the following


#![allow(unused)]
fn main() {
struct MyCustomIterator {
    start_index: u32
}
}

Now you try to implement AsyncIterator. You know your code is thread-local, so you decide to use some Rc data in the process:


#![allow(unused)]
fn main() {
impl AsyncIterator for MyCustomIterator {
    type Item = u32;

    async fn next(&mut self) -> Option<Self::Item> {
        let ssh_key = Rc::new(vec![....]);
        do_some_stuff(ssh_key.clone());
        Some(something_else(self.start_index).await)
    }
}
}

But now you get a compilation error:

error: `next` must be `Send`, since `MyCustomIterator` is `Send`

Frustrating!

Trait multiplication

Status

Seems unlikely to be adopted, but may be the seed of a better idea

Summary

Introduce a new form of bound: trait multiplication. One can write T: Iterator * Send and it means T: Iterator<Item: Send> + Send (using the notation from RFC 2289). More generally, Foo * Bar means Foo + Bar but also that Foo::Assoc: Bar for each associated type Assoc defined in Foo (including anonymous ones defined by async functions).
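
As a rough sketch of what T: AsyncIterator * Send would expand to, written with the associated-type-bounds notation from RFC 2289 (a nightly feature); the NextFuture associated type stands in for the anonymous future of async fn next, and all item names here are invented:


#![feature(associated_type_bounds)]

use std::future::Future;

trait AsyncIterator {
    type Item;
    type NextFuture: Future<Output = Option<Self::Item>>;

    fn next(&mut self) -> Self::NextFuture;
}

// `T: AsyncIterator * Send` would mean: `T` itself is Send, and every
// associated type of the trait (including the anonymous futures) is Send.
fn requires_send_iteration<T>(iter: T)
where
    T: AsyncIterator<Item: Send, NextFuture: Send> + Send,
{
    let _ = iter;
}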

What's great about this

The cool thing about this is that it means that the check for whether things are Send occurs exactly when it is needed, and not at the definition site. This makes async functions in traits behave more analogously with ordinary impls. (In contrast to the implied Send proposal.)

With this proposal, the background logging scenario would play out differently depending on the executor style being used:

  • Thread-local: logs: impl AsyncIterator<Item = String> + 'static
  • Thread per core: logs: impl AsyncIterator<Item = String> + Send + 'static
  • Work stealing: logs: impl AsyncIterator<Item = String> * Send + 'static

The key observation here is that + Send only tells you whether the initial value (here, logs) is Send. The * Send is needed to say "and the futures resulting from this trait are Send", which is needed in work-stealing scenarios.

What's not so great about this

Complex

The distinction between + and * is subtle but crucial. It's going to be a new thing to learn and it just makes the trait system feel that much more complex overall.

Repetitive for multiple traits

If you had a number of async traits, you would need * Send for each one:


#![allow(unused)]
fn main() {
trait Trait1 {
    async fn method1(&self);
}

trait Trait2 {
    async fn method2(&self);
}

async fn foo<T>()
where
    T: Trait1 * Send + Trait2 * Send
{

}
}

Inline async fn

Status

Seems unlikely to be adopted, but may be the seed of a better idea

Status quo

Until now, the only way to make an "async trait" be dyn-safe was to use a manual poll method. The AsyncRead trait in futures, for example, is as follows:


#![allow(unused)]
fn main() {
pub trait AsyncRead {
    fn poll_read(
        self: Pin<&mut Self>, 
        cx: &mut Context<'_>, 
        buf: &mut [u8]
    ) -> Poll<Result<usize, Error>>;

    unsafe fn initializer(&self) -> Initializer { ... }
    
    fn poll_read_vectored(
        self: Pin<&mut Self>, 
        cx: &mut Context<'_>, 
        bufs: &mut [IoSliceMut<'_>]
    ) -> Poll<Result<usize, Error>> { ... }
}
}

Implementing these traits is a significant hurdle, as it requires the use of Pin. It also means that people cannot leverage .await syntax or other niceties that they are accustomed to. (See Alan hates writing a stream for a narrative description.)

It would be nice if we could rework those traits to use async fn. Today that is only possible using the async_trait procedural macro:


#![allow(unused)]
fn main() {
#[async_trait]
pub trait AsyncRead {
    async fn read(&mut self, buf: &mut [u8]) -> Result<usize, Error>;

    async fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> Result<usize, Error>;

    unsafe fn initializer(&self) -> Initializer { ... }
    
}
}

Unfortunately, using async-trait has some downsides (see Alan needs async in traits). Most notably, the traits are rewritten to return a Box<dyn Future>. For many purposes, this is fine, but in the case of foundational traits like AsyncRead, AsyncDrop, AsyncWrite, it is a significant hurdle:

  • These traits should be included in libcore and available to the no-std ecosystem, like Read and Write.
  • These traits are often on the performance "hot path" and forcing a memory allocation there could be significant for some applications.

There are some other problems with the poll-based design. For example, the buffer supplied to poll_read can change in between invocations (and indeed existing adapters take advantage of this sometimes). This means that the traits cannot be used for zero copy, although this is not the only hurdle.

For drop especially, the state must be embedded within the self type

If we want to have an async version of drop, it is really important that it does not return a separate future, but only makes use of state embedded within the type. This is because we might have a Box<dyn Future> or some other type that implements AsyncDrop, but where we don't know the concrete type. We are going to want to be able to drop those, which implies that they will live on the stack, which implies that we have to know the contents of the resulting future to know if it is Send.
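
One hypothetical way to satisfy this constraint is a poll-based formulation in which all cleanup state lives inside Self and no separate future type is returned (this is only a sketch; the trait name and shape are invented):


use std::pin::Pin;
use std::task::{Context, Poll};

trait AsyncDropInline {
    // Called repeatedly until it returns `Poll::Ready(())`. Any state the
    // cleanup needs across suspension points must already be stored in `Self`,
    // so dropping a `dyn` value requires no extra, separately sized future.
    fn poll_drop(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()>;
}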

Problem: returning a future

The fundamental problem that makes async fn not dyn-safe (and the reason that allocation is required) is that every implementation of AsyncRead requires different amounts of state. The future that is returned is basically an enumeration with fields for each value that may be live across an await point, and naturally that will vary per implementation. This means that code which doesn't know the precise type that it is working with cannot predict how much space that type will require. One solution is certainly boxing, which sidesteps the problem by returning a pointer to memory in the heap.
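
A small, runnable illustration of the point (the two async functions here are invented examples): the futures returned by different async fns have different sizes, because each captures different state across its await points.


use std::mem::size_of_val;

async fn small_state() {
    let x: u8 = 1;
    std::future::ready(()).await;
    let _ = x; // `x` is live across the await, so it is stored in the future
}

async fn large_state() {
    let buf = [0u8; 1024];
    std::future::ready(()).await;
    let _ = buf; // 1024 bytes live across the await
}

fn main() {
    // The two futures have different sizes, so a caller that only knows
    // "some type implementing the trait" cannot reserve space for the returned
    // future without boxing it or storing the state inline in the receiver.
    println!("{}", size_of_val(&small_state()));
    println!("{}", size_of_val(&large_state()));
}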

Using poll methods, as the existing traits do, sidesteps this in a different way: the poll methods basically require that any state that the AsyncRead impl requires across invocations of poll must be present within the self field itself. This is a perfectly valid solution for many applications, but figuring out that state and tracking it efficiently is tedious for users.

Proposal: "inline" futures

The idea is to allow users to opt in to "inline futures". Users would write a repr attribute on traits that contain async fn methods:


#![allow(unused)]
fn main() {
#[repr(inline_async)]
trait AsyncRead {
    async fn read(&mut self, buf: &mut [u8]) -> Result<usize, Error>;
    
    ...
}
}

The choice of repr is significant here:

  • Like repr on a struct, this is meant to be used for things that affect how the code is compiled and its efficiency, but which don't affect the "mental model" of how the trait works.
  • Like repr on a struct, using repr may imply some limitations on the things you can do with the trait in order to achieve those benefits.

When a trait is marked as repr(inline_async), the state for all of its async functions will be added into the type that implements the trait (this attribute could potentially also be used per method). This does imply some key limitations:

  • repr(inline_async) traits can only be implemented on structs or enums defined in the current crate. This allows the compiler to append those fields into the layout of that struct or enum.
  • repr(inline_async) traits can only contain async fn with &mut self methods.

Desugaring

The desugaring for an inline_async function is different. Rather than an async fn becoming a function that returns an impl Future, the async fn always returns a value of a fixed type: a kind of variant on Future::PollFn, which simply invokes the poll_read function each time it is polled. What we want is something like this, although this doesn't quite work (and relies on unstable features Niko doesn't love):


#![allow(unused)]
fn main() {
trait AsyncRead {
    // Standard async fn desugaring, with a twist:
    fn read(&mut self, buf: &mut [u8]) -> Future::PollFn<
        typeof(<Self as AsyncRead>::poll_read)
    >;

    // The corresponding poll function; any state it needs is stored inline in `Self`:
    fn poll_read(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<usize, Error>>;
}
}

Basically the read method would

  • initialize the state of the future and then
  • construct a Future::PollFn-like struct that contains a pointer to the poll_read function.

FAQ

What's wrong with that desugaring?

The desugaring is pretty close. It has the nice property that, when invoked with a known type, the Future::PollFn dispatches statically the poll function, so there is no dynamic dispatch or loss of efficiency.

However, it also has a problem. The return type is still dependent on Self, so per our existing dyn-safety rules, that doesn't work.

It should be possible to extend those rules, though. All that is needed is a bit of "adaptation glue" in the code that is included in the vtable so that it will convert from a Future::PollFn for some fixed T to one that uses a fn pointer. That seems eminently doable, but I'm not sure if it can be expressed in the language today.

Pursuing this road might lead to a fundamental extension in dyn safety, which would be nice!

What state is added precisely to the struct?

  • An integer recording the await point where the future is blocked
  • Fields for any data that outlives the await

What if I don't want lots of state added to my struct?

We could limit the use of variables live across an await.

Could we extend this to other traits?

e.g., simulacrum mentioned -> impl Iterator in a (dyn-safe) trait. Seems plausible.

Why do you only permit &mut self methods?

Since the state for the future is stored inline in the struct, we can only have one active future at a time. Using &mut self ensures that the poll function is only in use by one future at a time, since that future would be holding an &mut reference to the receiver.

We would like to implement AsyncRead for all &mut impl AsyncRead, how can we enable that?

I think this should be possible. The trick is that the poll function would just dispatch to another poll function. We might be able to support it by detecting the pattern of the async fn directly awaiting something reachable from self and supporting that for arbitrary types:


#![allow(unused)]
fn main() {
impl<T: AsyncRead> AsyncRead for &mut T {
    async fn read(&mut self, buf: &mut [u8]) -> Result<usize, Error> {
        T::read(self, buf).await
    }
}
}

Basically this compiles to a poll_read that just dispatches to another poll_read with some derefs.

Can you implement both AsyncRead and AsyncWrite for the same type with this technique?

You can, but you can't simultaneously read and write from the same value. You would need a split-like API.

Custom dyn impls

As described in dyn traits, dyn Trait types cannot include the types of each future without defeating their purpose; but outside of a dyn context, we want those associated types to have unique values for each impl. Threading this needle requires extending Rust so that the value of an associated type can be different for a dyn Trait and for the underlying impl.

How it works today

Conceptually, today, there is a kind of "generated impl" for each trait. This impl implements each method by indirecting through the vtable, and it takes the value of associated types from the dyn type:


#![allow(unused)]
fn main() {
trait Foo {
    type Bar;

    fn method(&self);
}

impl<B> Foo for dyn Foo<Bar = B> {
    type Bar = B;

    fn method(&self) {
        let f: fn(&Self) = get_method_from_vtable(self);
        f(self)
    }
}
}

Meanwhile, at the point where a type (say u32) is coerced to a dyn Foo, we generate a vtable based on the impl:


#![allow(unused)]
fn main() {
// Given
impl Foo for u32 {
    fn method(self: &u32) { XXX }
}

// we would compile `method` to something like:
// fn `<u32 as Foo>::method`(self: &u32) { XXX }

fn eg() {
    let x: u32 = 22;
    &x as &dyn Foo // <-- this case
}

// generates a vtable with a pointer to that method:
//
// Vtable_Foo = [ ..., `<u32 as Foo>::method` ]
}

Note that there are some known problems here, such as soundness holes in the coherence check.

Rough proposal

What we would like is the ability for this "dyn" impl to diverge more from the underlying impl. For example, given a trait Foo with an async fn method:


#![allow(unused)]
fn main() {
trait Foo {
    async fn method(&self);
}
}

The compiler might generate an impl like the following:


#![allow(unused)]
fn main() {
impl Foo for dyn Foo {
    //          ^^^^^^^ note that this type doesn't include Bar = ...

    type Bar = Box<dyn Future<Output = ()>>;
    //         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ because the result is hardcoded

    fn method(&self) -> Box<dyn Future<Output = ()>> {
        let f: fn(&Self) -> Box<dyn Future<Output = ()>> = get_method_from_vtable(self);
        f(self) 
    }
}
}

The vtable, meanwhile, resembles what we had before, except that it doesn't point directly to <u32 as Foo>::method, but rather to a wrapper function (let's call it methodX) that has the job of coercing from the concrete type into a Box<dyn Future>:

// Vtable_Foo = [ ..., `<u32 as Foo>::methodX`]
// fn `<u32 as Foo>::method`(self: &u32) { XXX  }
// fn `<u32 as Foo>::methodX`(self: &u32) -> Box<dyn> { Box::new(TheFuture)  }

Auto traits

To handle "auto traits", we need multiple impls. For example, assuming we adopted trait multiplication, we would have multiple impls, one for dyn Foo and one for dyn Foo * Send:


#![allow(unused)]
fn main() {
trait Foo {
    async fn method(&self);
}

impl Foo for dyn Foo {
    type Bar = Box<dyn Future<Output = ()>>;

    fn method(&self) -> Box<dyn Future<Output = ()>> {
            
    }
}

impl Foo for dyn Foo * Send {
    //          ^^^^^^^^^^^^^^

    type Bar = Box<dyn Future<Output = ()> + Send>;
    //         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    fn method(&self) -> Box<dyn Future<Output = ()> + Send> {
            ....
    }
}

// compiles to:
//
// Vtable_Foo = [ ..., `<u32 as Foo>::methodX`]
// fn `<u32 as Foo>::method`(self: &u32) { XXX  }
// fn `<u32 as Foo>::methodX`(self: &u32) -> Box<dyn> { Box::new(TheFuture)  }
}

Hard-coding box

One challenge is that we are hard-coding Box in the above impls. We could control this in a number of ways:

  • Annotate the trait with an alternate wrapper type
  • Extend dyn types with some kind of indicator of the wrapper (dyn(Box)) that they use for this case
  • Generate impls for Box<dyn> -- has several shortcomings

Applicable

Everything here is applicable more broadly, for example to types that return Iterator.

It'd be nice if we extended this capability of "writing your own dyn impls" to end-users.

Auto traits consider async drop

One way to solve the bounding async drop challenge is to require that, if a type X implements AsyncDrop, then X: Send only if the type of its async drop future is also Send. The drop trait is already integrated quite deeply into the language, so adding a rule like this would not be particularly challenging.

Simple names

One simple way to give names to async functions is to just generate a name based on the method. For example:


#![allow(unused)]
fn main() {
trait AsyncIterator {
    type Item;
    async fn next(&mut self) -> Self::Item;
}
}

could desugar to


#![allow(unused)]
fn main() {
trait AsyncIterator {
    type Item;
    type Next<'me>: Future<Output = Self::Item> + 'me;
    fn next(&mut self) -> Self::Next<'_>;
}
}

Users could then name the future with <T as AsyncIterator>::Next<'a> and bound it as needed.
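
For example, a generic function could then bound that future directly; this is a sketch assuming generic associated types, with the desugared trait repeated for completeness and the spawn_logging function invented for illustration:


use std::future::Future;

trait AsyncIterator {
    type Item;
    type Next<'me>: Future<Output = Self::Item> + 'me
    where
        Self: 'me;

    fn next(&mut self) -> Self::Next<'_>;
}

fn spawn_logging<T>(iter: T)
where
    T: AsyncIterator<Item = String> + Send + 'static,
    for<'a> <T as AsyncIterator>::Next<'a>: Send,
{
    // ... hand `iter` off to a work-stealing executor ...
    let _ = iter;
}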

This is a simple solution, but not a very general one, and perhaps a bit surprising (for example, there is no explicit declaration of Next).

Bound Items

Summary

Targets the bounding futures challenge.

This is a series of smaller changes (see the detailed design), with the goal of allowing the end user to name the bound described in the challenge text in this way (I'll let the code speak for itself):


#![allow(unused)]
fn main() {
pub trait AsyncIterator {
    type Item;
    async fn next(&mut self) -> Option<Self::Item>;
}

pub async fn start_writing_logs<F>(
    logs: F
) where IsThreadSafeAsyncIterator<F>, F: 'static {  // IsThreadSafeAsyncIterator is defined below
    todo!()
}

pub bound IsThreadSafeAsyncIterator<T> {
    T: AsyncIterator + Send,
    // Add a `fn` keyword here to refer to the type of associated function.
    <T as AsyncIterator>::fn next: SimpleFnOnceWithSendOutput,
    // Use `*` to refer to potentially long list of all associated functions.
    // this is useful in certain cases.
    <T as AsyncIterator>::fn *: SimpleFnOnceWithSendOutput,
}

}

Detailed design

I'm not good at naming things, and all names and identifiers are subject to bikeshedding.

  • Bound Item (language construct). Allows the user to name a combination of bounds. No Self allowed here. Could replace the trait_alias language feature. Improves the ergonomics of much existing code if used properly.

    I'm hesitant on which of PascalCase, snake_case, or UPPER_CASE should be used for this naming though.

  • Syntax for referring to the type of an associated item in a path. Currently using the fn keyword. Could expand to the const and type keywords if needed. Turbofish might be involved if the associated item is generic.

  • SimpleFnOnce trait (language item). A function can only accept one set of arguments, so all functions and closures shall implement this, while user-defined callable types don't have to. This is used for reasoning about the output type of associated functions.

    
    #![allow(unused)]
    fn main() {
    pub trait SimpleFnOnce: FnOnce<Self::Arg> {
        type Arg;
    }
    }
    
  • SimpleFnOnceWithSendOutput trait (Lib construct or user-defined).

    
    #![allow(unused)]
    fn main() {
    pub trait SimpleFnOnceWithSendOutput : SimpleFnOnce where
        Self::Output: Send
    {}
    
    impl<T> SimpleFnOnceWithSendOutput for T where T:SimpleFnOnce, T::Output: Send {}
    }
    

What's great about this

  • Absolutely minimal type system changes.
  • Quite easy to learn. (Name it and use the name)
  • Easy to write library documentation, and give examples.

What's not so great

  • New syntax item constructs. Three parsers (rustc, ra, syn) need to change to support this.
  • Very verbose code writing style. Many more lines of code.
  • Will become part of library API in many cases.

With clauses

Status

Crazy new idea that solves all problems

Summary

  • Introduce a with(x: T) clause that can appear wherever where clauses can appear
    • These variables are in scope in any (non-const) code block that appears within those scopes.
  • Introduce a new with(x = value) { ... } expression
    • Within this block, you can invoke functions that have a with(x: T) clause (presuming that value is of type T); you can also invoke code which calls such functions transitively.
    • The values are propagated from the with block to the functions you invoke.

More detailed look

Simple example

Suppose that we have a generic visitor interface in a crate visit:


#![allow(unused)]
fn main() {
trait Visitor {
    fn visit(&self);
}

impl<V> Visitor for Vec<V>
where
    V: Visitor,
{
    fn visit(&self) {
        for e in self {
            e.visit();
        }
    }
}
}

I would like to use this interface in my crate. But I was hoping to increment a counter each time one of my types is visited. Unfortunately, the Visitor trait doesn't offer any way to thread access to this counter into the impl. With with, though, that's no problem.


#![allow(unused)]
fn main() {
struct Context { counter: usize }

struct MyNode {

}

impl Visitor for MyNode 
with(cx: &mut Context)
{
    fn visit(&self) {
        cx.counter += 1;
    }
}
}

Now I can use this visitor trait as normal:


#![allow(unused)]
fn main() {
fn process_item() {
    let mut cx = Context { counter: 0 };
    let v = vec![MyNode, MyNode, MyNode];
    with(cx = &mut cx) {
        v.visit();
    }
    assert_eq!(cx.counter, 3);
}
}

How it works

We extend the environment with a with(name: Type) clause. When we typecheck a with(name = value) { ... } expression, we enter those clauses into the environment. When we check impls that contain with clauses, they match against those clauses like any other where clause.

After matching an impl, we are left with a "residual" of implicit parameters. When we monomorphize a function applied to some particular types, we will check the where clauses declared on the function against those types and collect the residual parameters. These are added to the function and supplied by the caller (which must have them in scope).
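
One way to picture the "residual" is as an ordinary extra parameter that the compiler threads through the call chain after monomorphization. The following is a hypothetical desugaring, not proposed surface syntax; the types repeat the visitor example above:


struct Context { counter: usize }
struct MyNode;

// The `with(cx: &mut Context)` clause on the impl becomes an explicit parameter.
fn visit_my_node(_node: &MyNode, cx: &mut Context) {
    cx.counter += 1;
}

fn process_item_desugared() {
    let mut cx = Context { counter: 0 };
    let v = vec![MyNode, MyNode, MyNode];
    for node in &v {
        // The residual parameter is passed explicitly at each call site that
        // (transitively) reaches an impl with a `with` clause.
        visit_my_node(node, &mut cx);
    }
    assert_eq!(cx.counter, 3);
}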

Things to overcome

Dyn value construction: we need some way to permit impls that use with to be made into dyn values. This is very hard, maybe impossible. The problem is that we don't want to "capture" the with values into the dyn -- so what do we do if somebody packages up a Box<dyn> and puts it somewhere?

We could require that context values implement Default but .. that stinks. =)

We could panic. That kind of stinks too!

We could limit to traits that are not dyn safe, particularly if there was a manual impl of dyn safety. The key problem is that, today, for a dyn safe trait, one can make a dyn trait without knowing the source type:


#![allow(unused)]
fn main() {
fn foo<T: Visitor + 'static>(v: T) {
    let x: Box<dyn Visitor> = Box::new(v);
}
}

But, now, what happens if the Box<dyn Visitor> is allowed to escape the with scope, and the methods are invoked?

Conceivably we could leverage lifetimes to prevent this, but I'm not exactly sure how. It would imply a kind of "lifetime view" on the type T that ensures it is not considered to outlive the with scope. That doesn't feel right. What we really want to do is to put a lifetime bound of sorts on the... use of the where clause.

We could also rework this in an edition, so that this capability is made more explicit. Then only traits and impls in the new edition would be able to use with clauses. This would harm edition interop to some extent, we'd have to work that out too.

📚 Explainer

The "explainer" is "end-user readable" documentation that explains how to use the feature being deveoped by this initiative. If you want to experiment with the feature, you've come to the right place. Until the feature enters "feature complete" form, the explainer should be considered a work-in-progress.

Async functions in traits are expected to "roll out" in stages, so we actually have several explainers, one for each phase:

  • Phase 1: Stable compiler supports async fn in traits, dynamic dispatch supported via dyner crate
  • Phase 2: Build on async fn in traits support:
    • async closures
    • async drop
    • AsyncRead, AsyncWrite, Stream (the portability initiative is driving the design of these traits)
  • Phase 3: Circle back to dynamic dispatch, incorporating the lessons we've learned from the dyner crate

Async Fundamentals Stage 1 Explainer

This document describes Stage 1 of the "Async Fundamentals" plans. Our eventual goal is to make it so that, in short, wherever you write fn, you can also write async fn and have things work as transparently as possible. This includes in traits, even special traits like Drop.

Now that I've got you all excited, let me bring you back down to earth: That is our goal, but Stage 1 does not achieve it. However, we do believe that Stage 1 does enable async functions in traits in such a way that everything is possible, though it may not be easy.

The hope is that by having a stage 1 where stable Rust exposes the core functionality needed for async traits, we can get more data about how async traits will be used in practice. That can guide us as we try to resolve some of the (rather sticky) design questions that block making things more ergonomic. =)

Review: how async fn works

When you write an async function in Rust:


#![allow(unused)]
fn main() {
async fn test(x: &u32) -> u32 {
    *x
}
}

This is actually shorthand for a function that returns an impl Future and which captures all of its arguments. The body of this function is an async move block, which simply creates and returns a future:


#![allow(unused)]
fn main() {
fn test<'x>(x: &'x u32) -> impl Future<Output = u32> + 'x {
    async move {
        *x
    }
}
}

The await operation can be performed on any value that implements Future. It causes the async fn to execute the future's "poll" function to see if its value is ready. If not, then it suspends the current frame until it is re-invoked.

Async fn in traits

Writing an async function in a trait desugars in just the same way as an async function elsewhere. Hence this trait:


#![allow(unused)]
fn main() {
trait HttpRequestProvider {
    async fn request(&mut self, request: Request) -> Response;
}
}

becomes:


#![allow(unused)]
fn main() {
trait HttpRequestProvider {
    fn request<'a>(&'a mut self, request: Request) -> impl Future<Output = Response> + 'a;
}
}

On stable Rust today, impl Trait is not permitted in "return position" within a trait, but we plan to allow it. It will desugar to a function that returns an associated type with the same name as the method itself:


#![allow(unused)]
fn main() {
trait HttpRequestProvider {
    type request<'a>: Future<Output = Response> + 'a;
    fn request<'a>(&'a mut self, request: Request) -> Self::request<'a>;
}
}

Calling t.request(...) thus returns a value of type T::request<'a>. We will reference this request associated type later.

Using async fn in traits

When you have async fn in a trait, you can use it with generic types in the usual way. For example, you could write a function that uses the above trait like so:


#![allow(unused)]
fn main() {
async fn process_request(
    mut provider: impl HttpRequestProvider,
    request: Request,
) {
    let response = provider.request(request).await;
    ...
}
}

Naturally you could also write the above example using explicit generics as well (just as with any impl trait):


#![allow(unused)]
fn main() {
async fn process_request<P>(
    mut provider: P,
    request: Request,
) where
    P: HttpRequestProvider,
{
    let response = provider.request(request).await;
    ...
}
}

Naming the future that is returned

Like any function that returns an impl Trait, the return type from an async fn in a trait is anonymous. However, there are times when it can be very useful to be able to talk about it. One particular place where this comes up is when spawning tasks. Consider a function that invokes process_request (above) many times in parallel:


#![allow(unused)]
fn main() {
async fn process_all_requests<P>(
    provider: P,
    requests: Vec<RequestData>,
) where
    P: HttpRequestProvider + Clone + Send,
{
    let handles =
        requests
            .into_iter()
            .map(|r| {
                let provider = provider.clone();
                tokio::spawn(async move {
                    process_request(provider, r).await;
                })
            })
            .collect::<Vec<_>>();
    join_all(handles).await;
}
}

As is, compiling this function gives the following error, even though P is marked as Send:

error[E0277]: the future returned by `HttpRequestProvider::request` (invoked on `P`) cannot be sent between threads safely
   --> src/lib.rs:21:17
    |
21  |                 tokio::spawn(async move {
    |                 ^^^^^^^^^^^^ `<P as HttpRequestProvider>::request` cannot be sent between threads safely
    |
note: future is not `Send` as it awaits another future which is not `Send`
   --> src/lib.rs:35:5
    |
35  |     let response = provider.request(request).await?;
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^ this future is not `Send`
note: required by a bound in `tokio::spawn`
   --> /playground/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.13.0/src/task/spawn.rs:127:21
    |
127 |         T: Future + Send + 'static,
    |                     ^^^^ required by this bound in `tokio::spawn`

The problem here is that, while P: Send, the future returned by request is not necessarily Send (it could, for example, create Rc state and store it in local variables). To fix this, one can add the bound on the P::request type:


#![allow(unused)]
fn main() {
async fn process_all_requests<P>(
    provider: P,
    requests: Vec<RequestData>,
)
where
    P: HttpRequestProvider + Clone + Send,
    for<'a> P::request<'a>: Send,
{
    let handles =
        requests
            .into_iter()
            .map(|r| {
                let provider = provider.clone();
                your_runtime::spawn(async move {
                    process_request(provider, r).await;
                })
            })
            .collect::<Vec<_>>();
    your_runtime::join_all(handles).await;
}
}

The new bound is here:


#![allow(unused)]
fn main() {
    for<'a> P::request<'a>: Send,
}

The "higher-ranked" requirement says that "no matter what lifetime provider has, the result is Send". Specifying these higher-ranked lifetimes can be a bit cumbersome, and sometimes the GATs accumulate quite a large number of parameters. Therefore, the compiler also supports a shorthand form; if you leave off the GAT parameters, the for<'a> is assumed:


#![allow(unused)]
fn main() {
where
    P: HttpRequestProvider + Clone + Send,
    P::request: Send,
}

In fact, using a nightly feature, we can write this more compactly:


#![allow(unused)]
fn main() {
where
    P: HttpRequestProvider<request: Send> + Clone + Send,
}
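
To get a feel for this shorthand outside the async context, here is a small illustration using an existing trait (Iterator). The two functions below impose equivalent bounds; the second uses the Assoc: Bound shorthand, which is gated on the nightly associated_type_bounds feature at the time of writing.


#![feature(associated_type_bounds)] // nightly feature, at the time of writing

use std::fmt::Debug;

// Longhand: a separate bound on the associated type.
fn print_all_longhand<I>(iter: I)
where
    I: Iterator,
    I::Item: Debug,
{
    for item in iter {
        println!("{:?}", item);
    }
}

// Shorthand: `Iterator<Item: Debug>` means exactly the same thing.
fn print_all_shorthand<I>(iter: I)
where
    I: Iterator<Item: Debug>,
{
    for item in iter {
        println!("{:?}", item);
    }
}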

When would Send bounds be required?

One thing we are not sure of is how often Send bounds would be required in practice. The main place where such bounds are required is generic async functions that will be "spawned". For example, the process_request function itself didn't require any kind of bounds on request:


#![allow(unused)]
fn main() {
async fn process_request(
    mut provider: impl HttpRequestProvider,
    request: Request,
) {
    let response = provider.request(request).await;
    ...
}
}

This is because nothing in this function was required to be Send. The problems only arise when you have a call to spawn or some other function that imposes a Send bound -- and even then, there may be no issue. For example, invoking process_request on a known type doesn't require any sort of where clauses:


#![allow(unused)]
fn main() {
your_runtime::spawn(async move {
    process_request(MyProvider::new(), some_request).await;
})
}

This works because the compiler is able to see that process_request is being called with a MyProvider, and hence it can determine exactly which future will be created when process_request invokes provider.request, and it can see that that future is Send.

The problems only arise when you invoke spawn on a value of a generic type, like P in our example above. In that case, the compiler doesn't know exactly what future will run, so it needs some bounds on P::request.
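
The same distinction already exists in today's Rust with threads and associated types, which may help build intuition. The analogy below is our own, using std::thread::spawn in place of an async runtime and Iterator::Item in place of P::request.


use std::thread;

// Concrete type: the compiler can see that the value sent back to the caller
// (an Option<i32>) is Send, so no extra bounds are needed.
fn first_on_another_thread(v: Vec<i32>) -> Option<i32> {
    thread::spawn(move || v.into_iter().next()).join().unwrap()
}

// Generic type: the item travels back through the JoinHandle, so we must say
// that the associated type `I::Item` is Send -- just like `P::request: Send`.
fn first_on_another_thread_generic<I>(mut iter: I) -> Option<I::Item>
where
    I: Iterator + Send + 'static,
    I::Item: Send + 'static,
{
    thread::spawn(move || iter.next()).join().unwrap()
}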

Caveat: Async fn in traits are not dyn safe

There is one key limitation in stage 1: traits that contain async fn are not dyn safe. Instead, support for dynamic dispatch is provided via a procedural macro called #[dyner]. In future stages, we expect to support dynamic dispatch "natively", but we still need more experimentation and feedback on the requirements to find the best design.

The #[dyner] macro works as follows. You attach it to a trait:


#![allow(unused)]
fn main() {
#[dyner]
trait HttpRequestProvider {
    async fn request(&mut self, request: Request) -> Response;
}
}

It will generate, alongside the trait, a type DynHttpRequestProvider (Dyn + the name of the trait, in general):


#![allow(unused)]
fn main() {
// Your trait, unchanged:
trait HttpRequestProvider { .. }

// A struct to represent a trait object:
struct DynHttpRequestProvider<'me> { .. }

impl<'me> HttpRequestProvider for DynHttpRequestProvider<'me>
{ .. }
}

This type is a replacement for Box<dyn HttpRequestProvider>. You can create an instance of it by writing DynHttpRequestProvider::from(x), where x is of some type that implements HttpRequestProvider. The DynHttpRequestProvider implements HttpRequestProvider, but it forwards each method call through dynamic dispatch.
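
To build a rough intuition for what the generated type might look like, here is a hedged, self-contained sketch. Every detail is illustrative rather than dyner's actual output, and because async fn in traits is not yet available, the trait itself is shown here in an already-desugared, boxed form that the real macro would not require.


use std::future::Future;
use std::pin::Pin;

pub struct Request;
pub struct Response;

// Stand-in for the user's trait, written with a boxed return type so that the
// sketch compiles today. (With #[dyner], the user's trait keeps its async fn.)
pub trait HttpRequestProvider {
    fn request<'a>(
        &'a mut self,
        request: Request,
    ) -> Pin<Box<dyn Future<Output = Response> + 'a>>;
}

// The generated wrapper: owns a boxed trait object.
pub struct DynHttpRequestProvider<'me> {
    inner: Box<dyn HttpRequestProvider + 'me>,
}

impl<'me> DynHttpRequestProvider<'me> {
    pub fn from(provider: impl HttpRequestProvider + 'me) -> Self {
        Self { inner: Box::new(provider) }
    }
}

// The wrapper implements the trait too, forwarding each call through the
// vtable of the boxed object.
impl<'me> HttpRequestProvider for DynHttpRequestProvider<'me> {
    fn request<'a>(
        &'a mut self,
        request: Request,
    ) -> Pin<Box<dyn Future<Output = Response> + 'a>> {
        self.inner.request(request)
    }
}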

Example usage

Suppose we had a Context type that was generic over an HttpRequestProvider:


#![allow(unused)]
fn main() {
struct Context<T: HttpRequestProvider> {
   provider: T,
}

async fn process_request(
    provider: impl HttpRequestProvider,
    request: Request,
) -> anyhow::Result<()> {
    let cx = Context {
        provider,
    };
    // ...
}
}

Now suppose that we wanted to remove the T type parameter, so that Context included a trait object. You could do this with dyner by altering the code as follows:


#![allow(unused)]
fn main() {
struct Context<'me> {
   provider: DynHttpRequestProvider<'me>,
}

async fn process_request(
    provider: impl HttpRequestProvider,
    request: Request,
) -> anyhow::Result<()> {
    let cx = Context {
        provider: DynHttpRequestProvider::from(provider),
    };
    // ...
}
}

You might be surprised to see the 'me parameter -- this is needed because we don't know whether the provider given to us includes borrowed data. If we knew that it had no references, we might also write:


#![allow(unused)]
fn main() {
struct Context {
   provider: DynHttpRequestProvider<'static>,
}

async fn process_request(
    provider: impl HttpRequestProvider + 'static,
    request: Request,
) -> anyhow::Result<()> {
    let cx = Context {
        provider: DynHttpRequestProvider::from(provider),
    };
    // ...
}
}

Dyner with references

Dyner currently requires an allocator. The DynHttpRequestProvider::from method, for example, allocates a Box internally, and invoking an async function allocates a box to store the resulting future (the async-trait crate does the same; the difference with this approach is that you only incur the boxing when you are using dynamic dispatch).

Sometimes, though, we would like to construct our dynamic objects without using Box. This might be because we only have &T access, or simply because we don't need to allocate and would prefer to avoid the runtime overhead. In cases like that, you can use the from_ref and from_mut methods. from_ref takes a &impl Trait and gives you a Ref<DynTrait>; Ref is a special type that ensures you only use &self methods. from_mut works the same way, but for &mut impl Trait references. Since the provider type never escapes process_request, we could rework our previous example to use mutable references instead of boxing:


#![allow(unused)]
fn main() {
struct Context<'me> {
   provider: &'me mut DynHttpRequestProvider<'me>,
}

async fn process_request(
    mut provider: impl HttpRequestProvider,
    request: Request,
) -> anyhow::Result<()> {
    let cx = Context {
        provider: DynHttpRequestProvider::from_mut(&mut provider),
    };
    // ...
}
}

The rest of the code would work just the same... you can still invoke methods in the usual way (e.g., cx.provider.request(r).await).

Dyner and no-std

Because dyner requires an allocator, it is not currently suitable for no-std settings. We are exploring alternatives here: it's not clear if there is a suitable general purpose mechanism that could be used in settings like that. But part of the beauty of the dyner approach is that, in principle, there might be "dyner-like" crates that can be used in a no-std environment.

Dyner all the things

Dyner is meant to be a "general purpose" replacement for dyn Trait. It expands the scope of what kinds of traits are dyn safe to include traits that use impl Trait in argument- and return-position. For example, the following trait is not dyn safe, but it is dyner safe:


#![allow(unused)]
fn main() {
#[dyner]
trait Print {
    fn print(&self, screen: &mut impl Screen);
    //                           ^^^^^^^^^^^
    //                    impl trait is not dyn safe;
    //                    it is dyner-safe if the trait
    //                    is also processed with dyner
}

#[dyner]
trait Screen {
    fn output(&mut self, x: usize, y: usize, c: char);
}
}

Dyner for external traits

For dyner to work well, all the traits that you reference from your dyner trait must be processed with dyner. But sometimes those traits may be out of your control! What can you do then? To support this, dyner permits you to apply dyner to an external trait definition. For example, suppose you had this trait, referencing the Clear trait from cc_traits:


#![allow(unused)]
fn main() {
#[dyner]
trait MyOp {
    fn apply_op(x: &mut impl cc_traits::Clear);
}
}

Applying dyner to MyOp will yield an error that "the type cc_traits::DynClear is not found". This is because the code that dyner generates expects that, for each Foo trait, there will be a DynFoo type available at the same location, and in this case there is not. You can fix this by using the dyner::external_trait! macro, but unfortunately doing so requires that you copy and paste the trait definition:


#![allow(unused)]
fn main() {
mod ext {
    dyner::external_trait! {
        pub trait cc_traits::Clear {
            fn clear(&mut self);
        }
    }
}
}

The dyner::external_trait! macro will generate two things:

  • A re-export of cc_traits::Clear
  • A type DynClear

Now we can rewrite our MyOp trait to reference Clear from this new location and everything works:


#![allow(unused)]
fn main() {
mod ext { ... }

#[dyner]
trait MyOp {
    fn apply_op(x: &mut impl ext::Clear);
    //                       ^^^ Changed this.
}
}

The ext::Clear path is just a re-exported version of cc_traits::Clear, so there is no change there, but the type ext::DynClear is well-defined.

Dyner for things in the stdlib

The dyner crate already includes dyner-ified versions of things from the stdlib. However, to take advantage of them, you have to import the traits from dyner::std. For example, instead of referencing std::iter::Iterator, try dyner::std::iter::Iterator:


#![allow(unused)]
fn main() {
use dyner::std::iter::{Iterator, DynIterator};
//  ^^^^^^^                      ^^^^^^^^^^^

#[dyner]
trait WidgetStream {
    fn request(&mut self, r: impl Iterator<Item = String>);
    //                            ^^^^^^^^ the macro will look for DynIterator
}
}

You can write use dyner::std::prelude::* to get all the same traits as are present in the std prelude, along with their Dyn equivalents.

Feedback requested

We'd love to hear what you think. Nothing here is set in stone, quite far from it!

Some questions we are specifically interested in getting answers to:

  • What about this document was confusing to you? We want you to understand, of course, but we also want to improve the way we explain things.
  • How often do you think you would use static vs dynamic dispatch in traits?
  • How are you managing async fn in traits today? Do you think the dyner crate would work for you?
  • How often do you think you use functions that require Send? Do you anticipate any problems from the bound scheme described above?

Async Fundamentals Mini Vision Doc

Grace and Alan are working at BoggleyCorp, developing a network service using async I/O in Rust. Internally, they use a trait to manage http requests, which allows them to easily change to different sorts of providers:


#![allow(unused)]
fn main() {
trait HttpRequestProvider {
    async fn request(&mut self, request: Request) -> anyhow::Result<Response>;
}
}

They start out using impl HttpRequestProvider and things seem to be working very well:


#![allow(unused)]
fn main() {
async fn process_request(
    mut provider: impl HttpRequestProvider,
    request: Request,
) -> anyhow::Result<()> {
    let response = provider.request(request).await?;
    process_response(response).await;
    Ok(())
}
}

As they refactor their system, though, they decide they would like to store the provider in a Context type. When they create the struct, they realize that impl Trait syntax doesn't work in that position:


#![allow(unused)]
fn main() {
struct Context {
   provider: impl HttpRequestProvider,
}

async fn process_request(
    mut provider: impl HttpRequestProvider,
    request: Request,
) -> anyhow::Result<()> {
    let cx = Context { 
        provider
    };
    // ...
}
}

Alan looks to Grace, "What do we do now?" Grace says, "Well, we could make an explicit type parameter, but I think a dyn might be easier for us here." Alan says, "Oh, ok, that makes sense." He alters the struct to use a Box<dyn HttpRequestProvider>:


#![allow(unused)]
fn main() {
struct Context {
   provider: Box<dyn HttpRequestProvider>,
}
}

However, this gets a compilation error: "traits that contain async fn are not dyn safe". The compiler does, however, suggest that they check out the experimental dyner crate. The README for the crate advises them to decorate their trait with dyner::make_dyn, and to give a name to use for the "dynamic dispatch" type:


#![allow(unused)]
fn main() {
#[dyner::make_dyn(DynHttpRequestProvider)]
trait HttpRequestProvider {
    async fn request(&mut self, request: Request) -> anyhow::Result<Response>;
}
}

Following the readme, they also modify their Context struct like so:


#![allow(unused)]
fn main() {
struct Context {
   provider: DynHttpRequestProvider<'static>,
}

async fn process_request(
    provider: impl HttpRequestProvider,
    request: Request,
) -> anyhow::Result<()> {
    let cx = Context {
        provider: DynHttpRequestProvider::from(provider),
    };
    // ...
}
}

However, the code doesn't compile just as is. They realize that the Context is only being used inside process_request and so they follow the "time-limited" pattern of adding a lifetime parameter 'me to the context. This represents the period of time in which the context is in use:


#![allow(unused)]
fn main() {
struct Context<'me> {
   provider: DynHttpRequestProvider<'me>,
}

async fn process_request(
    provider: impl HttpRequestProvider,
    request: Request,
) -> anyhow::Result<()> {
    let cx = Context {
        provider: DynHttpRequestProvider::from(provider),
    };
    // ...
}
}

At this point, they are able to invoke provider.request() as normal.

Spawning tasks

As their project expands, Alan and Grace realize that they are going to have to process a number of requests in parallel. They insert some code to spawn off tasks:


#![allow(unused)]
fn main() {
async fn process_all_requests(
    provider: impl HttpRequestProvider + Clone,
    requests: Vec<RequestData>,
) -> anyhow::Result<()> {
    let handles =
        requests
            .into_iter()
            .map(|r| {
                let provider = provider.clone();
                tokio::spawn(async move {
                    process_request(provider, r).await;
                })
            })
            .collect::<Vec<_>>();
    futures::future::join_all(handles).await;
    Ok(())
}
}

However, when they write this, they get an error:

error: future cannot be sent between threads safely
   --> src/lib.rs:21:17
    |
21  |                 tokio::spawn(async move {
    |                 ^^^^^^^^^^^^ future created by async block is not `Send`
    |
note: captured value is not `Send`
   --> src/lib.rs:22:37
    |
22  |                     process_request(provider, r).await;
    |                                     ^^^^^^^^ has type `impl HttpRequestProvider + Clone` which is not `Send`
note: required by a bound in `tokio::spawn`
   --> /playground/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.13.0/src/task/spawn.rs:127:21
    |
127 |         T: Future + Send + 'static,
    |                     ^^^^ required by this bound in `tokio::spawn`
help: consider further restricting this bound
    |
13  |     provider: impl HttpRequestProvider + Clone + std::marker::Send,
    |                                                +++++++++++++++++++

"Ah, right," thinks Alan. "I need to add Send to show that the provider is something we can send to another thread."


#![allow(unused)]
fn main() {
async fn process_all_requests(
    provider: impl HttpRequestProvider + Clone + Send,
    //                                           ^^^^ added this
    requests: Vec<RequestData>,
) {
    ...
}
}

But when he adds that, he gets another error. This one is a bit more complex:

error[E0277]: the future returned by `HttpRequestProvider::request` (invoked on `impl HttpRequestProvider + Clone + Send`) cannot be sent between threads safely
   --> src/lib.rs:21:17
    |
21  |                 tokio::spawn(async move {
    |                 ^^^^^^^^^^^^ `<impl HttpRequestProvider + Clone + Send as HttpRequestProvider>::request` cannot be sent between threads safely
    |
note: future is not `Send` as it awaits another future which is not `Send`
   --> src/lib.rs:35:5
    |
35  |     let response = provider.request(request).await?;
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^ this future is not `Send`
note: required by a bound in `tokio::spawn`
   --> /playground/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.13.0/src/task/spawn.rs:127:21
    |
127 |         T: Future + Send + 'static,
    |                     ^^^^ required by this bound in `tokio::spawn`
help: introduce a `provider: Send` bound to require this future to be send:
    |
13  ~     provider: impl HttpRequestProvider<request: Send> + Clone + Send,
    |

Alan and Grace are a bit puzzled. They decide to go to their friend Barbara, who knows Rust pretty well.

Barbara looks over the error. She explains to them that calling an async function -- even one in a trait -- results in a future. "Yeah, yeah", they say, "we know that." Barbara explains that this means that saying that provider is Send does not necessarily imply that the future resulting from provider.request() is Send. "Ah, that makes sense," says Alan. "But what is this suggestion at the bottom?"

"Ah," says Barbara, "the notation T: Foo<Bar: Send> generally means the same as T: Foo, T::Bar: Send. In other words, it says that T implements Foo and that the associated type Bar is Send."

"Oh, I see," says Alan, "I remember reading now that when we use an async fn, the compiler introduces an associated type with the same name as the function that represents the future that gets returned."

"Right", answers Barbara. "So when you say impl HttpRequestProvider<request: Send> that means that the request method returns a Send value (a future, in this case)."

Alan and Grace change their project as the compiler suggested, and things start to work:


#![allow(unused)]
fn main() {
async fn process_all_requests(
    provider: impl HttpRequestProvider<request: Send> + Clone + Send,
    requests: Vec<RequestData>,
) -> anyhow::Result<()> {
    ...
}
}

Dyner all the things (even non-async things)

As Alan and Grace get used to dyner, they start using it for all kinds of dynamic dispatch, including some code which is not async. It takes getting used to, but dyner has some definite advantages over the builtin Rust functionality, particularly if you use it everywhere. For example, you can use dyner with traits that have self methods and even methods which take and return impl Trait values (so long as those traits also use dyner, and they use the standard naming convention of prefixing the name of the trait with Dyn):


#![allow(unused)]
fn main() {
// The dyner crate re-exports standard library traits, along
// with Dyn versions of them (e.g., DynIterator). You do have
// to ensure that the `Iterator` and `DynIterator` are reachable
// from the same path though:
use dyner::iter::{Iterator, DynIterator};

#[dyner::make_dyn(DynWidgetTransform)]
trait WidgetTransform {
    fn transform(
        mut self, 
        //  ^^^^ not otherwise dyn safe
        w: impl Iterator<Item = Widget>,
        // ^^^^^^^^^^^^^ this would not ordinarily be dyn-safe
    ) -> impl Iterator<Item = Widget>;
    //   ^^^^^^^^^^^^^ this would not ordinarily be dyn-safe
}
}

Dyn trait objects for third-party traits

Using dyner, Alan and Grace are basically unblocked. After a while, though, they hit a problem. One of their dependencies defines a trait that has no dyn equivalent (XXX realistic example?):


#![allow(unused)]
fn main() {
// In crate parser
trait Parser {
    fn parse(&mut self, input: &str);
}
}

They are able to manage this by declaring the "dyn" type themselves. To do so, however, they have to copy and paste the trait definition, which is kind of annoying:


#![allow(unused)]
fn main() {
mod parser {
    dyner::make_dyn_extern! {
        trait parser::Parser {
            fn parse(&mut self, input: &str);
        }
    }
}
}

They can now use crate::parser::{Parser, DynParser} to get the trait and its Dyn equivalent; the crate::parser::Parser is a re-export. "Ah, so that's how dyner re-exports the traits from libstd", Grace realizes.

What's missing?

  • Spawning and needing Send bounds
  • Appendix: how does dyner work?

XXX

  • replacement for where Self: Sized?
    • a lot of those problems go away but
    • we can also offer an explicit #[optimization] annotation:
      • causes the default version to be reproduced for dyn types and exempts the function from all limitations
  • inherent async fn
  • impl A + B
    • DynerAB-- we could even do that, right?
  • https://docs.rs/hyper/0.14.14/hyper/body/trait.HttpBody.html

Phase 2

TODO

Phase 3

TODO

✨ RFC

The RFC exists here in draft form. It will be edited and amended over the course of this initiative. Note that some initiatives produce multiple RFCs.

Until there is an accepted RFC, any feature gates must be labeled as experimental.

When you're ready to start drafting, copy in the template text from the rfcs repository.

Draft RFC: Static async fn in traits

This is a draft RFC that will be submitted to the rust-lang/rfcs repository when it is ready.

Feedback welcome!

Summary

Support async fn in traits that can be called via static dispatch. These will desugar to an anonymous associated type.

Motivation

Async/await allows users to write asynchronous code much more easily than they could before. However, it doesn't play nicely with other core language features that make Rust the great language it is, like traits.

In this RFC we will begin the process of integrating these two features and smoothing over a wrinkle that async Rust users have been working around since async/await stabilized nearly 3 years ago.

Guide-level explanation

You can write async fn in traits and trait impls. For example:


#![allow(unused)]
fn main() {
trait Service {
    async fn request(&self, key: i32) -> Response;
}

struct MyService {
    db: Database
}

impl Service for MyService {
    async fn request(&self, key: i32) -> Response {
        Response {
            contents: self.db.query(key).await.to_string()
        }
    }
}
}

This is useful for writing generic async code.

Currently, if you use an async fn in a trait, that trait is not dyn safe. If you need to use dynamic dispatch combined with async functions, you can use the async-trait crate. We expect to extend the language to support this use case in the future.

Note that if a function in a trait is written as an async fn, it must also be written as an async fn in your implementation of that trait. With the above trait, you could not write this:


#![allow(unused)]
fn main() {
impl Service for MyService {
    fn request(&self, key: i32) -> impl Future<Output = Response> {
        async {
            ...
        }
    }
}
}

Doing so will give you an "expected async fn" error. If you need to do this for some reason, you can use an associated type in the trait:


#![allow(unused)]
fn main() {
trait Service {
    type RequestFut<'a>: Future<Output = Response>
    where
        Self: 'a;
    fn request<'a>(&'a self, key: i32) -> Self::RequestFut<'a>;
}

impl Service for MyService {
    type RequestFut<'a> = impl Future<Output = Response> + 'a
    where
        Self: 'a;
    fn request<'a>(&'a self, key: i32) -> Self::RequestFut<'a> {
        async { ... }
    }
}
}

Note that in the impl we are setting the value of the associated type to impl Future, because async blocks produce unnameable opaque types. The associated type is also generic over a lifetime 'a, which allows it to capture the &'a self reference passed by the caller.

Reference-level explanation

New syntax

We introduce the async fn sugar into traits and impls. No changes to the grammar are needed because the Rust grammar already supports this construction; today, however, async functions in traits result in compilation errors in later phases of the compiler.


#![allow(unused)]
fn main() {
trait Example {
    async fn method(&self);
}

impl Example for ExampleType {
    async fn method(&self) { ... }
}
}

Semantic rules

When an async function is present in a trait or trait impl...

The trait is not considered dyn safe

This limitation is expected to be lifted in future RFCs.

Both the trait and its impls must use async syntax

It is not legal to use an async function in a trait and a "desugared" function in an impl.

Equivalent desugaring

Trait

Async functions in a trait desugar to an associated function that returns a generic associated type (GAT):

  • Just as with ordinary async functions, the GAT has a generic parameter for every generic parameter that appears on the fn, along with implicit lifetime parameters.
  • The GAT has the complete set of where clauses that appear on the fn, including any implied bounds.
  • The GAT is "anonymous", meaning that its name is an internal symbol that cannot be referred to directly. (In the examples, we will use $ to represent this name.)

#![allow(unused)]
fn main() {
trait Example {
    async fn method<P0..Pn>(&self)
    where
        WC0..WCn;
}

// Becomes:

trait Example {
    type $<'me, P0..Pn>: Future<Output = ()>
    where
        WC0..WCn, // Explicit where clauses
        Self: 'me; // Implied bound from `&self` parameter

    fn method<P0..Pn>(&self) -> Self::$<'_, P0..Pn>
    where
        WC0..WCn;
}
}

async fns that appear in impls are desugared in the same general way as existing async functions, but with some slight differences:

  • The value of the associated type $ is equal to an impl Future type, rather than the impl Future being the return type of the function
  • The function's declared return type is Self::$<...> with all the appropriate generic parameters

Otherwise, the desugaring is the same. The body of the function becomes an async move { ... } block that both (a) captures all parameters and (b) contains the body expression.


#![allow(unused)]
fn main() {
impl Example for ExampleType {
    async fn method(&self) {
        ...
    }
}

impl Example for ExampleType {
    type $<'me, P0..Pn> = impl Future<Output = ()> + 'me
    where
        WC0..WCn, // Explicit where clauses
        Self: 'me; // Implied bound from `&self` parameter

    fn method<P0..Pn>(&self) -> Self::$<'_, P0..Pn> {
        async move { ... }
    }
}
}

Rationale and alternatives

Why are we adding this RFC now?

This RFC represents the least controversial addition to async/await that we could add right now. It was not added before due to limitations in the compiler that have now been lifted – namely, the lack of support for Generic Associated Types and Type Alias Impl Trait.

Why are the resulting traits not dyn safe?

Supporting async fn and dyn is a complex topic -- you can read the details on the dyn traits page of the async fundamentals evaluation doc.

Can we add support for dyn later?

Yes, nothing in this RFC precludes us from making traits containing async functions dyn safe, presuming that we can overcome the obstacles inherent in the design space.

What are users using today and why don't we just do that?

Users in the ecosystem have worked around the lack of support for this feature with the async-trait proc macro, which desugars into Box<dyn Future>s instead of anonymous associated types. This has the disadvantage of requiring users to use Box<dyn> along with all the performance implications of that, which prevent some use cases. It is also not suitable for users like embassy, which aim to support the "no-std" ecosystem.

Will anyone use the async-trait crate once this RFC lands?

The async-trait crate will continue to be useful after this RFC, because it allows traits to remain dyn-safe. That lack of dyn-safety is a limitation in the current design that we plan to address in the future.

Prior art

The async-trait crate

The most common way to use async fn in traits is to use the async-trait crate. This crate takes a different approach to the one described in this RFC. Async functions are converted into ordinary trait functions that return Box<dyn Future> rather than using an associated type. This means that the resulting traits are dyn safe and avoids a dependency on generic associated types, but it also has two downsides:

  • Requires a box allocation on every trait function call; while this is often no big deal, it can be prohibitive for some applications.
  • Requires the trait to state up front whether the resulting futures are Send or not. The async-trait crate defaults to Send and users write #[async_trait(?Send)] to disable this default.
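
For concreteness, here is a simplified sketch of the kind of signature that lowering produces (not the exact macro output):


use std::future::Future;
use std::pin::Pin;

pub struct Response;

// What you write with the async-trait crate:
//
//     #[async_trait]
//     trait Service {
//         async fn request(&self, key: i32) -> Response;
//     }
//
// Roughly what it becomes: an ordinary trait method returning a boxed future,
// with `Send` included because that is the crate's default.
pub trait Service {
    fn request<'a>(
        &'a self,
        key: i32,
    ) -> Pin<Box<dyn Future<Output = Response> + Send + 'a>>;
}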

Since the async function support in this RFC means that traits are not dyn safe, we do not expect it to completely displace uses of the #[async_trait] crate.

The real-async-trait crate

The real-async-trait crate lowers async fn to use GATs and impl Trait, roughly as described in this RFC.

Unresolved questions

  • What parts of the design do you expect to resolve through the RFC process before this gets merged?
  • What parts of the design do you expect to resolve through the implementation of this feature before stabilization?
  • What related issues do you consider out of scope for this RFC that could be addressed in the future independently of the solution that comes out of this RFC?

Future possibilities

Dyn compatibility

It is not a breaking change for traits to become dyn safe. We expect to make traits with async functions dyn safe, but doing so requires overcoming a number of interesting challenges, as described in the async fundamentals evaluation doc.

Impl trait in traits

The impl trait initiative is expecting to propose "impl trait in traits" (see the explainer for a brief summary). This RFC is compatible with the proposed design.

Ability to name the type of the returned future

This RFC does not propose any means to name the future that results from an async fn. That is expected to be covered in a future RFC from the impl trait initiative; you can read more about the proposed design in the explainer.

Design discussions

This directory hosts notes on important design discussions along with their resolutions. In the table of contents, you will find the overall status:

  • ✅ -- Settled! Input only needed if you have identified a fresh consideration that is not covered by the write-up.
  • 💬 -- Under active discussion. Check the write-up, which may contain a list of questions or places where feedback is desired.
  • 💤 -- Paused. Not under active discussion, but we may be updating the write-up from time to time with details.

Static async fn in traits

Impact

  • Able to write async fn in traits and impls and use them in statically dispatched contexts
  • Able to easily declare that T: Trait + Send where "every async fn in Trait returns a Send future"

Design notes

Support async fn syntax in traits.

The core idea is that it desugars into impl trait in traits:


#![allow(unused)]
fn main() {
trait SomeTrait {
    async fn foo(&mut self);
}

// becomes:

trait SomeTrait {
    fn foo(&mut self) -> impl Future<Output = ()> + '_;
}
}

Naturally it should also work in an impl:


#![allow(unused)]
fn main() {
impl SomeTrait for SomeType {
    async fn foo(&mut self) { ... }
}
}

For async functions in traits to be useful, it is important that traits containing async fn be dyn-safe, which introduces a number of challenges that we have to overcome.

Frequently asked questions

Can users easily bound those GATs with Send, maybe even in the trait definition?

  • People are likely to want to say "I want every future produced by this trait to be Send", and right now that is quite tedious.
  • We need a way to do this.
  • This applies equally to other "-> impl Trait in trait" scenarios.

What about "dyn" traits?

  • See the sections on "inline" and "dyn" async fn in traits below!

impl Trait in traits

This effort is part of the impl trait initiative. Some notes are kept here as a summary.

Summary

Requires

Design notes

Support -> impl Trait (existential impl trait) in traits. The core idea is to desugar it into a (possibly generic) associated type:


#![allow(unused)]
fn main() {
trait SomeTrait {
    fn foo(&mut self) -> impl Future<Output = ()> + '_;
}

// becomes something like:
//
// Editor's note: The name of the associated type is under debate;
// it may or may not be something user can name, though they should
// have *some* syntax for referring to it.

trait SomeTrait {
    type Foo<'me>: Future<Output = ()> + 'me
    where
        Self: 'me;

    fn foo(&mut self) -> Self::Foo<'_>;
}
}

We also need to support -> impl Trait in impls, in which case the body desugars to a "type alias impl trait":


#![allow(unused)]
fn main() {
impl SomeTrait for SomeType {
    fn foo(&mut self) -> impl Future<Output = ()> + '_ {

    }
}

// becomes something using "type alias impl Trait", like this:

impl SomeTrait for SomeType {
    type Foo<'me> = impl Future<Output = ()> + 'me
    where
        Self: 'me;

    fn foo(&mut self) -> Self::Foo<'_> {
        ...
    }
}
}

Frequently asked questions

What is the name of that GAT we introduce?

  • I called it Foo here, but that's somewhat arbitrary, perhaps we want to have some generic syntax for naming the method?
  • Or for getting the type of the method.
  • This problem applies equally to other "-> impl Trait in trait" scenarios.
  • Exploration doc

Dyn async trait

Impact

  • Traits that contain async fn or impl trait in traits can still be dyn safe
  • Costs like boxing of futures are limited to code that uses dyn Trait and not to all users of the trait
  • Reasonable defaults around things like Send + Sync and what kind of boxing is used
  • Ability to customize those defaults for individual traits or on a crate-wide or module-wide basis

Requires

Design notes

  • Permit a trait TheTrait containing async fn or impl trait in traits to be used with dyn TheTrait, at least if other criteria are met.
  • Do not require annoying annotations.
  • Permit the user to select, for TheTrait, how the futures will be boxed or otherwise represented, which would permit us to use Box or potentially other types like SmallBox etc.
  • User should also be able to control whether the resulting futures are assumed to be send.

Older notes

The most basic desugaring of async fn in traits will make the trait not dyn-safe. "Inline" async fn in traits is one way to circumvent that, but it's not suitable for all traits that must be dyn-safe. There are other efficient options:

  • Return a Box<dyn Async<...>> -- but then we must decide if it will be Send, right? And we'd like to only do that when using the trait as a dyn Trait. Plus it is not compatible with no-std (it is compatible with alloc).
    • This comes down to needing some form of opt-in.

This concern applies equally to other "-> impl Trait in trait" scenarios.

We have looked at revising how "dyn traits" are handled more generally in the lang team on a number of occasions, but this meeting seems particularly relevant. In that meeting we were discussing some soundness challenges with the existing dyn trait setup and discussing how some of the directions we might go would enable folks to write their own impl Trait for dyn Trait impls, thus defining for themselves how the mapping from Trait to dyn Trait works. This seems like a key piece of the solution.

One viable route might be:

  • Traits using async fn are not, by default, dyn safe.
  • You can declare how you want it to be dyn safe:
    • #[repr(inline)]
    • or #[derive(dyn_async_boxed)] or some such
      • to take an #[async_trait]-style approach
    • It would be nice if users can declare their own styles. For example, Matthias247 pointed out that the Box used to allocate can be reused in between calls for increased efficiency.
  • It would also be nice if there's an easy, decent default -- maybe you don't even have to opt-in to it if you are not in no_std land.

Frequently asked questions

What are the limitations around allocation and no-std code?

"It's complicated". A lot of no-std code does have an allocator (it depends on alloc), though it may require fallible allocation, or permit allocation of fixed quantities (e.g., only at startup, or so long as it can be known to be O(1)).

Dyn trait

Impact

  • Soundness holes relating to dyn Trait are closed.
  • The semver implications of whether a trait is "dyn-safe or not" are clear.
  • More kinds of traits are dyn-safe.
  • Easily able to have a "dynamically dispatched core" with helper methods.
  • Users are able to control the "adaptation" from a statically known type (T: Trait) into a dyn Trait.

Design notes

Soundness holes

FIXME-- list various issues here :)

Semver implications

Today, the compiler automatically determines whether a trait is "dyn-safe". This means that otherwise legal additions to the trait (such as new methods with default implementations) can silently change whether the trait is dyn-safe, which has semver implications that are easy to overlook.

More kinds of traits are dyn-safe

Currently dyn-safe traits exclude a lot of functionality, such as generic methods. We may be able to lift some of those restrictions.

Easily able to have a "dynamically dispatched core" with helper methods

There is a common pattern with e.g. Iterator where there is a dynamically dispatched "core method" (fn next()) and then a variety of combinators and helper methods that use where Self: Sized to side-step dyn-safety checks. These methods often involve generics. We should make this pattern easier and more obvious, and (ideally) make it work better -- e.g., by having those methods also available on dyn Trait receivers (which seems fundamentally possible).
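
This pattern is already expressible today. The sketch below is our own example (not from any existing API); it shows a dyn-safe core method plus a generic helper that opts out of dynamic dispatch via where Self: Sized.


trait Producer {
    // Dynamically dispatched "core method".
    fn produce(&mut self) -> Option<u32>;

    // Generic helper; `where Self: Sized` keeps it out of the vtable so the
    // trait as a whole stays dyn-safe.
    fn produce_while<F>(&mut self, mut keep: F) -> Vec<u32>
    where
        Self: Sized,
        F: FnMut(u32) -> bool,
    {
        let mut out = Vec::new();
        while let Some(x) = self.produce() {
            if !keep(x) {
                break;
            }
            out.push(x);
        }
        out
    }
}

struct Counter(u32);

impl Producer for Counter {
    fn produce(&mut self) -> Option<u32> {
        self.0 += 1;
        Some(self.0)
    }
}

fn main() {
    // Dynamic dispatch: only the core method is callable through `dyn Producer`.
    let mut c = Counter(0);
    let p: &mut dyn Producer = &mut c;
    assert_eq!(p.produce(), Some(1));

    // Static dispatch: the helper is available as well.
    let mut c2 = Counter(0);
    assert_eq!(c2.produce_while(|x| x < 3), vec![1, 2]);
}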

Adaptation

In the case of async Rust, given a trait Foo that contains async fn methods, we wish to be able to have the user write dyn Foo without having to specify the values of the associated types that contain the future types for those methods. Consider the fully desugared example:


#![allow(unused)]
fn main() {
trait Foo {
    type Method<..>: Future;
    fn method() -> Self::Method<..>;
}
}

Roughly speaking we wish to be able to supply an impl like


#![allow(unused)]
fn main() {
impl Foo for dyn Foo {
    type Method<..> = Box<dyn Future<..>>;
    fn method() -> Self::Method {
        // call, via vtable, a shim that will create the `Box`
        // (or whichever smart pointer is desired)
    }
}
}

Ideally, this would be a general capability that users can use to control the adaptation of "known types" to dyn types for other traits.

Async drop

Impact

  • Able to create types (database connections etc) that perform async operations on cleanup
  • Able to detect when such types are dropped synchronously
  • Able to identify the await points that result from async cleanup if needed

Design notes

We can create an AsyncDrop variant of the Drop trait that contains an async fn:


#![allow(unused)]
fn main() {
impl AsyncDrop for MyType {
    async fn drop(&mut self) {
        ...
    }
}
}

Like Drop, the AsyncDrop trait must be implemented for all values of its self-type.
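
For reference, the corresponding rule for Drop today is that the impl must cover the type exactly as declared; a quick sketch:


struct Wrapper<T>(T);

// OK: this impl covers every `Wrapper<T>`.
impl<T> Drop for Wrapper<T> {
    fn drop(&mut self) {
        // ... cleanup ...
    }
}

// Rejected by the compiler: a `Drop` impl may not apply to only some values
// of the type, so `impl Drop for Wrapper<u8>` would be an error.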

Async drop glue

Within async functions, when we drop a value, we will invoke "async drop glue" instead of "drop glue". "Async drop glue" works in the same basic way as "drop glue", except that it invokes AsyncDrop where appropriate (and may suspend):

  • The async drop glue for a type T first executes the AsyncDrop method
    • If T has no AsyncDrop impl, then the glue executes the synchronous Drop impl
      • If T has no Drop impl, then this is a no-op
  • The async drop glue then recursively "async drops" all fields of T
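
That recursion order mirrors today's synchronous drop glue. The runnable example below (our own) shows the ordering that the async version would follow, with awaits inserted at each step.


struct Connection(&'static str);

impl Drop for Connection {
    fn drop(&mut self) {
        println!("dropping connection {}", self.0);
    }
}

struct Pool {
    a: Connection,
    b: Connection,
}

impl Drop for Pool {
    fn drop(&mut self) {
        println!("dropping the Pool itself");
    }
}

fn main() {
    let _pool = Pool { a: Connection("a"), b: Connection("b") };
    // Drop glue runs `Pool::drop` first, then the glue for each field in
    // declaration order:
    //   dropping the Pool itself
    //   dropping connection a
    //   dropping connection b
}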

Auto traits

Rust presently assumes all types are droppable. Consider a function foo:


#![allow(unused)]
fn main() {
async fn foo<T>(x: T) {}
}

Here, we will drop x when foo returns, but we do not know whether T implements AsyncDrop or not, and we won't know until monomorphization. However, to know whether the resulting future for foo(x) is Send, we have to know whether the code that drops x will be Send. So we must come up with a way to know that T: Send implies that the async drop future for T is Send.

Explicit async drop

We should have a std::mem::async_drop analogous to std::mem::drop:


#![allow(unused)]
fn main() {
async fn async_drop<T>(x: T) { }
}

Implicit await points

When you run async drop glue, there is an implicit await point. Consider this example:


#![allow(unused)]
fn main() {
async fn foo(dbc: DatabaseConnection) -> io::Result<()> {
    let data = socket().read().await?;
    dbc.write(data).await?;
    Ok(())
}
}

Here, presuming that DatabaseConnection implements AsyncDrop, there are actually a number of async drops occurring:


#![allow(unused)]
fn main() {
async fn foo(dbc: DatabaseConnection) -> io::Result<()> {
    let data = match socket().read().await {
        Ok(v) => v,
        Err(e) => {
            std::mem::async_drop(dbc).await;
            return Err(e);
        }
    };
    let () = match dbc.write(data).await {
        Ok(()) => (),
        Err(e) => {
            std::mem::async_drop(dbc).await;
            return Err(e);
        }
    };
    std::mem::async_drop(dbc).await;
    Ok(())
}
}

As this example shows, there are important ergonomic benefits to implicit async drop, and it also ensures that async and sync code work in analogous ways. However, implicit await points can be a hazard for some applications, where it is important to identify all await points explicitly (for example, authors of embedded applications use await points to reason about what values will be stored in the resulting future vs the stack of the poll function). To further complicate things, async drop doesn't only execute at the end of a block or after an "abrupt" expression like ?: it can also execute at the end of any statement that creates temporary values.

The best solution here is unclear. We could have an "allow-by-default" lint encouraging explicit use of async_drop, but as the code above shows, the result may be highly unergonomic (also, imagine how it looks as the number of variables requiring async-drop grows).

Another option is to target the problem from another angle, for example by adding lints to identify when large values are stored in a future or on the stack, or to allow developers to tag local variables that they expect to be stored on the stack, and have the compiler warn them if this turns out to not be true. Users could then choose how to resolve the problem (for example, by shortening the lifetime of the value so that it is not live across an await).

Running destructors concurrently

It's often the case that at the end of a function or scope, multiple destructors are run. In general the order (which is the reverse order of initialization) matters, since one local could borrow from another, or there is some other logical dependency between them.

However, in some cases the order might not matter at all. In async, it would be possible to run destructors for multiple locals concurrently. As an example, we could mark the destructors like this:


#![allow(unused)]
fn main() {
#[concurrent]
impl AsyncDrop for Foo {
    async fn drop(&mut self) { ... }
}
}

Here, #[concurrent] means that Foo does not take logical dependencies or dependents with other values, and it is safe to drop concurrently. (The compiler would still enforce memory safety, of course.)

In these cases, however, it's usually enough to impl synchronous Drop and spawn a task for the "real" destructor. That keeps the language simple, though it's less convenient to write.
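
Here is a hedged sketch of that workaround, assuming a tokio runtime is driving the program; the type and function names are our own illustration.


use std::mem;

struct Connection {
    name: String,
}

async fn close_remote_side(name: String) {
    // ... the real asynchronous cleanup would go here ...
    let _ = name;
}

impl Drop for Connection {
    fn drop(&mut self) {
        let name = mem::take(&mut self.name);
        // Hand the async cleanup to the runtime. Note: tokio::spawn panics if
        // no runtime is active, so this only works from within async code.
        tokio::spawn(close_remote_side(name));
    }
}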

Preventing sync drop

It is easy enough to make async-drop be used, but it is currently not possible to prevent sync drop, even from within an async setting. Consider an example such as the following:


#![allow(unused)]
fn main() {
async fn foo(dbc: DatabaseConnection) -> io::Result<()> {
    drop(dbc);
    Ok(())
}
}

The compiler could however lint against invoking (or defining!) synchronous functions that take ownership of values whose types implement AsyncDrop. This would catch code like the case above. We may have to tune the lint to avoid false warnings. Note that it is important to lint both invocation and definition sites because the synchronous function may be generic (like drop, in fact).

The question remains: what should code that implements AsyncDrop do if synchronous Drop is invoked? One option is panic, but that is suboptimal, as panic from within a destructor is considered bad practice. Another option is to simply abort. A final option is to have some form of portable "block-on" that would work, but this is effectively the (as yet unsolved) async-sync-async sandwich problem.

Preventing this 'properly' would require changing fundamental Rust assumptions (e.g., by introducing the ?Drop trait). While such a change would make Rust more expressive, it also carries complexity and composition hazards, and would require thorough exploration. It is also a step that could be taken later (although it would require some form of explicit impl !Drop opt-in by types to avoid semver breakage).

Supporting both sync and async drop

It should perhaps be possible to support both sync and async drop. It is not clear though if there are any real use cases for this.

Async closures

Impact

  • Able to create async closures that work like ordinary closures but which can await values.
  • Analogous traits to Fn, FnMut, FnOnce, etc
  • Reconcile async blocks and async closures

Design notes

Async functions need their own traits, analogous to Fn and friends:


#![allow(unused)]
fn main() {
#[repr(async_inline)]
trait AsyncFnOnce<A> {
    type Output;

    // Uh-oh! You can't encode these as `async fn` using inline async functions!
    async fn call_once(self, args: A) -> Self::Output;
}

#[repr(async_inline)]
trait AsyncFnMut<A>: AsyncFnOnce<A> {
    async fn call_mut(&mut self, args: A) -> Self::Output;
}

#[repr(async_inline)]
trait AsyncFn<A>: AsyncFnMut<A> {
    // Uh-oh! You can't encode these as `async fn` using inline async functions!
    async fn call(&self, args: A) -> Self::Output;
}
}

Some notes:

AsyncFnOnce is almost the same as Future/Async -- both represent, effectively, a future that can be driven exactly once. The difference is that your type can distinguish statically between the uncalled state and the persistent state after being called, if you wish, by using separate types for each. This can be useful for situations where an async fn is Send up until the point it is called, at which point it creates inner state that is not Send.

The concept of AsyncFn is clearer, but it requires storing the state externally to make sense: how else could there be multiple parallel executions?

😕 Frequently asked questions

This page lists frequently asked questions about the design. It often redirects to the other pages on the site.

What is the goal of this initiative?

See the Charter.

Who is working on it?

See the Charter.