Project goals

This repo tracks the effort to set and track goals for the Rust project.

Current goal period (2024h2)

There are 26 total goals established for the current goal period, which runs until the end of 2024. Of these, 3 were declared as flagship goals:

Next goal period (2025h1)

The next goal period will be 2025H1, running from Jan 1 to Jun 30. We are currently in the process of assembling goals. Click here to see the current list. If you'd like to propose a goal, instructions can be found here.

About the process

Want to learn more? Check out some of the following:

Rust project goals for 2024H2

Status: Accepted. RFC #3672 has been accepted, establishing 26 total Rust project goals for 2024H2.

Want to learn more?

Goals

This page lists the 26 project goals accepted for 2024h2.

Flagship goals

Flagship goals represent the goals expected to have the broadest overall impact.

Other goals

This is the full list of goals (including flagship).

Invited goals. Some goals here are marked with the Help wanted badge for their point of contact. These goals are called "invited goals". Teams have reserved some capacity to pursue these goals but until an appropriate owner is found they are only considered provisionally accepted. If you are interested in serving as the owner for one of these invited goals, reach out to the point of contact listed in the goal description.

Bring the Async Rust experience closer to parity with sync Rust

Metadata
Short title: Async
Point of contact: Tyler Mandry
Teams: lang, libs, libs-api
Status: Flagship
Tracking issue: rust-lang/rust-project-goals#105

Summary

Over the next six months, we will deliver several critical async Rust building block features.

Motivation

This goal represents the next step on a multi-year program aiming to raise the experience of authoring "async Rust" to the same level of quality as "sync Rust". Async Rust is a crucial growth area, with 52% of the respondents in the 2023 Rust survey indicating that they use Rust to build server-side or backend applications.

The status quo

Async Rust performs great, but can be hard to use

Async Rust is the most common Rust application area according to our 2023 Rust survey. Rust is a great fit for networked systems, especially in the extremes:

  • Rust scales up. Async Rust reduces cost for large dataplanes because a single server can serve high load without significantly increasing tail latency.
  • Rust scales down. Async Rust can be run without requiring a garbage collector or even an operating system, making it a great fit for embedded systems.
  • Rust is reliable. Networked services run 24/7, so Rust's "if it compiles, it works" mantra means fewer unexpected failures and, in turn, fewer pages in the middle of the night.

Despite async Rust's popularity, using async I/O makes Rust significantly harder to use. As one Rust user memorably put it, "Async Rust is Rust on hard mode." Several years back the async working group collected a number of "status quo" stories as part of authoring an async vision doc. These stories reveal a number of characteristic challenges:

First focus: language parity, interop traits

Based on the above analysis, the Rust org has been focused on driving async/sync language parity, especially in those areas that block the development of a rich ecosystem. The biggest progress took place in Dec 2023, when async fn in traits and return position impl trait in trait were stabilized. Other work includes documenting async usability challenges in the original async vision doc, stabilizing helpers like std::future::poll_fn, and polishing and improving async error messages.

The need for an aligned, high judgment group of async experts

Progress on async-related issues within the Rust org has been slowed due to lack of coherence around a vision and clear steps. General purpose teams such as lang and libs-api have a hard time determining how to respond to, e.g., particular async stabilization requests, as they lack a means to judge whether any given decision is really the right step forward. Theoretically, the async working group could play this role, but it has not really been structured with this purpose in mind. For example, the criteria for membership are loose, and the group would benefit from more representation from async ecosystem projects. This is an example of a larger piece of Rust "organizational debt", where the term "working group" has been used for many different purposes over the years.

The next six months

In the second half of 2024 we are planning on the following work items. The following three items are what we consider to be the highest priority, as they do the most to lay a foundation for future progress (and they themselves are listed in priority order):

We have also identified three "stretch goals" that we believe could be completed:

Resolve the "send bound" problem

Although async functions in traits were stabilized, there is currently no way to write a generic function that requires impls where the returned futures are Send. This blocks the use of async functions in traits in some core ecosystem crates, such as tower, which want to work across all kinds of async executors. This problem is called the "send bound" problem, and there has been extensive discussion of the various ways to solve it. RFC #3654 has been opened, proposing one solution and describing why that path is preferred. Our goal for the year is to adopt some solution on stable.

A solution to the send bound problem should include a migration path for users of the trait_variant crate, if possible. For RFC #3654 (RTN), this would require implementable trait aliases (see RFC #3437).
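To make the problem concrete, here is a minimal sketch. The `HealthCheck`/`SendHealthCheck` traits and the `Ping` type below are made-up illustrations, and the `SendHealthCheck` variant shows the stable workaround in the style of the trait_variant crate; the RTN bound from RFC #3654 appears only in a comment because it is not yet stable syntax.

```rust
#![allow(dead_code)]
use std::future::Future;

// With async fn in traits (stable since 1.75), a generic caller has
// no stable syntax to require that the future returned by `check`
// is `Send`:
trait HealthCheck {
    async fn check(&mut self) -> bool;
}

// Stable workaround, in the style of the `trait_variant` crate: a
// variant trait whose desugared signature promises `Send`.
// RFC #3654 (return type notation) would instead let callers write
// a bound like `H: HealthCheck<check(..): Send>` on the original trait.
trait SendHealthCheck {
    fn check(&mut self) -> impl Future<Output = bool> + Send;
}

struct Ping;

impl SendHealthCheck for Ping {
    fn check(&mut self) -> impl Future<Output = bool> + Send {
        async { true }
    }
}

// This bound guarantees a `Send` future, so the result could be
// handed to a multi-threaded executor's `spawn`.
fn assert_spawnable<H: SendHealthCheck>(h: &mut H) {
    fn is_send<T: Send>(_: &T) {}
    is_send(&h.check());
}

fn main() {
    assert_spawnable(&mut Ping);
}
```

The duplication between the two traits is exactly the migration burden the RFC text mentions: today each trait needs a `Send`-promising variant, whereas RTN would express the requirement at the use site.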

Reorganize the Async WG

We plan to reorganize the async working group into a structure that will better serve the project's needs, especially when it comes to aligning around a clear async vision. In so doing, we will help "launch" the async working group out of the launchpad umbrella team and into a more permanent structure.

Despite its limitations, the async working group serves several important functions for async Rust that need to continue:

  • It provides a forum for discussion around async-related topics, including the #async-wg zulip stream as well as regular sync meetings. These forums don't necessarily get participation by the full set of voices that we would like, however.
  • It owns async-related repositories, such as the sources for the async Rust book (in dire need of improvement), arewewebyet, and the futures-rs crate. Maintenance of these sites has varied though and often been done by a few individuals acting largely independently.
  • It advises the more general teams (typically lang and libs-api) on async-related matters. The authoring of the (mildly dated) async vision doc took place under the auspices of the working group, for example. However, the group lacks decision making power and doesn't have a strong incentive to coalesce behind a shared vision, so it remains more a "set of individual voices" that does not provide the general purpose teams with clear guidance.

We plan to propose one or more permanent teams to meet this same set of needs. The expectation is that these will be subteams under the lang and libs top-level teams.

Stabilize async closures

Building ergonomic APIs in async is often blocked by the lack of async closures. Async combinator-like APIs today typically make use of an ordinary Rust closure that returns a future, such as the filter API from StreamExt:

fn filter<Fut, F>(self, f: F) -> Filter<Self, Fut, F>
where
    F: FnMut(&Self::Item) -> Fut,
    Fut: Future<Output = bool>,
    Self: Sized;

This approach however does not allow the closure to access variables captured by reference from its environment:

let mut accept_list = vec!["foo", "bar"];
stream
    .filter(|s| async { accept_list.contains(s) })

The reason is that data captured from the environment is stored in self. But the signature for sync closures does not permit the return value (Self::Output) to borrow from self:

trait FnMut<A>: FnOnce<A> {
    fn call_mut(&mut self, args: A) -> Self::Output;
}

To support natural async closures, a trait is needed where call_mut is an async fn, which would allow the returned future to borrow from self and hence modify the environment (e.g., accept_list, in our example above). Or, desugared, something that is equivalent to:

trait AsyncFnMut<A>: AsyncFnOnce<A> {
    fn call_mut<'s>(
        &'s mut self,
        args: A
    ) -> impl Future<Output = Self::Output> + use<'s, A>;
    //                                        ^^^^^^^^^^ note that this captures `'s`
    //
    // (This precise capturing syntax is unstable and covered by
    // rust-lang/rust#123432.)
}

The goal for this year is to:

  • support some "async equivalent" to Fn, FnMut, and FnOnce bounds
    • this should be usable in all the usual places
  • support some way to author async closure expressions

These features should be sufficient to support methods like filter above.

The details (syntax, precise semantics) will be determined via experimentation and subject to RFC.

Stabilize trait for async iteration

Stretch Goal

There has been extensive discussion about the best form of the trait for async iteration (sometimes called Stream, sometimes AsyncIter, and now being called AsyncGen). We believe the design space has been sufficiently explored that it should be possible to author an RFC laying out the options and proposing a specific plan.

Release a proc macro for dyn dispatch with async fn in traits

Stretch Goal

Currently we do not support using dyn with traits that use async fn or -> impl Trait. This can be solved without language extensions through the use of a proc macro. This should remove the need for the use of the async_trait proc macro in new enough compilers, giving all users the performance benefits of static dispatch without giving up the flexibility of dynamic dispatch.
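The kind of desugaring such a proc macro could generate can be sketched on stable Rust today: the async method is rewritten to return a boxed future, which makes the trait dyn-compatible. The `Fetch` trait, `Fixed` type, and the hand-rolled `block_on` below are all illustrative assumptions, not the macro's actual output.

```rust
use std::future::Future;
use std::pin::{pin, Pin};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Sketch: an "async" trait manually desugared to return a boxed
// future, so that `dyn Fetch` works on stable Rust.
trait Fetch {
    fn fetch(&self) -> Pin<Box<dyn Future<Output = u32> + Send + '_>>;
}

struct Fixed(u32);

impl Fetch for Fixed {
    fn fetch(&self) -> Pin<Box<dyn Future<Output = u32> + Send + '_>> {
        Box::pin(async move { self.0 })
    }
}

// Minimal single-future executor, just enough to run the example.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw_waker() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            raw_waker()
        }
        fn no_op(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    // Dynamic dispatch works because `fetch` returns a concrete
    // (boxed) type rather than an opaque `impl Future`.
    let obj: &dyn Fetch = &Fixed(42);
    assert_eq!(block_on(obj.fetch()), 42);
}
```

The boxing is the cost that a native language feature would eventually avoid; the proc macro approach accepts it in exchange for working on today's compilers.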

Complete async drop experiments

Not funded

Authors of async code frequently need to call async functions as part of resource cleanup. Because Rust today only supports synchronous destructors, this cleanup must take place using alternative mechanisms, forcing a divergence between sync Rust (which uses destructors to arrange cleanup) and async Rust. MCP 727 proposed a series of experiments aimed at supporting async drop in the compiler. We would like to continue and complete those experiments. These experiments are aimed at defining how support for async drop will be implemented in the compiler and some possible ways that we could modify the type system to support it (in particular, one key question is how to prevent types whose drop is async from being dropped in sync code).
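The divergence described above can be seen in the status-quo workaround: cleanup moves into an explicit async method that callers must remember to invoke, because `Drop::drop` cannot await. The `Connection` type and `block_on` helper below are made-up illustrations of that pattern, not part of the MCP 727 experiments.

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Status-quo workaround: an explicit async teardown method.
struct Connection {
    closed: bool,
}

impl Connection {
    async fn close(mut self) -> bool {
        // Imagine an async flush or shutdown handshake here.
        self.closed = true;
        self.closed
    }
}

impl Drop for Connection {
    fn drop(&mut self) {
        // Sync-only fallback: we can log here, but we cannot await.
        if !self.closed {
            eprintln!("warning: Connection dropped without close().await");
        }
    }
}

// Minimal single-future executor, just enough to run the example.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw_waker() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            raw_waker()
        }
        fn no_op(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    let conn = Connection { closed: false };
    assert!(block_on(conn.close()));
}
```

An async drop facility would let the `close` logic live in a destructor, restoring the "cleanup happens automatically" property that sync Rust enjoys.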

The "shiny future" we are working towards

Our eventual goal is to provide Rust users building on async with

  • the same core language capabilities as sync Rust (async traits with dyn dispatch, async closures, async drop, etc);
  • reliable and standardized abstractions for async control flow (streams of data, error recovery, concurrent execution), free of accidental complexity;
  • an easy "getting started" experience that builds on a rich ecosystem;
  • good performance by default, peak performance with tuning;
  • the ability to easily adopt custom runtimes when needed for particular environments, language interop, or specific business needs.

Design axiom

  • Uphold sync Rust's bar for reliability. Sync Rust famously delivers on the general feeling of "if it compiles, it works" -- async Rust should do the same.
  • We lay the foundations for a thriving ecosystem. The role of the Rust org is to develop the rudiments that support an interoperable and thriving async crates.io ecosystem.
  • When in doubt, zero-cost is our compass. Many of Rust's biggest users are choosing it because they know it can deliver the same performance (or better) than C. If we adopt abstractions that add overhead, we are compromising that core strength. As we build out our designs, we ensure that they don't introduce an "abstraction tax" for using them.
  • From embedded to GUI to the cloud. Async Rust covers a wide variety of use cases and we aim to make designs that can span those differing constraints with ease.
  • Consistent, incremental progress. People are building async Rust systems today -- we need to ship incremental improvements while also steering towards the overall outcome we want.

Ownership and team asks

Here is a detailed list of the work to be done and who is expected to do it. This table includes the work to be done by owners and the work to be done by Rust teams (subject to approval by the team in an RFC/FCP). The overall owners of the async effort (and authors of this goal document) are Tyler Mandry and Niko Matsakis. We have identified owners for subitems below; these may change over time.

Task | Owner(s) or team(s) | Notes
Overall program management | Tyler Mandry, Niko Matsakis |

"Send bound" problem

Task | Owner(s) or team(s) | Notes
Implementation | Michael Goulet | Complete
Author RFC | Niko Matsakis | Complete
RFC decision | Team lang | Complete
Stabilization decision | Team lang |

Async WG reorganization

Task | Owner(s) or team(s) | Notes
Author proposal | |
Org decision | Team libs, lang |

Async closures

Task | Owner(s) or team(s) | Notes
Implementation | | Complete
Author RFC | |
RFC decision | Team lang |
Design meeting | Team lang | 2 meetings expected
Author call for usage | Michael Goulet |
Stabilization decision | Team lang |

Trait for async iteration

Task | Owner(s) or team(s) | Notes
Author RFC | |
RFC decision | Team libs-api, lang |
Design meeting | Team lang | 2 meetings expected
Implementation | |

Dyn dispatch for AFIT

Task | Owner(s) or team(s) | Notes
Implementation | Santiago Pastorino |
Standard reviews | Tyler Mandry |

Async drop experiments

Task | Owner(s) or team(s) | Notes
Author MCP | | Complete
MCP decision | Team compiler | Complete
Implementation work | | Not funded (*)
Design meeting | Team lang | 2 meetings expected
Standard reviews | Team compiler |

(*) Implementation work on async drop experiments is currently unfunded. We are trying to figure out next steps.

Support needed from the project

Agreement from lang, libs, and libs-api to prioritize the items marked Team in the table above.

The expectation is that

  • async closures will occupy 2 design meetings from lang during H2
  • async iteration will occupy 2 design meetings from lang during H2 and likely 1-2 from libs API
  • misc matters will occupy 1 design meeting from lang during H2

for a total of 4-5 meetings from lang and 1-2 from libs API.

Frequently asked questions

Can we really do all of this in 6 months?

This is an ambitious agenda, no doubt. We believe it is possible if the teams are behind us, but things always take longer than you think. We have made sure to document the "priority order" of items for this reason. We intend to focus our attention first and foremost on the high priority items.

Why focus on send bounds + async closures?

These are the two features that together block the authoring of traits for a number of common interop purposes. Send bounds are needed for generic traits like the Service trait. Async closures are needed for rich combinator APIs like iterators.

Why not build in dyn dispatch for async fn in traits?

Async fn in traits do not currently support native dynamic dispatch. We have explored a number of designs for making it work but have not completed all of the language design work needed, and are not currently prioritizing that effort. We do hope to support it via a proc macro this year and extend to full language support later, hopefully in 2025.

Why are we moving forward on a trait for async iteration?

There has been extensive discussion about the best design for the "Stream" or "async iter" trait and we judge that the design space is well understood. We would like to unblock generator syntax in 2025 which will require some form of trait.

The majority of the debate about the trait has been on the topic of whether to base the trait on a poll_next function, as we do today, or to try to make the trait use async fn next, making it more analogous to the Iterator trait (and potentially even making it two versions of a single trait). We will definitely explore forwards-compatibility questions as part of this discussion. Niko Matsakis, for example, still wants to explore maybe-async-like designs, especially for combinator APIs like map. However, we also refer to the design axiom that "when in doubt, zero-cost is our compass" -- we believe we should be able to stabilize a trait that gets the low-level details right, and then design higher-level APIs atop that.
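The two shapes under debate can be sketched as follows. The trait names, the `Counter` type, and the `noop_waker` helper are all illustrative placeholders, not a settled design; the `async fn`-based shape appears only in comments since it depends on async fn in traits ergonomics still being worked out for this use.

```rust
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Shape (1): low-level and poll-based, like today's `futures::Stream`.
trait PollStream {
    type Item;
    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>>;
}

// Shape (2): `async fn`-based, analogous to `Iterator` (sketch only):
//
// trait AsyncNextStream {
//     type Item;
//     async fn next(&mut self) -> Option<Self::Item>;
// }

// A stream yielding 0..limit, written against shape (1).
struct Counter {
    next: u32,
    limit: u32,
}

impl PollStream for Counter {
    type Item = u32;
    fn poll_next(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<u32>> {
        if self.next < self.limit {
            self.next += 1;
            Poll::Ready(Some(self.next - 1))
        } else {
            Poll::Ready(None)
        }
    }
}

// A waker that does nothing, sufficient to drive this always-ready stream.
fn noop_waker() -> Waker {
    fn raw_waker() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            raw_waker()
        }
        fn no_op(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe { Waker::from_raw(raw_waker()) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut counter = Counter { next: 0, limit: 3 };
    let mut collected = Vec::new();
    while let Poll::Ready(Some(v)) = Pin::new(&mut counter).poll_next(&mut cx) {
        collected.push(v);
    }
    assert_eq!(collected, vec![0, 1, 2]);
}
```

Shape (1) exposes the zero-cost polling machinery directly; shape (2) is easier to implement by hand but hides the low-level details, which is why the axiom pushes toward stabilizing something like (1) first.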

Why do you say that we lack a vision, don't we have an async vision doc?

Yes, we do, and the existing document has been very helpful in understanding the space. However, that document was never RFC'd, and we have found that it lacks a certain measure of "authority" as a result. We would like to drive stronger alignment on the path forward so that we can focus more on execution. But doing that is blocked on having a more effective async working group structure (hence the goal to reorganize the async WG).

What about "maybe async", effect systems, and keyword generics?

Keyword generics is an ambitious initiative to enable code that is "maybe async". It has generated significant controversy, with some people feeling it is necessary for Rust to scale and others judging it to be overly complex. One of the reasons to reorganize the async WG is to help us come to a consensus around this point (though this topic is broader than async).

Resolve the biggest blockers to Linux building on stable Rust

Metadata
Short title: Rust-for-Linux
Point of contact: Niko Matsakis
Teams: lang, libs-api, compiler
Status: Flagship
Tracking issue: rust-lang/rust-project-goals#116

Summary

Stabilize the unstable features required by the Rust for Linux project, including:

  • Stable support for RFL's customized ARC type
  • Labeled goto in inline assembler and extended offset_of! support
  • RFL on Rust CI
  • Pointers to statics in constants

Motivation

The experimental support for Rust development in the Linux kernel is a watershed moment for Rust, demonstrating to the world that Rust is indeed capable of targeting all manner of low-level systems applications. And yet today that support rests on a number of unstable features, blocking the effort from ever going beyond experimental status. For 2024H2 we will work to close the largest gaps that block support.

The status quo

The Rust For Linux (RFL) project has been accepted into the Linux kernel in experimental status. The project's goal, as described in the Kernel RFC introducing it, is to add support for authoring kernel components (modules, subsystems) using Rust. Rust would join C as one of only two languages permitted in the Linux kernel. This is a very exciting milestone for Rust, but it's also a big challenge.

Integrating Rust into the Linux kernel means that Rust must be able to interoperate with the kernel's low-level C primitives for things like locking, linked lists, allocation, and so forth. This interop requires Rust to expose low-level capabilities that don't currently have stable interfaces.

The dependency on unstable features is the biggest blocker to Rust exiting "experimental" status. Because unstable features have no kind of reliability guarantee, this in turn means that RFL can only be built with a specific, pinned version of the Rust compiler. This is a challenge for distributions which wish to be able to build a range of kernel sources with the same compiler, rather than having to select a particular toolchain for a particular kernel version.

Longer term, having Rust in the Linux kernel is an opportunity to expose more C developers to the benefits of using Rust. But that exposure can go both ways. If Rust is constantly causing pain related to toolchain instability, or if Rust isn't able to interact gracefully with the kernel's data structures, kernel developers may have a bad first impression that causes them to write off Rust altogether. We wish to avoid that outcome. And besides, the Linux kernel is exactly the sort of low-level systems application we want Rust to be great for!

For deeper background, please refer to these materials:

The next six months

The RFL project has a tracking issue listing the unstable features that they rely upon. After discussion with the RFL team, we identified the following subgoals as the ones most urgent to address in 2024. Closing these issues gets us within striking distance of being able to build the RFL codebase on stable Rust.

  • Stable support for RFL's customized ARC type
  • Labeled goto in inline assembler and extended offset_of! support
  • RFL on Rust CI (done now!)
  • Pointers to statics in constants

Stable support for RFL's customized ARC type

One of Rust's great features is that it doesn't "bake in" the set of pointer types. The common types users use every day, such as Box, Rc, and Arc, are all (in principle) library-defined. But in reality those types enjoy access to some unstable features that let them be used more widely and ergonomically. Since few users wish to define their own smart pointer types, this is rarely an issue and there has been relatively little pressure to stabilize those mechanisms.

The RFL project needs to integrate with the Kernel's existing reference counting types and intrusive linked lists. To achieve these goals they've created their own variant of Arc (hereafter denoted as rfl::Arc), but this type cannot be used as idiomatically as the Arc type found in libstd without two features:

  • The ability to be used in methods (e.g., self: rfl::Arc<Self>), aka "arbitrary self types", specified in RFC #3519.
  • The ability to coerce to dyn types like rfl::Arc<dyn Trait> and then invoke methods on Trait through dynamic dispatch.
    • This requires the use of two unstable traits, CoerceUnsized and DispatchFromDyn, neither of which is close to stabilization.
    • However, RFC #3621 provides for a "shortcut" -- a stable interface using derive that expands to those traits, leaving room to evolve the underlying details.

Our goal for 2024 is to close those gaps, most likely by implementing and stabilizing RFC #3519 and RFC #3621.
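Both capabilities can be demonstrated today with std's Arc, which already enjoys them on stable; RFC #3519 and RFC #3621 are about extending the same treatment to user-defined pointers like rfl::Arc. The `Job` trait and `AddOne` type below are made-up examples.

```rust
use std::sync::Arc;

// std's `Arc` can already be used as a method receiver on stable;
// "arbitrary self types" (RFC #3519) would extend this to
// user-defined smart pointers such as RFL's `rfl::Arc`.
trait Job {
    fn run(self: Arc<Self>) -> u32;
}

struct AddOne(u32);

impl Job for AddOne {
    fn run(self: Arc<Self>) -> u32 {
        self.0 + 1
    }
}

fn main() {
    // Unsized coercion to `Arc<dyn Job>` plus dynamic dispatch: the
    // second capability RFL needs for its own Arc (cf. RFC #3621).
    let job: Arc<dyn Job> = Arc::new(AddOne(41));
    assert_eq!(job.run(), 42);
}
```

Since rfl::Arc cannot name the unstable CoerceUnsized machinery, neither line of `main` currently works for it on stable; closing that gap is the goal.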

Labeled goto in inline assembler and extended offset_of! support

These are two smaller extensions required by the Rust-for-Linux kernel support. Both have been implemented but more experience and/or development may be needed before stabilization is accepted.
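For context, `offset_of!` for plain, directly-named fields was stabilized in Rust 1.77; the extensions RFL wants (e.g. nested fields and enum variants) remained unstable at the time of writing. The `PacketHeader` type below is a made-up example.

```rust
use std::mem::offset_of;

// With repr(C), field offsets follow declaration order, so they are
// predictable: `flags` at 0, `len` after the 4-byte u32.
#[repr(C)]
struct PacketHeader {
    flags: u32,
    len: u16,
}

fn main() {
    assert_eq!(offset_of!(PacketHeader, flags), 0);
    assert_eq!(offset_of!(PacketHeader, len), 4);
}
```

Kernel code leans on such offsets for intrusive data structures, which is why the extended forms matter to RFL.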

RFL on Rust CI

Update: Basic work was completed in PR #125209 by Jakub Beránek during the planning process! We are, however, still including a team ask of T-compiler to make sure we have agreed on the policy regarding breakage due to unstable features.

Rust sometimes integrates external projects of particular importance or interest into its CI. This gives us early notice when changes to the compiler or stdlib impact that project. Some of that breakage is accidental, and CI integration ensures we can fix it before the project is ever impacted. Other breakage is intentional, and CI integration gives us an early way to notify the project so they can get ahead of it.

Because of the potential to slow velocity and incur extra work, the bar for being integrated into CI is high, but we believe that Rust For Linux meets that bar. Given that RFL would not be the first such project to be integrated into CI, part of pursuing this goal should be establishing clearer policies on when and how we integrate external projects into our CI, as we now have enough examples to generalize somewhat.

Pointers to statics in constants

The RFL project has a need to create vtables in read-only memory (unique address not required). The current implementation relies on the const_mut_refs and const_refs_to_static features (representative example). Discussion has identified some questions that need to be resolved but no major blockers.
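A heavily simplified sketch of the use case: a vtable-like constant that stores a reference to a static. The `VTableEntry` type and `COUNTER_NAME` static are illustrative; the reference-to-static in the `const` below is exactly what the const_refs_to_static gate controlled (it was later stabilized, in Rust 1.83).

```rust
static COUNTER_NAME: &str = "rx_packets";

// A vtable-like entry placed in read-only memory; a unique address
// is not required.
struct VTableEntry {
    name: &'static &'static str,
}

// The reference `&COUNTER_NAME` points into a static from const
// context, which `const_refs_to_static` previously rejected.
const ENTRY: VTableEntry = VTableEntry { name: &COUNTER_NAME };

fn main() {
    assert_eq!(*ENTRY.name, "rx_packets");
}
```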

The "shiny future" we are working towards

The ultimate goal is to enable smooth and ergonomic interop between Rust and the Linux kernel's idiomatic data structures.

In addition to the work listed above, there are a few other obvious items that the Rust For Linux project needs. If we can find owners for these this year, we could even get them done as a "stretch goal":

Stable sanitizer support

Support for building and using sanitizers, in particular KASAN.

Custom builds of core/alloc with specialized configuration options

The RFL project builds the stdlib with a number of configuration options to eliminate undesired aspects of libcore (listed in RFL#2). They need a standard way to build a custom version of core as well as agreement on the options that the kernel will continue using.

Code-generation features and compiler options

The RFL project requires various code-generation options. Some of these are related to custom features of the kernel, such as X18 support (rust-lang/compiler-team#748) but others are codegen options like sanitizers and the like. Some subset of the options listed on RFL#2 will need to be stabilized to support being built with all required configurations, but working out the precise set will require more effort.

Ergonomic improvements

Looking further afield, possible future work includes more ergonomic versions of the special patterns for safe pinned initialization or a solution to custom field projection for pinned types or other smart pointers.

Design axioms

  • First, do no harm. If we want to make a good first impression on kernel developers, the minimum we can do is fit comfortably within their existing workflows, so that people not using Rust don't have to do extra work to support it. So long as Linux relies on unstable features, users will have to ensure they have the correct version of Rust installed, which imposes labor on all kernel developers.
  • Don't let perfect be the enemy of good. The primary goal is to offer stable support for the particular use cases that the Linux kernel requires. Wherever possible we aim to stabilize features completely, but if necessary, we can try to stabilize a subset of functionality that meets the kernel developers' needs while leaving other aspects unstable.

Ownership and team asks

Here is a detailed list of the work to be done and who is expected to do it. This table includes the work to be done by owners and the work to be done by Rust teams (subject to approval by the team in an RFC/FCP).

  • The Team badge indicates a requirement where Team support is needed.
Task | Owner(s) or team(s) | Notes
Overall program management | Niko Matsakis, Josh Triplett |

Arbitrary self types v2

Task | Owner(s) or team(s) | Notes
Author RFC | | Complete (RFC #3519)
RFC decision | Team lang | Complete
Implementation | |
Standard reviews | Team compiler |
Stabilization decision | Team lang |

Derive smart pointer

Task | Owner(s) or team(s) | Notes
Author RFC | | RFC #3621
RFC decision | Team lang | Complete
Implementation | Ding Xiang Fei |
Author stabilization report | Ding Xiang Fei |
Stabilization decision | Team lang |

asm_goto

Task | Owner(s) or team(s) | Notes
Implementation | | Complete
Real-world usage in Linux kernel | Alice Ryhl |
Extend to cover full RFC | |
Author stabilization report | |
Stabilization decision | Team lang |

RFL on Rust CI

Task | Owner(s) or team(s) | Notes
Implementation | | Complete (#125209)
Policy draft | |
Policy decision | Team compiler |

Pointers to static in constants

Task | Owner(s) or team(s) | Notes
Stabilization report | |
Stabilization decision | Team lang |

Support needed from the project

  • Lang team:
    • Prioritize RFC and any related design questions (e.g., the unresolved questions)

Outputs and milestones

Outputs

Final outputs that will be produced

Milestones

Milestones you will reach along the way

Frequently asked questions

None yet.

Rust 2024 Edition

Metadata
Point of contactTC
Teamslang, types
StatusFlagship
Tracking issuerust-lang/rust-project-goals#117

Summary

Feature complete status for Rust 2024, with final release to occur in early 2025.

Motivation

RFC #3501 confirmed the desire to ship a Rust edition in 2024, continuing the pattern of shipping a new Rust edition every 3 years. Our goal for 2024 H2 is to stabilize a new edition on nightly by the end of 2024.

The status quo

Editions are a powerful tool for Rust but organizing them continues to be a "fire drill" each time. We have a preliminary set of 2024 features assembled but work needs to be done to marshal and drive (some subset of...) them to completion.

The next six months

The major goal this year is to release the edition on nightly. Top priority items are as follows:

Item | Tracking | RFC | More to do?
Reserve gen keyword | https://github.com/rust-lang/rust/issues/123904 | https://github.com/rust-lang/rust/pull/116447 | No
Lifetime Capture Rules 2024 | https://github.com/rust-lang/rust/issues/117587 | https://github.com/rust-lang/rfcs/pull/3498 | Yes
Precise capturing (dependency) | https://github.com/rust-lang/rust/issues/123432 | https://github.com/rust-lang/rfcs/pull/3617 | Yes
Change fallback to ! | https://github.com/rust-lang/rust/issues/123748 | N/A | Yes

The full list of tracked items can be found using the A-edition-2024 label.
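To give a flavor of one of the items above, here is a small sketch of precise capturing (RFC #3617), the dependency of the Lifetime Capture Rules 2024 work: the `use<..>` bound lists exactly which generic parameters the opaque return type captures. The `borrow_iter` function is a made-up example; the syntax requires a compiler recent enough to have stabilized it (Rust 1.82).

```rust
// `use<'a, T>` states that the returned iterator captures exactly
// the lifetime 'a and the type parameter T, and nothing else.
fn borrow_iter<'a, T>(v: &'a [T]) -> impl Iterator<Item = &'a T> + use<'a, T> {
    v.iter()
}

fn main() {
    let data = vec![1, 2, 3];
    let sum: i32 = borrow_iter(&data).copied().sum();
    assert_eq!(sum, 6);
}
```

Rust 2024's new default capture rules make opaque types capture all in-scope lifetimes, so `use<..>` is the opt-out that keeps signatures precise.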

The "shiny future" we are working towards

The Edition will be better integrated into our release train. Nightly users will be able to "preview" the next edition just like they would preview any other unstable feature. New features that require new syntax or edition-related changes will land throughout the edition period. Organizing the new edition will be a routine part of the release process rather than a recurring fire drill.

Design axioms

The "Edition Axioms" were laid out in RFC #3085:

  • Editions do not split the ecosystem. The most important rule for editions is that crates in one edition can interoperate seamlessly with crates compiled in other editions.
  • Edition migration is easy and largely automated. Whenever we release a new edition, we also release tooling to automate the migration. The tooling is not necessarily perfect: it may not cover all corner cases, and manual changes may still be required.
  • Users control when they adopt the new edition. We recognize that many users, particularly production users, will need to schedule time to manage an Edition upgrade as part of their overall development cycle.
  • Rust should feel like “one language”. We generally prefer uniform behavior across all editions of Rust, so long as it can be achieved without compromising other design goals.
  • Editions are meant to be adopted. We don’t force the edition on our users, but we do feel free to encourage adoption of the edition through other means.

Ownership and team asks

Owner: TC

Task | Owner(s) or team(s) | Notes
RFC decision | Team leadership-council | Complete (RFC #3501)
Stabilization decision | Team lang, types |
Top-level Rust blog post | |

Outputs and milestones

  • Owner: TC

Outputs

  • Edition release complete with
    • announcement blog post
    • edition migration guide

Milestones

Date | Version | Edition stage
2024-10-11 | Branch v1.83 | Go / no go on all items
2024-10-17 | Release v1.82 | Rust 2024 nightly beta
2025-01-03 | Branch v1.85 | Cut Rust 2024 to beta
2025-02-20 | Release v1.85 | Release Rust 2024

Frequently asked questions

None yet.

"Stabilizable" prototype for expanded const generics

Metadata
Point of contactBoxy
Teamstypes
StatusAccepted
Tracking issuerust-lang/rust-project-goals#100

Summary

Experiment with a new min_generic_const_args implementation to address challenges found with the existing approach.

Motivation

min_const_generics was stabilized with the restriction that const-generic arguments may not use generic parameters other than a bare const parameter; e.g., Foo<N> is legal but Foo<{ T::ASSOC }> is not. This restriction is lifted under feature(generic_const_exprs); however, that feature's design is fundamentally flawed and introduces significant complexity to the compiler. A ground-up rewrite of the feature with a significantly limited scope (e.g., min_generic_const_args) would give a viable path to stabilization and result in large cleanups to the compiler.
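The boundary in question can be illustrated concretely. The `head` function below compiles on stable because its const argument is a bare parameter; the commented-out `buffer` sketch (with its hypothetical `Packet` trait) is the kind of signature that is rejected today.

```rust
// Accepted under stable `min_const_generics`: a bare const parameter.
fn head<const N: usize>(arr: [u8; N]) -> Option<u8> {
    arr.first().copied()
}

// Rejected on stable: a const argument that mentions another generic
// parameter. This is what `feature(generic_const_exprs)` permits and
// what a `min_generic_const_args` subset would aim to allow:
//
// fn buffer<T: Packet>() -> [u8; T::MAX_SIZE] { /* ... */ }

fn main() {
    assert_eq!(head([1, 2, 3]), Some(1));
    assert_eq!(head::<0>([]), None);
}
```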

The status quo

A large number of Rust users run into the min_const_generics limitation that generic parameters cannot be used with const generics. It is a bad user experience to hit a wall where a feature is unfinished, and this limitation also prevents patterns that are highly desirable. We have always intended to lift this restriction since stabilizing min_const_generics, but we did not know how.

It is possible to use generic parameters with const generics by using feature(generic_const_exprs). Unfortunately, this feature has a number of fundamental issues that are hard to solve, and as a result it is very broken. This breakage causes two main problems:

  • When users hit a wall with min_const_generics they cannot reach for the generic_const_exprs feature because it is either broken or has no path to stabilization.
  • In the compiler, to work around the fundamental issues with generic_const_exprs, we have a number of hacks which negatively affect the quality of the codebase and the general experience of contributing to the type system.

The next six months

We have a design for the min_generic_const_args approach in mind, but we want to validate it through implementation, as const generics has a history of unforeseen issues surfacing during implementation. We will therefore pursue a prototype implementation in 2024.

As a stretch goal, we will attempt to review the design with the lang team in the form of a design meeting or RFC. Doing so will likely also involve authoring a design retrospective for generic_const_exprs in order to communicate why that design did not work out and why the constraints imposed by min_generic_const_args make sense.

The "shiny future" we are working towards

The larger goal here is to lift most of the restrictions that const generics currently have:

  • Arbitrary types can be used in const generics instead of just integers, floats, bool, and char.
    • implemented under feature(adt_const_params) and is relatively close to stabilization
  • Generic parameters are allowed to be used in const generic arguments (e.g. Foo<{ <T as Trait>::ASSOC_CONST }>).
  • Users can specify _ as the argument to a const generic, allowing inferring the value just like with types.
    • implemented under feature(generic_arg_infer) and is relatively close to stabilization
  • Associated const items can introduce generic parameters to bring feature parity with type aliases
    • implemented under feature(generic_const_items), needs a bit of work to finish it. Becomes significantly more important after implementing min_generic_const_args
  • Introduce associated const equality bounds, e.g. T: Trait<ASSOC = N> to bring feature parity with associated types
    • implemented under feature(associated_const_equality), blocked on allowing generic parameters in const generic arguments
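
To see the parity being sought, compare today's stable associated-type equality bound with its const analogue; `Container`, `Bucket`, and `Sequence` are hypothetical names for illustration, and the unstable form appears only in comments:

```rust
// Stable today: an associated *type* equality bound.
trait Container {
    type Item;
}

struct Bucket;

impl Container for Bucket {
    type Item = u32;
}

// `Item = u32` pins the associated type exactly.
fn only_u32_items<C: Container<Item = u32>>(_c: C) -> bool {
    true
}

// The const-generics analogue, gated behind the unstable
// `associated_const_equality` feature:
//
// trait Sequence { const LEN: usize; }
// fn only_len_three<S: Sequence<LEN = 3>>(_s: S) {}

fn main() {
    assert!(only_u32_items(Bucket));
}
```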

Allowing generic parameters to be used in const generic arguments is the only part of const generics that requires significant work while also having significant benefit; everything else is already relatively close to stabilization. I chose to specify this goal as implementing min_generic_const_args rather than "stabilize the easy stuff" because I would like to know whether the implementation of min_generic_const_args will surface constraints on the other features that cannot be easily fixed in a backwards-compatible manner. Regardless, I expect these features to keep progressing while min_generic_const_args is being implemented.

Design axioms

  • Do not block future extensions to const generics
  • It should not feel worse to write type system logic with const generics compared to type generics
  • Avoid post-monomorphization errors
  • The "minimal" subset should not feel arbitrary

Ownership and team asks

Owner: Boxy, project-const-generics lead, T-types member

This section defines the specific work items that are planned and who is expected to do them. It should also include what will be needed from Rust teams.

Task                         | Owner(s) or team(s) | Notes
Discussion and moral support | Team lang, types    |
Implementation and mentoring | Boxy                |
Implementation               | Noah Lev            |
Reviewer                     | Michael Goulet      |

Outputs and milestones

Outputs

  • A sound, fully implemented feature(min_generic_const_args) available on nightly
  • All issues with generic_const_exprs's design have been comprehensively documented (stretch goal)
  • RFC for min_generic_const_args's design (stretch goal)

Milestones

  • Prerequisite refactorings for min_generic_const_args have taken place
  • Initial implementation of min_generic_const_args lands and is useable on nightly
  • All known issues are resolved with min_generic_const_args
  • Document detailing generic_const_exprs issues
  • RFC is written and filed for min_generic_const_args

Frequently asked questions

Do you expect min_generic_const_args to be stabilized by the end?

No. The feature should be fully implemented such that it does not need any more work to be ready for stabilization; however, I do not intend to set the goal of actually stabilizing it, as it may wind up blocked on the new trait solver being stable first.

Assemble project goal slate

Metadata
Point of contact: Niko Matsakis
Teams: leadership-council
Status: Accepted
Tracking issue: rust-lang/rust-project-goals#102

Extracted from RFC 3614

Summary

Run an experimental goal program during the second half of 2024

Motivation

This RFC proposes to run an experimental goal program during the second half of 2024, with Niko Matsakis as owner/organizer. This program is a first step towards an ongoing Rust roadmap. The proposed outcomes for 2024 are to (1) select an initial slate of goals using an experimental process; (2) track progress over the year; and (3) drawing on the lessons from that, prepare a second slate of goals for 2025 H1. This second slate is expected to include a goal for authoring an RFC proposing a permanent process.

The status quo

The Rust project last published an annual roadmap in 2021. Even before that, maintaining and running the roadmap process had proved logistically challenging. And yet there are a number of challenges that the project faces for which having an established roadmap, along with a clarified ownership for particular tasks, would be useful:

  • Focusing effort and avoiding burnout:
    • One common contributor to burnout is a sense of lack of agency. People have things they would like to get done, but they feel stymied by debate with no clear resolution; feel it is unclear who is empowered to "make the call"; and feel unclear whether their work is a priority.
    • Having a defined set of goals, each with clear ownership, will address that uncertainty.
  • Helping direct incoming contribution:
    • Many would-be contributors are interested in helping, but don't know what help is wanted/needed. Many others may wish to know how to join in on a particular project.
    • Identifying the goals that are being worked on, along with owners for them, will help both groups get clarity.
  • Helping the Foundation and Project to communicate
    • One challenge for the Rust Foundation has been the lack of clarity around project goals. Programs like fellowships, project grants, etc. have struggled to identify what kind of work would be useful in advancing project direction.
    • Declaring goals, and especially goals that are desired but lack owners to drive them, can be very helpful here.
  • Helping people to get paid for working on Rust
    • A challenge for people who are looking to work on Rust as part of their job -- whether that be full-time work, part-time work, or contracting -- is that the employer would like to have some confidence that the work will make progress. Too often, people find that they open RFCs or PRs which do not receive review, or which are misaligned with project priorities. A secondary problem is that there can be a perceived conflict-of-interest because people's job performance will be judged on their ability to finish a task, such as stabilizing a language feature, which can lead them to pressure project teams to make progress.
    • Having the project agree before-hand that it is a priority to make progress in an area and in particular to aim for achieving particular goals by particular dates will align the incentives and make it easier for people to make commitments to would-be employers.

For more details, see

The plan for 2024

The plan is to do a "dry run" of the process in the remainder of 2024. The 2024 process will be driven by Niko Matsakis; one of the outputs will be an RFC that proposes a more permanent process for use going forward. The short version of the plan is that we will

  • ASAP (April): Have a ~2 month period for selecting the initial slate of goals. Goals will be sourced from Rust teams and the broader community. They will cover the highest priority work to be completed in the second half of 2024.
  • June: Teams will approve the final slate of goals, making them 'official'.
  • Remainder of the year: Regular updates on goal progress will be posted
  • October: Presuming all goes well, the process for 2025 H1 begins. Note that the planning for 2025 H1 and finishing up of goals from 2024 H2 overlap.

The "shiny future" we are working towards

We wish to get to a point where

  • it is clear to onlookers and Rust maintainers alike what the top priorities are for the project and whether progress is being made on those priorities
  • for each priority, there is a clear owner who
    • feels empowered to make decisions regarding the final design (subject to approval from the relevant teams)
  • teams cooperate with one another to prioritize work that is blocking a project goal
  • external groups who would like to sponsor or drive priorities within the Rust project know how to bring proposals and get feedback

More concretely, assuming this goal program is successful, we would like to begin another goal sourcing round in late 2024 (likely Oct 15 - Dec 15). We see this as fitting into a running process where the project evaluates its program and re-establishes goals every six months.

Design axioms

  • Goals are a contract. Goals are meant to be a contract between the owner and project teams. The owner commits to doing the work. The project commits to supporting that work.
  • Goals aren't everything, but they are our priorities. Goals are not meant to cover all the work the project will do. But goals do get prioritized over other work to ensure the project meets its commitments.
  • Goals cover a problem, not a solution. As much as possible, the goal should describe the problem to be solved, not the precise solution. This also implies that accepting a goal means the project is committing that the problem is a priority: we are not committing to accept any particular solution.
  • Nothing good happens without an owner. Rust endeavors to run an open, participatory process, but ultimately achieving any concrete goal requires someone (or a small set of people) to take ownership of that goal. Owners are entrusted to listen, take broad input, and steer a well-reasoned course in the tradeoffs they make towards implementing the goal. But this power is not unlimited: owners make proposals, but teams are ultimately the ones that decide whether to accept them.
  • To everything, there is a season. While there will be room for accepting new goals that come up during the year, we primarily want to pick goals during a fixed time period and use the rest of the year to execute.

Ownership and team asks

Owner: Niko Matsakis

  • Niko Matsakis can commit 20% time (an average of 1 day per week) to pursue this task, which he estimates to be sufficient.
Task                                       | Owner(s) or team(s)     | Notes
RFC decision                               | Team leadership-council | Complete
Inside Rust blog post inviting feedback    |                         | Posted
Top-level Rust blog post announcing result |                         |

Support needed from the project

  • Project website resources to do things like
    • post blog posts on both Inside Rust and the main Rust blog;
    • create a tracking page (e.g., https://rust-lang.org/goals);
    • create repositories, etc.
  • For teams opting to participate in this experimental run:
    • they need to meet with the goal committee to review proposed goals, discuss priorities;
    • they need to decide in a timely fashion whether they can commit the proposed resources

Outputs and milestones

Outputs

There are three specific outputs from this process:

  • A goal slate for the second half (H2) of 2024, which will include
    • a set of goals, each with an owner and with approval from their associated teams
    • a high-level write-up of why this particular set of goals was chosen and what impact we expect for Rust
    • plan is to start with a smallish set of goals, though we don't have a precise number in mind
  • Regular reporting on the progress towards these goals over the course of the year
    • monthly updates on Inside Rust (likely) generated by scraping tracking issues established for each goal
    • larger, hand authored updates on the main Rust blog, one in October and a final retrospective in December
  • A goal slate for the first half (H1) of 2025, which will include
    • a set of goals, each with an owner and with approval from their associated teams
    • a high-level write-up of why this particular set of goals was chosen and what impact we expect for Rust
    • (probably) a goal to author an RFC with a finalized process that we can use going forward

Milestones

Key milestones along the way (with the most impactful highlighted in bold):

Date      | 2024 H2 Milestone                                           | 2025 H1 Milestones
Apr 26    | Kick off the goal collection process                        |
May 24    | Publish draft goal slate, take feedback from teams          |
June 14   | Approval process for goal slate begins                      |
June 28   | Publish final goal slate                                    |
July      | Publish monthly update on Inside Rust                       |
August    | Publish monthly update on Inside Rust                       |
September | Publish monthly update on Inside Rust                       |
Oct 1     | Publish intermediate goal progress update on main Rust blog | Begin next round of goal process, expected to cover first half of 2025
November  | Publish monthly update on Inside Rust                       | Nov 15: Approval process for 2025 H1 goal slate begins
December  | Publish retrospective on 2024 H2                            | Announce 2025 H1 goal slate

Process to be followed

The owner plans to author a proposed process, but the rough plan is as follows:

  • Create a repository rust-lang/project-goals that will be used to track proposed goals.
  • Initial blog post and emails soliciting goal proposals, authored using the same format as this goal.
  • Owner will review the proposals, in consultation with Rust team members, to assemble a draft set of goals
  • Owner will publish a draft set of goals from those that were proposed
  • Owner will read this set with relevant teams to get feedback and ensure consensus
  • Final slate will be approved by each team involved:
    • Likely mechanism is a "check box" from the leads of all teams that represents the team consensus

It is not yet clear how much work it will be to drive this process. If needed, the owner will assemble a "goals committee" to assist in reading over goals, proposing improvements, and generally making progress towards a coherent final slate. This committee is not intended to be a decision making body.

Frequently asked questions

Is there a template for project goals?

This RFC does not specify details, so the following should not be considered normative. However, you can see a preview of what the project goal process would look like at the nikomatsakis/rust-project-goals repository; it contains a goal template. This RFC is in fact a "repackaged" version of 2024's proposed Project Goal #1.

Why is the goal completion date targeting end of year?

In this case, the idea is to run a ~6-month trial, so having goals that are far outside that scope would defeat the purpose. In the future we may want to permit longer goal periods, but in general we want to keep goals narrowly scoped, and 6 months seems ~right. We don't expect 6 months to be enough to complete most projects, but the idea is to mark a milestone that will demonstrate important progress, and then to create a follow-up goal in the next goal season.

How does the goal completion date interact with the Rust 2024 edition?

Certainly I expect some of the goals to be items that will help us to ship a Rust 2024 edition -- and likely a goal for the edition itself (presuming we don't delay it to Rust 2025).

Do we really need a "goal slate" and a "goal season"?

Some early drafts of the project goals process were framed in a purely bottom-up fashion, with teams approving goals on a rolling basis. That approach, though, has the downside that the project would always be in planning mode, which would be a continuing time sink and morale drain. Deliberating on goals one at a time also makes it hard to weigh competing goals and decide which should have priority.

There is another downside to the "rolling basis" as well -- it's hard to decide on next steps if you don't know where you are going. Having the concept of a "goal slate" allows us to package up the goals along with longer term framing and vision and make sure that they are a coherent set of items that work well together. Otherwise it can be very easy for one team to be solving half of a problem while other teams neglect the other half.

Do we really need an owner?

Nothing good happens without an owner. The owner plays a few important roles:

  • Publicizing and organizing the process, authoring blog posts on update, and the like.
  • Working with individual goal proposals to sharpen them, improve the language, identify milestones.
  • Meeting with teams to discuss relative priorities.
  • Ensuring a coherent slate of goals.
    • For example, if the cargo team is working to improve build times in CI, but the compiler team is focused on build times on individual laptops, that should be surfaced. It may be that it's worth doing both, but there may be an opportunity to do more by focusing our efforts on the same target use cases.

Isn't the owner basically a BDFL?

Simply put, no. The owner will review the goals and ensure a quality slate, but it is up to the teams to approve that slate and commit to the goals.

Why the six months horizon?

Per the previous points, it is helpful to have a "season" for goals, but having e.g. an annual process would prevent us from reacting to new ideas in a nimble fashion. At the same time, quarterly planning, as some companies do, imposes quite regular overhead. Six months seemed like a good compromise, and it leaves room for a hefty discussion period of about two months, which seems like a good fit for an open-source project.

Associated type position impl trait

Metadata
Point of contact: Oliver Scherer
Teams: types, lang
Status: Accepted
Tracking issue: rust-lang/rust-project-goals#103

Summary

Stable support for impl Trait in the values of associated types (aka "associated type position impl trait" or ATPIT)

Motivation

Rust has been on a long-term quest to support impl Trait in more and more locations ("impl Trait everywhere"). The next step in that journey is supporting impl Trait in the values of associated types (aka "associated type position impl trait" or ATPIT), which allows impls to provide more complex types as the value of an associated type, particularly anonymous types like closures and futures. It also allows impls to hide the precise type they are using for an associated type, leaving room for future changes. This is the latest step towards the overall vision of supporting impl Trait notation in various parts of the Rust language.

The status quo

Impls today must provide a precise and explicit value for each associated type. For some associated types, this can be tedious, but for others it is impossible, as the proper type involves a closure or other aspect which cannot be named. Once a type is specified, impls are also unable to change that type without potentially breaking clients that may have hardcoded it.
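
A sketch of the problem, using hypothetical trait and type names: the natural value of the associated type below is a closure type, which cannot be written, so today the impl must erase it behind a Box.

```rust
trait Pipeline {
    type Mapper: Fn(i32) -> i32;
    fn mapper(&self) -> Self::Mapper;
}

struct AddOne;

impl Pipeline for AddOne {
    // The natural value here is the closure's own type, which has no
    // name, so today we must erase it (paying for an allocation and
    // dynamic dispatch). Under ATPIT, the impl could instead write
    // `type Mapper = impl Fn(i32) -> i32;` and keep the closure type.
    type Mapper = Box<dyn Fn(i32) -> i32>;
    fn mapper(&self) -> Self::Mapper {
        Box::new(|x| x + 1)
    }
}

fn main() {
    assert_eq!((AddOne.mapper())(41), 42);
}
```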

Rust's answer to these sorts of problems is impl Trait notation, which is used in a number of places within Rust to indicate "some type that implements Trait":

  • Argument position impl Trait ("APIT"), in inherent/item/trait functions, in which impl Trait desugars to an anonymous method type parameter (sometimes called "universal" impl Trait);
  • Return type position in inherent/item functions ("RPIT") and in trait ("RPITIT") functions, in which impl Trait desugars to a fresh opaque type whose value is inferred by the compiler.

ATPIT follows the second pattern, creating a new opaque type.
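
The two existing patterns look like this (a small sketch with hypothetical function names); under ATPIT, an impl could analogously write `type Output = impl Display;` and let the compiler infer the hidden concrete type:

```rust
use std::fmt::Display;

// APIT: `impl Display` in argument position desugars to an anonymous
// type parameter, roughly `fn show<T: Display>(x: T) -> String`.
fn show(x: impl Display) -> String {
    x.to_string()
}

// RPIT: `impl Display` in return position is an opaque type whose
// concrete value (`String` here) is inferred by the compiler and
// hidden from callers.
fn label(n: i32) -> impl Display {
    format!("#{n}")
}

fn main() {
    assert_eq!(show(7), "7");
    assert_eq!(label(7).to_string(), "#7");
}
```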

The next six months

The plan for 2024 is to stabilize Associated Type Position Impl Trait (ATPIT). The design has been finalized from the lang team perspective for some time, but the types team is still working out final details. In particular, the types team is trying to ensure that whatever programs are accepted will also be accepted by the next generation trait solver, which handles opaque types in a new and simplified way.

The "shiny future" we are working towards

This goal is part of the "impl Trait everywhere" effort, which aims to support impl Trait in any position where it makes sense. With the completion of this goal we will support impl Trait in

  • the type of a function argument ("APIT") in inherent/item/trait functions;
  • return types in functions, both inherent/item functions ("RPIT") and trait functions ("RPITIT");
  • the value of an associated type in an impl (ATPIT).

Planned extensions for the future include:

  • allowing impl Trait in type aliases ("TAIT"), like type I = impl Iterator<Item = u32>;
  • allowing impl Trait in let bindings ("LIT"), like let x: impl Future = foo();
  • dyn safety for traits that make use of RPITIT and async functions.

Other possible future extensions are:

  • allowing impl Trait in where-clauses ("WCIT"), like where T: Foo<impl Bar>;
  • allowing impl Trait in struct fields, like struct Foo { x: impl Display };

See also: the explainer here for a "user's guide" style introduction, though it's not been recently updated and may be wrong in the details (especially around TAIT).

Design axioms

None.

Ownership and team asks

Owner: Oliver Scherer (oli-obk)

Task                   | Owner(s) or team(s) | Notes
Implementation         | Oliver Scherer      |
FCP decision(s)        | Team types          |
Stabilization decision | Team types, lang    |

Frequently asked questions

None yet.

Begin resolving cargo-semver-checks blockers for merging into cargo

Metadata
Point of contact: Predrag Gruevski
Teams: cargo
Status: Accepted
Tracking issue: rust-lang/rust-project-goals#104

Summary

Design and implement cargo-semver-checks functionality that lies on the critical path for merging the tool into cargo itself.

Motivation

Cargo assumes that all packages adhere to semantic versioning (SemVer). However, SemVer adherence is quite hard in practice: research shows that accidental SemVer violations are relatively common (lower-bound: in 3% of releases) and happen to Rustaceans of all skill levels. Given the significant complexity of the Rust SemVer rules, improvements here require better tooling.

cargo-semver-checks is a linter for semantic versioning (SemVer) in Rust. It is broadly adopted by the Rust community, and the cargo team has expressed interest in merging it into cargo itself as part of the existing cargo publish workflow. By default, cargo publish would require SemVer compliance, but offer a flag (analogous to the --allow-dirty flag for uncommitted changes) to override the SemVer check and proceed with publishing anyway.

The cargo team has identified a set of milestones and blockers that must be resolved before cargo-semver-checks can be integrated into the cargo publish workflow. Our goal here is to resolve one of those blockers (cargo manifest linting), and chart a path toward resolving the rest in the future.

The status quo

Work in three major areas is required to resolve the blockers for running cargo-semver-checks as part of cargo publish:

  • Support for cargo manifest linting, and associated CLI changes
  • Checking of cross-crate items
  • SemVer linting of type information

Fully resolving all three areas is likely a 12-24 month undertaking, and beyond the scope of this goal on its own. Instead, this goal proposes to accomplish intermediate steps that create immediate value for users and derisk the overall endeavor, with "moral support" from the cargo team as the only requirement.

Cargo manifest linting

Package manifests have SemVer obligations: for example, removing a feature name that used to exist is a major breaking change.

Currently, cargo-semver-checks is not able to catch such breaking changes. It only draws information from a package's rustdoc JSON, which does not include the necessary manifest details and does not have a convincing path to doing so in the future. Design and implementation work is required to allow package manifests to be linted for breaking changes as well.

This "rustdoc JSON only" assumption is baked into the cargo-semver-checks CLI as well, with options such as --baseline-rustdoc and --current-rustdoc that allow users to lint with a pre-built rustdoc JSON file instead of having cargo-semver-checks build the rustdoc JSON itself. Once manifest linting is supported, users of such options will need to somehow specify a Cargo.toml file (and possibly even a matching Cargo.lock) in addition to the rustdoc JSON. Additional work is required to determine how to evolve the CLI to support manifest linting and future-proof it to the level necessary to be suitable for stabilizing as part of cargo's own CLI.

Checking of cross-crate items

Currently, cargo-semver-checks performs linting by only using the rustdoc JSON of the target package being checked. However, the public API of a package may expose items from other crates. Since rustdoc no longer inlines the definitions of such foreign items into the JSON of the crate whose public API relies on them, cargo-semver-checks cannot see or analyze them.

This causes a massive number of false-positives ("breakage reported incorrectly") and false-negatives ("lint for issue X fails to spot an instance of issue X"). In excess of 90% of real-world false-positives are traceable back to a cross-crate item, as measured by our SemVer study!

For example, the following change is not breaking but cargo-semver-checks will incorrectly report it as breaking:

// previous release:
pub fn example() {}

// in the new release, imagine this function moved to `another_crate`:
pub use another_crate::example;

This is because the rustdoc JSON that cargo-semver-checks sees indeed does not contain a function named example. Currently, cargo-semver-checks is incapable of following the cross-crate connection to another_crate, generating its rustdoc JSON, and continuing its analysis there.

Resolving this limitation will require changes to how cargo-semver-checks generates and handles rustdoc JSON, since the set of required rustdoc JSON files will no longer be fully known ahead of time. It will also require CLI changes in the same area as the changes required to support manifest linting.

While there may be other challenges on rustc and rustdoc's side before this feature could be fully implemented, we consider those out of scope here since there are parallel efforts to resolve them. The goal here is for cargo-semver-checks to have its own story straight and do the best it can.

SemVer linting of type information

Currently, cargo-semver-checks lints cannot represent or examine type information. For example, the following change is breaking but cargo-semver-checks will not detect or report it:

// previous release:
pub fn example(value: String) {}

// new release:
pub fn example(value: i64) {}

Analogous breaking changes to function return values, struct fields, and associated types would also be missed by cargo-semver-checks today.

The main difficulty here lies with the expressiveness of the Rust type system. For example, none of the following changes are breaking:

// previous release:
pub fn example(value: String) {}

// new release:
pub fn example(value: impl Into<String>) {}

// subsequent release:
pub fn example<S: Into<String>>(value: S) {}

Similar challenges exist with lifetimes, variance, trait solving, async fn versus fn() -> impl Future, etc.
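
For instance, the following two signatures are interchangeable from a caller's perspective, so swapping one for the other should not be reported as breaking. This sketch (with hypothetical function names) polls both by hand with a no-op waker to show they behave identically:

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// The `async fn` sugar...
async fn add_async(a: i32, b: i32) -> i32 {
    a + b
}

// ...and the "desugared" equivalent signature.
fn add_manual(a: i32, b: i32) -> impl Future<Output = i32> {
    async move { a + b }
}

// Minimal no-op waker so the futures can be polled without an executor.
fn noop_waker() -> Waker {
    unsafe fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);

    // Both futures have no await points, so they resolve on the first poll.
    let mut a = pin!(add_async(20, 22));
    assert_eq!(a.as_mut().poll(&mut cx), Poll::Ready(42));

    let mut b = pin!(add_manual(20, 22));
    assert_eq!(b.as_mut().poll(&mut cx), Poll::Ready(42));
}
```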

While there are some promising preliminary ideas for resolving this challenge, more in-depth design work is necessary to determine the best path forward.

The next 6 months

Three things:

  • Implement cargo manifest linting
  • Implement CLI future-proofing changes, with manifest linting and cross-crate analysis in mind
  • Flesh out a design for supporting cross-crate analysis and type information linting in the future

The "shiny future" we are working towards

Accidentally publishing SemVer violations that break the ecosystem is never fun for anyone involved.

From a user perspective, we want a fearless cargo update: one's project should never be broken by updating dependencies without changing major versions.

From a maintainer perspective, we want a fearless cargo publish: we want to prevent breakage, not to find out about it when a frustrated user opens a GitHub issue. Just like cargo flags uncommitted changes in the publish flow, it should also quickly and accurately flag breaking changes in non-major releases. Then the maintainer may choose to release a major version instead, or acknowledge and explicitly override the check to proceed with publishing as-is.

To accomplish this, cargo-semver-checks needs the ability to express more kinds of lints (including manifest and type-based ones), eliminate false-positives, and stabilize its public interfaces (e.g. the CLI). At that point, we'll have lifted the main merge-blockers and we can consider making it a first-party component of cargo itself.

Ownership and team asks

Owner: Predrag Gruevski, as maintainer of cargo-semver-checks

I (Predrag Gruevski) will be working on this effort. The only other resource request would be occasional discussions and moral support from the cargo team, of which I already have the privilege as maintainer of a popular cargo plugin.

Task                                           | Owner(s) or team(s) | Notes
Implementation of cargo manifest linting + CLI | Predrag Gruevski    |
Initial design for cross-crate checking        | Predrag Gruevski    |
Initial design for type-checking lints         | Predrag Gruevski    |
Discussion and moral support                   | Team cargo          |

Frequently asked questions

Why not use semverver instead?

Semverver is a prior attempt at enforcing SemVer compliance, but it has been deprecated and is no longer developed or maintained. It relied on compiler-internal APIs, which are much more unstable than rustdoc JSON and required much more maintenance to "keep the lights on." This also meant that semverver required users to install a specific nightly version known to be compatible with their version of semverver.

While cargo-semver-checks relies on rustdoc JSON which is also an unstable nightly-only interface, its changes are much less frequent and less severe. By using the Trustfall query engine, cargo-semver-checks can simultaneously support a range of rustdoc JSON formats (and therefore Rust versions) within the same tool. On the maintenance side, cargo-semver-checks lints are written in a declarative manner that is oblivious to the details of the underlying data format, and do not need to be updated when the rustdoc JSON format changes. This makes maintenance much easier: updating to a new rustdoc JSON format usually requires just a few lines of code, instead of "a few lines of code apiece in each of hundreds of lints."

Const traits

Metadata
Point of contactDeadbeef
Teamstypes, lang
StatusAccepted
Tracking issuerust-lang/rust-project-goals#106

Summary

Experiment with effects-based desugaring for "maybe-const" functionality

Motivation

Rust's compile-time functionality (const fn, consts, etc.) is greatly limited in expressiveness because const functions currently do not have access to generic trait bounds the way runtime functions do. Developers want to write programs that do complex work at compile time, most often to offload work from runtime, and const traits and const impls would greatly reduce the difficulty of writing such compile-time functions.

The status quo

People write a lot of code that runs at compile time, including procedural macros, build scripts (42.8k hits on GitHub for build.rs), and const functions/consts (108k hits on GitHub for const fn). Not being able to write const functions with generic behavior is often cited as a pain point of Rust's compile-time capabilities. Because of the limited expressiveness of const fn, people may move some compile-time logic into a build script, which can increase build times, or simply skip doing the work at compile time (even though it would have helped runtime performance).

There are also language features that rely on traits, such as iterating with for and handling errors with ?. Because the Iterator and Try traits currently cannot be used in constant contexts, people can neither use ? to handle results nor iterate with for loops (e.g., for x in 0..5) in const code.
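
As a concrete illustration of this gap (the function names here are our own, not from the goal text), a trait-based body that works fine at runtime must be rewritten trait-free to be usable in a const fn today:

```rust
// Runtime code can freely use trait-based features like iterators...
fn sum_to(n: u32) -> u32 {
    (0..n).sum() // uses the Iterator trait
}

// ...but a const fn cannot call trait methods today, so `(0..n).sum()`
// does not compile in a const context. The workaround is a trait-free loop:
const fn sum_to_const(n: u32) -> u32 {
    let mut total = 0;
    let mut i = 0;
    while i < n {
        total += i;
        i += 1;
    }
    total
}

// Evaluated entirely at compile time.
const TOTAL: u32 = sum_to_const(5);

fn main() {
    // Both compute 0 + 1 + 2 + 3 + 4.
    assert_eq!(TOTAL, sum_to(5));
}
```

With const traits, the intent is that the iterator-based body could eventually work in both contexts without duplication.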

The next six months

In 2024, we plan to:

  • Finish experimenting with an effects-based desugaring for ensuring correctness of const code with trait bounds
  • Land a relatively stable implementation of const traits
  • Make all UI tests pass.

The "shiny future" we are working towards

We're working towards enabling developers to do more things in general within a const context. Const traits is a blocker for many future possibilities (see also the const eval feature skill tree) including heap operations in const contexts.

Design axioms

None.

Ownership and team asks

Owner: Deadbeef

| Task | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Implementation | Deadbeef and project-const-traits | |
| Discussion and moral support | Team types, Team lang | |


Ergonomic ref-counting

Metadata
Point of contactJonathan Kelley
Teamslang, libs-api
StatusAccepted
Tracking issuerust-lang/rust-project-goals#107

Summary

Deliver nightly support for some solution that reduces the ergonomic pain of working with ref-counted and cheaply cloneable types.

Motivation

For 2024H2 we propose to improve ergonomics of working with "cheaply cloneable" data, most commonly reference-counted values (Rc or Arc). Like many ergonomic issues, these impact all users, but the impact is particularly severe for newer Rust users, who have not yet learned the workarounds, or those doing higher-level development, where the ergonomics of Rust are being compared against garbage-collected languages like Python, TypeScript, or Swift.

The status quo

Many Rust applications—particularly those in higher-level domains—use reference-counted values to pass around core bits of context that are widely used throughout the program. Reference-counted values have the convenient property that they can be cloned in O(1) time and that these clones are indistinguishable from one another (for example, two handles to an Arc<AtomicUsize> both refer to the same counter). There are also a number of data structures found in the stdlib and ecosystem, such as the persistent collections found in the im crate or the Sender type from std::sync::mpsc and tokio::sync::mpsc, that share this same property.
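
As a small illustration of that O(1)-clone property (the example code is ours, not from the goal text):

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Arc;

// Cloning an Arc copies a pointer and bumps a reference count; the
// pointed-to value is never copied, so every clone is a handle to the
// same underlying data.
fn shared_increment() -> u32 {
    let counter = Arc::new(AtomicU32::new(0));
    let handle = Arc::clone(&counter); // O(1), no data copy
    handle.fetch_add(1, Ordering::Relaxed);
    // The original handle observes the increment made via the clone.
    counter.load(Ordering::Relaxed)
}

fn main() {
    assert_eq!(shared_increment(), 1);
}
```

This is why the clones are "indistinguishable": semantically, nothing depends on which handle you hold.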

Rust's current rules mean that passing around values of these types must be done explicitly, with a call to clone. Transforming common assignments like x = y to x = y.clone() can be tedious but is relatively easy. However, this becomes a much bigger burden with closures, especially move closures (which are common when spawning threads or async tasks). For example, the following closure will consume the state handle, disallowing it from being used in later closures:

#![allow(unused)]
fn main() {
let state = Arc::new(some_state);
tokio::spawn(async move { /* code using `state` */ });
}

This scenario can be quite confusing for new users (see e.g. this 2014 talk at StrangeLoop where an experienced developer describes how confusing they found this to be). Many users settle on a workaround where they first clone the variable into a fresh local with a new name, such as:

#![allow(unused)]
fn main() {
let state = Arc::new(some_state);

let _state = state.clone();
tokio::spawn(async move { /*code using `_state` */ });

let _state = state.clone();
tokio::spawn(async move { /*code using `_state` */ });
}

Others adopt a slightly different pattern leveraging local variable shadowing:

#![allow(unused)]
fn main() {
let state = Arc::new(some_state);

tokio::spawn({
    let state = state.clone();
    async move { /*code using `state`*/ }
});
}

Whichever pattern users adopt, explicit clones of reference-counted values lead to significant accidental complexity in many applications. As noted, cloning these values is cheap at runtime and has zero semantic importance, since each clone is as good as any other.

Impact on new users and high-level domains

The impact of this kind of friction can be severe. While experienced users have learned the workarounds and consider this a papercut, new users can find this kind of change bewildering and a total blocker. The impact is also particularly severe for projects attempting to use Rust in domains traditionally considered "high-level" (e.g., app/game/web development, data science, scientific computing). Rust's strengths have made it a popular choice for building underlying frameworks and libraries that perform reliably and with high performance. However, thanks in large part to these kinds of smaller papercut issues, it is not a great choice for consuming those same libraries.

Users in higher-level domains are accustomed to the ergonomics of Python or TypeScript, and ergonomic friction can therefore make Rust a non-starter. Those users who stick with Rust long enough to learn the workarounds, however, often find significant value in its emphasis on reliability and long-term maintenance (not to mention performance). Small changes like avoiding explicit clones for reference-counted data can both make Rust more appealing in these domains and help Rust in domains where it is already widespread.

The next six months

The goal for the next six months is to

  • author and accept an RFC that reduces the burden of working with clone, particularly around closures
  • land a prototype nightly implementation.

The "shiny future" we are working towards

This goal is scoped around reducing (or eliminating entirely) the need for explicit clones for reference-counted data. See the FAQ for other potential future work that we are not asking the teams to agree upon now.

Design consensus points

We don't have consensus around a full set of "design axioms" for this design, but we do have alignment around the following basic points:

  • Explicit ref-counting is a major ergonomic pain point impacting both high-level and low-level, performance-oriented code.
  • The worst ergonomic pain arises around closures that need to clone their upvars.
  • Some code will want the ability to precisely track reference count increments.
  • The design should allow user-defined types to "opt-in" to the lightweight cloning behavior.

Ownership and team asks

The work here is proposed by Jonathan Kelley on behalf of Dioxus Labs. We have funding for 1-2 engineers depending on the scope of work. Dioxus Labs is willing to take ownership and commit funding to solve these problems.

| Task | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Overall program management | Jonathan Kelley | |
| Author RFC | TBD | |
| Design meeting | Team lang | 2 meetings expected |
| RFC decision | Team lang, Team libs-api | |
| Nightly implementation | Santiago Pastorino | |
| Standard reviews | Team compiler | |
| Blog post on Inside Rust | | |
  • The Team badge indicates a requirement where Team support is needed.

Support needed from the project

As owners of this goal...

  • We are happy to author RFCs and/or work with other experienced RFC authors.
  • We are happy to host design meetings, facilitate work streams, handle logistics, and do any other administration required to execute. Some of the proposed subgoals might be contentious or take longer than this goal period, and we're committed to timelines beyond six months.
  • We are happy to author code or fund the work for an experienced Rust contributor to do the implementation. For the language goals, we expect more design work than implementation. For the cargo-related goals, we expect more engineering than design. We are also happy to back any existing efforts, as there is ongoing work in cargo itself to add various types of caching.
  • We would be excited to write blog posts about this effort. This goals program is a great avenue for us to get more corporate support and see more Rust adoption for higher-level paradigms. Having a blog post talking about this work would be a significant step in changing the perception of Rust for use in high-level codebases.

The primary project support needed will be design bandwidth from the lang team.


Frequently asked questions

After this, are we done? Will high-level Rust be great?

Accepting this goal only implies alignment around reducing (or eliminating entirely) the need for explicit clones for reference-counted data. For people attempting to use Rust as part of higher-level frameworks like Dioxus, this is an important step, but one that would hopefully be followed by further ergonomics work. Examples of language changes that would be helpful are described in the (not accepted) goals around a renewed ergonomics initiative and improve compilation speed.

Explore sandboxed build scripts

Metadata
Point of contactWeihang Lo
Teamscargo
StatusAccepted
Tracking issuerust-lang/rust-project-goals#108

Summary

Explore different strategies for sandboxing build script executions in Cargo.

Motivation

This goal explores letting Cargo users opt in to running build scripts in a sandboxed environment that limits their access to OS resources like the file system and network. With a sandboxed environment for build script execution, fewer repetitive code scrutinies are needed. Build script execution also becomes more deterministic, helping the caching story for the ecosystem in the long run.

The status quo

Build scripts in Cargo can do literally anything from network requests to executing arbitrary binaries. This isn't deemed a security issue as it is "by design". Unfortunately, this "by design" virtue relies on trust among developers within the community. When trust is broken by some incidents, even just once, the community has no choice but to intensively review build scripts in their dependencies.

Although there are collaborative code review tools like cargo-vet and cargo-crev to help build trust, comprehensive review is still impractical, especially considering the pace of new version releases. In Rust, the unsafe keyword helps reviewers identify code sections that require extra scrutiny. However, an unsandboxed build script is effectively an enormous unsafe block, making comprehensive review impractical for the community.

Besides the security and trust issues, in an unsandboxed build script, random network or file system access may occur and fail. These kinds of "side effects" are notoriously non-deterministic, and usually cause retries and rebuilds in build pipelines. Because the build is not deterministic, reproducibility cannot be easily achieved, making programs harder to trace and debug.

There is one 2024 GSoC project "Sandboxed and Deterministic Proc Macro using Wasm" experimenting with the possibility of using WebAssembly to sandbox procedural macros. While build scripts and proc-macros are different concepts at different levels, they share the same flaw — arbitrary code execution. Given that we already have experiments on the proc-macros side, it's better we start some groundwork on build scripts in parallel, and discuss the potential common interface for Cargo to configure them.

The next 6 months

  • Look at prior art in this domain, especially for potential blockers and challenges.
  • Prototype on sandboxing build scripts. Currently looking at WebAssembly System Interface (WASI) and Cackle.
  • Provide a way for Cargo packages to opt in to sandboxed build scripts, and design a configurable interface for granting permissions to each crate.
  • Based on the results of those experiments, consider whether the implementation should be a third-party Cargo plugin first, or make it into Cargo as an unstable feature (with a proper RFC).

The "shiny future" we are working towards

These could become future goals if this one succeeds:

  • The sandboxed build script feature will be opted-in at first when stabilized. By the next Edition, sandboxed build scripts will be on by default, hardening the supply chain security.
  • Cargo users only need to learn one interface for both sandboxed proc-macros and build scripts. The configuration for build scripts will also cover the needs of sandboxed proc-macros.
  • Crates.io and the cargo info command display the permission requirements of a crate, helping developers choose packages based on different security level needs.
  • The runtime of the sandbox environment is swappable, enabling the potential support of remote execution without waiting for a first-party solution. It also opens a door to hermetic builds.

Design axioms

In order of importance, a sandboxed build script feature should provide the following properties:

  • Restrict runtime file system and network access, as well as process spawning, unless allowed explicitly.
  • Cross-platform support. Cargo is guaranteed to work on tier 1 platforms. This is not a must-have for experiments, but it is a requirement for stabilization.
  • Ensure -sys crates can be built within the sandbox. Probing and building from system libraries is the major use case of build scripts. We should support it as a first-class citizen.
  • Declarative configuration interface to grant permissions to packages. A declarative configuration helps us analyze permissions granted more easily, without running the actual code.
  • Don't block the build when the sandboxed feature is off. The crates.io ecosystem shouldn't rely on the interface to successfully build things. That would hurt the integration with other external build systems. It should work as if it is an extra layer of security scanning.
  • Room for supporting different sandbox runtimes and strategies. This is for easier integration into external build systems, as well as faster iteration for experimenting with new ideas.
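
To make the "declarative configuration interface" axiom concrete, a permissions grant might look something like the following Cargo.toml sketch. Every table and field name here is a hypothetical illustration of ours, not a proposed or agreed-upon design:

```toml
# Hypothetical syntax only -- nothing below is an actual Cargo feature.
[package.metadata.sandbox]
default = "deny"            # no fs/network/process access unless granted

[package.metadata.sandbox.grants]
# Grants are per-crate and statically analyzable without running any code.
libgit2-sys = { fs-read = ["/usr/include"], process = ["pkg-config"] }
openssl-sys = { network = false, fs-read = ["vendored/"] }
```

Because such a manifest is declarative, tooling (and reviewers) could audit the granted permissions without executing the build scripts themselves, which is the point of the axiom.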

Currently out of scope:

  • Terminal user interface.
  • Pre-built build script binaries.
  • Hermetic builds, though this extension should be considered.
  • Support for all tier 2 with-host-tools platforms. As an experiment, we follow what the chosen sandbox runtime provides us.
  • On-par build times. Build times are expected to be impacted because build script artifacts will be built for the sandbox runtime. This prevents an optimization where Cargo shares artifacts between build scripts and applications when the "host" and "target" platforms are the same.

Ownership and team asks

Owner: Weihang Lo, though I also welcome someone else to take ownership of it. I would be happy to support them as a Cargo maintainer.

| Task | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Design | Weihang Lo (or mentee) | |
| Discussion and moral support | Team cargo | |
| Security reviews | Help wanted | |
| Standard reviews | Team cargo | |
| Miscellaneous | Team compiler | Collaboration with GSoC proc-macro project |
| Summary of experiments or RFC | Weihang Lo (or mentee) | |

For security reviews, I'd like assistance from experts in security domains. Ideally those experts would be from within the community, such as the Security Response or Secure Code working groups. However, I don't want to pressure that goal since comprehensive security reviews are extremely time-consuming. Outside experts are also welcome.

Outputs and milestones

Outputs

As the work here is mostly experiments and prototyping, based on the results, the outputs could be:

  • A report about why these methods have failed to provide a proper sandboxed environment for build scripts in Cargo, plus some other areas worth exploring in the future.
  • A configurable sandboxed environment for build scripts landed as an unstable feature in Cargo, or provided via crates.io as a third-party plugin for faster experimenting iteration.
  • An RFC proposing a sandboxed build script design to the Rust project.

Milestones

| Milestone | Expected Date |
| --- | --- |
| Summarize the prior art for sandbox strategies | 2024-07 |
| Prototype a basic sandboxed build script implementation | 2024-08 |
| Draft a configurable interface in Cargo.toml | 2024-10 |
| Integrate the configurable interface with the prototype | 2024-12 |
| Ask some security experts to review the design | TBD |
| Write an RFC summary for the entire prototyping process | TBD |

Frequently asked questions

Q: Why can't build scripts be removed?

The Rust crates.io ecosystem depends heavily on build scripts. Some foundational packages use build scripts for essential tasks, such as linking to C dependencies. If we shut down this option without providing an alternative, half of the ecosystem would collapse.

That is to say, build scripts are a feature covered by the stability guarantee that we cannot simply remove, just like the results of the dependency resolution Cargo produces.
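
For illustration, a minimal build script of the kind such foundational packages rely on might look like this (the library name z is just an example of ours; Cargo reads cargo: directives from the script's stdout):

```rust
// A typical minimal build.rs: emit directives that tell Cargo how to link
// a system C library. Cargo parses `cargo:` key=value lines printed to
// the script's stdout.
fn directives() -> Vec<String> {
    vec![
        // Link the system zlib into the final binary.
        "cargo:rustc-link-lib=z".to_string(),
        // Re-run this script only when it changes, not on every build.
        "cargo:rerun-if-changed=build.rs".to_string(),
    ]
}

fn main() {
    for d in directives() {
        println!("{d}");
    }
}
```

Nothing constrains such a script to printing directives, though: it is an arbitrary program, which is exactly the problem sandboxing addresses.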

Q: Why are community tools like cargo-vet and cargo-crev not good enough?

They are all excellent tools. A sandboxed build script isn't meant to replace any of them. However, as mentioned above, those tools still require intensive human review, which is difficult to achieve at scale (cargo-crev has 103 reviewers and 1910 reviews at the time of writing).

A sandboxed build script is a supplement to them. Crate reviews are easier to complete when crates explicitly specify the permissions granted.

Q: What is the difference between sandboxed builds and deterministic builds?

Sandboxing is a strategy that isolates the running process from other resources on a system, such as file system access.

A deterministic build always produces the same output given the same input.

Sandboxing is not a requirement for deterministic builds, and vice versa. However, sandboxing can help deterministic builds because hidden dependencies often come from the system via network or file system access.

Q: Why do we need to build our own solution versus other existing solutions?

External build systems are aware of this situation. For example, Bazel provides a sandboxing feature. Nix also has a sandbox build via a different approach. Yet, migrating to existing solutions will be a long-term effort for the entire community. It requires extensive exploration and discussions from both social and technical aspects. At this moment, I don't think the Cargo team and the Rust community are ready for a migration.

Expose experimental LLVM features for automatic differentiation and GPU offloading

Metadata
Point of contactManuel Drehwald
Teamslang, compiler
StatusAccepted
Tracking issuerust-lang/rust-project-goals#109
Other tracking issuesrust-lang/rust#124509

Summary

Expose experimental LLVM features for automatic differentiation and GPU offloading.

Motivation

Scientific computing, high-performance computing (HPC), and machine learning (ML) all share an interesting challenge: each, to a different degree, cares about highly efficient library and algorithm implementations, yet these libraries and algorithms are not always written by people with deep computer science experience. Rust is in a unique position here because ownership, lifetimes, and the strong type system can prevent many bugs. At the same time, strong alias information enables compelling performance optimizations in these fields, with performance gains well beyond those otherwise seen when comparing C++ with Rust. This is because automatic differentiation and GPU offloading both benefit strongly from aliasing information.

The status quo

Thanks to PyO3, Rust has excellent interoperability with Python. C++, by contrast, has a relatively weak interop story, which can lead Python libraries to use slower C libraries as a backend instead, just to ease bundling and integration. Fortran is mostly found in legacy code and is hardly used for new projects.

As a solution, many researchers try to limit themselves to features offered by compilers and libraries built on top of Python, like JAX, PyTorch, or, more recently, Mojo. Rust has many features that make it more suitable than those languages for developing a fast and reliable backend for performance-critical software. However, it lacks two major features that developers now expect: high-performance automatic differentiation and easy use of GPU resources.

Almost every language has some way of calling hand-written CUDA/ROCm/Sycl kernels, but the interesting feature of languages like Julia, or of libraries like JAX, is that they let users write kernels in the language (or a subset of the language) they already know, without having to learn anything new. Minor performance penalties are not that critical in such cases if the alternative is a CPU-only solution. Otherwise-worthwhile projects such as Rust-CUDA end up unmaintained because they are too much effort to maintain outside of LLVM or the Rust project.


The next six months

We are requesting support from the Rust project for continued experimentation:

  1. Merge the #[autodiff] fork.
  2. Expose the experimental batching feature of Enzyme, preferably by a new contributor.
  3. Merge an MVP #[offloading] fork which is able to run simple functions using rayon parallelism on a GPU or TPU, showing a speed-up.
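
To make the autodiff item concrete, here is the kind of transformation automatic differentiation automates. The function and values below are illustrative examples of ours; the sketch shows today's two manual options, both of which a compiler-generated derivative would replace with exact, always-in-sync gradients:

```rust
// f(x, y) = x^2 + 3y -- a stand-in for some expensive scientific kernel.
fn f(x: f64, y: f64) -> f64 {
    x * x + 3.0 * y
}

// Option 1 today: write the gradient by hand (drifts out of sync as `f` evolves).
fn df_by_hand(x: f64, _y: f64) -> (f64, f64) {
    (2.0 * x, 3.0) // (df/dx, df/dy)
}

// Option 2 today: finite differences (approximate, and costs one extra
// evaluation of `f` per input dimension).
fn df_numeric(x: f64, y: f64) -> (f64, f64) {
    let h = 1e-6;
    ((f(x + h, y) - f(x, y)) / h, (f(x, y + h) - f(x, y)) / h)
}

fn main() {
    let (dx, dy) = df_by_hand(2.0, 1.0);
    let (nx, ny) = df_numeric(2.0, 1.0);
    // The approximation agrees with the exact gradient (4, 3) at (2, 1).
    assert!((dx - nx).abs() < 1e-4 && (dy - ny).abs() < 1e-4);
}
```

An #[autodiff] macro backed by Enzyme would derive the exact-gradient version directly from f's optimized IR, which matters once f is hundreds of lines long.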

The "shiny future" we are working towards

The purpose of this goal is to enable continued experimentation with the underlying LLVM functionality. The eventual goal of this experimentation is that all three proposed features (batching, autodiff, offloading) can be combined and work nicely together. The hope is that we will have state-of-the-art libraries like faer to cover linear algebra, and that we will start to see more and more libraries in other languages using Rust with these features as their backend. Cases which don't require interactive exploration will also become more popular in pure Rust.

Caveats to this future

There is not yet consensus amongst the relevant Rust teams as to how and/or whether this functionality should be exposed on stable. Some concerns that continued experimentation will hopefully help to resolve:

  • How effective and general purpose is this functionality?
  • How complex is this functionality to support, and how does that trade off with the value it provides? What is the right point on the spectrum of tradeoffs?
  • Can code using these Rust features still compile and run on backends other than LLVM, and on all supported targets? If not, how should we manage the backend-specific nature of it?
  • Can we avoid tying Rust features too closely to the specific properties of any backend or target, such that we're confident these features can remain stable over decades of future landscape changes?
  • Can we fully implement every feature of the provided functionality (as more than a no-op) on fully open systems, despite the heavily proprietary nature of parts of the GPU and accelerator landscape?

Design axioms

Offloading

  • We try to provide a safe, simple and opaque offloading interface.
  • The "unit" of offloading is a function.
  • We try to not expose explicit data movement if Rust's ownership model gives us enough information.
  • Users can offload functions which contain parallel CPU code, but they do not have final control over how the parallelism will be translated to co-processors.
  • We accept that hand-written CUDA/ROCm/etc. kernels might be faster, but actively try to reduce differences.
  • We accept that we might need to provide additional control to the user to guide parallelism, if performance differences remain unacceptably large.
  • Offloaded code might not return the exact same values as code executed on the CPU. We will work with t-opsem to develop clear rules.

Autodiff

  • We try to provide a fast autodiff interface which supports most autodiff features relevant for scientific computing.
  • The "unit" of autodiff is a function.
  • We acknowledge our responsibility since user-implemented autodiff without compiler knowledge might struggle to cover gaps in our features.
  • We have a fast, low level, solution with further optimization opportunities, but need to improve safety and usability (i.e. provide better high level interfaces).
  • We need to teach users more about autodiff "pitfalls" and provide guides on how to handle them. See, e.g. https://arxiv.org/abs/2305.07546.
  • We do not support differentiating inline assembly. Users are expected to write custom derivatives in such cases.
  • We might refuse to expose certain features if they are too hard to use correctly and provide little gains (e.g. derivatives with respect to global vars).


Ownership and team asks

Owner: Manuel Drehwald

Manuel S. Drehwald is working 5 days per week on this, sponsored by LLNL and the University of Toronto (UofT). He has a background in HPC and worked on a Rust compiler fork, as well as an LLVM-based autodiff tool for the last 3 years during his undergrad. He is now in a research-based master's degree program. Supervision and discussion on the LLVM side will happen with Johannes Doerfert and Tom Scogland.

Resources: Domain and CI for the autodiff work is being provided by MIT. This might be moved to the LLVM org later this year. Hardware for benchmarks is being provided by LLNL and UofT. CI for the offloading work will be provided by LLNL or LLVM (see below).

Minimal "smoke test" reviews will be needed from the compiler-team. The Rust language changes at this stage are expected to be a minimal wrapper around the underlying LLVM functionality and the compiler team need only vet that the feature will not hinder usability for ordinary Rust users or cause undue burden on the compiler architecture itself. There is no requirement to vet the quality or usability of the design.

| Task | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Development | Manuel Drehwald | |
| Lang-team experiment | Team lang | (approved) |
| Standard reviews | Team compiler | |

Outputs and milestones

Outputs

  • An #[offload] rustc-builtin-macro which makes a function definition known to the LLVM offloading backend.

    • Made a PR to enable LLVM's offloading runtime backend.
    • Merge the offload macro frontend
    • Merge the offload Middle-end
  • An offload!([GPU1, GPU2, TPU1], foo(x, y,z)); macro (placeholder name) which will execute function foo on the specified devices.

  • An #[autodiff] rustc-builtin-macro which differentiates a given function.

    • Merge the Autodiff macro frontend
    • Merge the Autodiff Enzyme backend
    • Merge the Autodiff Middle-end
  • A #[batching] rustc-builtin-macro which fuses N function calls into one call, enabling better vectorization.

Milestones

  • The first offloading step is the automatic copying of a slice or vector of floats to a device and back.

  • The second offloading step is the automatic translation of a (default) Clone implementation to create a host-to-device and device-to-host copy implementation for user types.

  • The third offloading step is to run some embarrassingly parallel Rust code (e.g. scalar times Vector) on the GPU.

  • Fourth we have examples of how rayon code runs faster on a co-processor using offloading.

  • Stretch-goal: combining autodiff and offloading in one example that runs differentiated code on a GPU.

Frequently asked questions

Why do you implement these features only on the LLVM backend?

Performance-wise, we have LLVM and GCC as performant backends. Modularity-wise, we have LLVM and especially Cranelift being nice to modify. It therefore seems reasonable for LLVM to be the first backend to support new features in this field. The offloading support in particular should be implementable by other compiler backends, given pre-existing work like OpenMP offloading and WebGPU.

Do these changes have to happen in the compiler?

Yes, given how Rust works today.

However, both features could be implemented in user space if the Rust compiler someday supported reflection. In that case we could ask the compiler for the optimized backend IR of a given function, use either the AD or offloading abilities of the LLVM library to modify that IR, and generate a new function for the user to call. This would require some discussion of how crates in the ecosystem can work with various LLVM versions, since crates are usually expected to have an MSRV, but the LLVM (and likely GCC/Cranelift) backends will have breaking changes, unlike Rust.

Batching?

This is offered by all autodiff tools. JAX has an extra command for it, whereas Enzyme (the autodiff backend) combines batching with autodiff. We might want to split these since both have value on their own.

Some libraries also offer array-of-struct vs struct-of-array features which are related but often have limited usability or performance when implemented in userspace.

Writing a GPU backend in 6 months sounds tough...

True. But similar to the autodiff work, we are exposing something that already exists in the backend.

Rust, Julia, C++, Carbon, Fortran, Chapel, Haskell, Bend, Python, etc. should not all have to write their own GPU or autodiff backends. Most of these already share compiler optimizations through LLVM or GCC, so let's also share this. Of course, we should still push to use our Rust-specific magic.

Rust Specific Magic?

TODO

How about Safety?

We want all these features to be safe by default, and we are happy not to expose some features if the gain is too small for the safety risk. As an example, Enzyme can compute the derivative with respect to a global. That's probably too niche, and could be discouraged (and unsafe) for Rust.

Extend pubgrub to match cargo's dependency resolution

Metadata
Point of contactJacob Finkelman
Teamscargo
StatusAccepted
Tracking issuerust-lang/rust-project-goals#110

Summary

Implement a standalone library based on pubgrub that models cargo's dependency resolution, and validate its accuracy by testing against crates found on crates.io. This lays the groundwork for improved cargo error messages, extensions for hotly requested features (e.g., better MSRV support, CVE-aware resolution), and support for a richer ecosystem of cargo extensions.

Motivation

Cargo's dependency resolver is brittle and under-tested. Disentangling implementation details, performance optimizations, and user-facing functionality will require a rewrite. Making the resolver a standalone modular library will make it easier to test and maintain.

The status quo

Big changes are required in cargo's resolver: there is lots of new functionality that will require changes to the resolver, and the existing resolver's error messages are terrible. Cargo's dependency resolver solves the NP-hard problem of taking a list of direct dependencies and an index of all available crates and returning an exact list of versions that should be built. This functionality is exposed in cargo's CLI interface as generating/updating a lock file. Nonetheless, any change to the current resolver in situ is extremely treacherous. Because the problem is NP-hard, it is not easy to tell which code changes break load-bearing performance or correctness guarantees. It is difficult to abstract and separate the existing resolver from the code base, because the current resolver relies on concrete datatypes from other modules in cargo to determine whether a set of versions exhibits any of the many ways two crate versions can be incompatible.
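To give a feel for the shape of the problem (a toy illustration only, not pubgrub's actual algorithm or cargo's data model): pick one version per package so that every requirement, direct and transitive, is satisfied. The package names and version numbers below are invented for the example.

```rust
use std::collections::HashMap;

// Toy model: each package has candidate versions, and each version carries
// dependencies expressed as (package, allowed versions). A brute-force
// backtracking search stands in for pubgrub's real algorithm; the point is
// only to show the constraint problem cargo's resolver solves.
type Pkg = &'static str;
type Ver = u32;
type Index = HashMap<Pkg, Vec<(Ver, Vec<(Pkg, Vec<Ver>)>)>>;

fn solve(index: &Index, todo: &[(Pkg, Vec<Ver>)], chosen: &mut HashMap<Pkg, Ver>) -> bool {
    let [(pkg, allowed), rest @ ..] = todo else { return true };
    if let Some(&v) = chosen.get(pkg) {
        // Already picked: the existing choice must satisfy this requirement too.
        return allowed.contains(&v) && solve(index, rest, chosen);
    }
    for (v, deps) in &index[pkg] {
        if !allowed.contains(v) {
            continue;
        }
        chosen.insert(*pkg, *v);
        // Queue this version's own dependencies as further requirements.
        let mut next = rest.to_vec();
        next.extend(deps.iter().cloned());
        if solve(index, &next, chosen) {
            return true;
        }
        chosen.remove(pkg); // backtrack
    }
    false
}

fn main() {
    let mut index: Index = HashMap::new();
    // a@2 needs b@1, but the root requires b@2, so the solver must pick a@1.
    index.insert("a", vec![(2, vec![("b", vec![1])]), (1, vec![])]);
    index.insert("b", vec![(2, vec![]), (1, vec![])]);
    let mut chosen = HashMap::new();
    assert!(solve(&index, &[("a", vec![1, 2]), ("b", vec![2])], &mut chosen));
    assert_eq!(chosen["a"], 1);
    assert_eq!(chosen["b"], 2);
}
```

Even this tiny model exhibits backtracking; the real resolver additionally has to model features, semver ranges, and the many other ways versions can conflict, which is why disentangling it matters.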

The next six months

Develop a standalone library for doing dependency resolution with all the functionality already supported by cargo's resolver. Extensively test this library to ensure maximum compatibility with existing behavior. Prepare for experimental use of this library inside cargo.

The "shiny future" we are working towards

Eventually we should replace the existing entangled resolver in cargo with one based on separately maintained libraries. These libraries would provide simpler and isolated testing environments to ensure that correctness is maintained. Cargo plugins that want to control or understand what lock file cargo uses can interact with these libraries directly without interacting with the rest of cargo's internals.

Design axioms

  • Correct: The new resolver must perform dependency resolution correctly, which generally means matching the behavior of the existing resolver, and switching to it must not break Rust projects.
  • Complete output: The output from the new resolver should be demonstrably correct. There should be enough information associated with the output to determine that it made the right decision.
  • Modular: There should be a stack of abstractions, each one of which can be understood, tested, and improved on its own without requiring complete knowledge of the entire stack from the smallest implementation details to the largest overall use cases.
  • Fast: The resolver can be a slow part of people's workflow. Overall performance must be a high priority and a focus.

Ownership and team asks

Owner: Jacob Finkelman will own and lead the effort.

I (Jacob Finkelman) will be working full time on this effort. I am a member of the Cargo Team and a maintainer of pubgrub-rs.

Integrating the new resolver into Cargo and reaching the shiny future will require extensive collaboration and review from the Cargo Team. However, the next milestones involve independent work exhaustively searching for differences in behavior between the new and old resolvers and fixing them. So only occasional consultation-level conversations will be needed during this proposal.

TaskOwner(s) or team(s)Notes
Implementation work on pubgrub libraryJacob Finkelman
Discussion and moral supportTeam cargo

Outputs

Standalone crates for independent components of cargo's resolver. We have already developed https://github.com/pubgrub-rs/pubgrub for solving the core of dependency resolution and https://github.com/pubgrub-rs/semver-pubgrub for doing mathematical set operations on Semver requirements. The shiny future will involve several more crates, although their exact borders have not yet been determined. Eventually we will also be delivering a -Z pubgrub for testing the new resolver in cargo itself.

Milestones

For all crate versions on crates.io the two resolvers agree about whether there is a solution.

Build a tool that will look at the index from crates.io and for each version of each crate, make a resolution problem out of resolving the dependencies. This tool will save off an independent test case for each time pubgrub and cargo disagree about whether there is a solution. This will not check if the resulting lock files are the same or even compatible, just whether they agree that a lock file is possible. Even this crude comparison will find many bugs in how the problem is presented to pubgrub. This is known for sure, because this milestone has already been achieved.

For all crate versions on crates.io the two resolvers accept the other one's solution.

The tool from the previous milestone will be extended to make sure that the lock file generated by pubgrub can be accepted by cargo's resolver and vice versa. How long will this take? What will it find? There is no way to know. To quote FractalFir: "If I knew where / how many bugs there are, I would have fixed them already. So, providing any concrete timeline is difficult."

For all crate versions on crates.io the performance is acceptable.

There are some crates where pubgrub takes a long time to do resolution, and many more where pubgrub takes longer than cargo's existing resolver. Investigate each of these cases and figure out if performance can be improved either by improvements to the underlying pubgrub algorithm or the way the problem is presented to pubgrub.

Frequently asked questions

If the existing resolver defines correct behavior then how does a rewrite help?

Unless we find critical bugs in the existing resolver, the new resolver and cargo's resolver should be 100% compatible. This means that any observable behavior from the existing resolver will need to be matched in the new resolver. The benefits of this work will come not from changes in behavior, but from a more flexible, reusable, testable, and maintainable code base. For example: the base pubgrub crate solves a simpler version of the dependency resolution problem. This allows for a more structured internal algorithm, which enables complete error messages. It's also general enough to be used not only in cargo but also in other package managers. We already have contributions from the maintainers of uv, who are using the library in production.

Implement "merged doctests" to save doctest time

Metadata
Point of contactGuillaume Gomez
Teamsrustdoc
StatusAccepted
Tracking issuerust-lang/rust-project-goals#111

Guillaume Gomez: https://github.com/GuillaumeGomez

Motivation

Most of the time spent running doctests goes to compiling them. Merging doctests and compiling them together greatly reduces the overall time.
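For context, every fenced block in a doc comment is a doctest. A sketch (the crate name doctest_demo is hypothetical):

```rust
// Today rustdoc compiles the doctest below as its own standalone crate;
// "merged doctests" compile many such tests together into a single crate,
// paying the compilation cost once.

/// Doubles a number.
///
/// ```
/// assert_eq!(doctest_demo::double(21), 42);
/// ```
pub fn double(x: i32) -> i32 {
    x * 2
}

fn main() {
    // The same check the doctest performs:
    assert_eq!(double(21), 42);
}
```

With one crate per doctest, a library with hundreds of examples pays rustc's startup and codegen cost hundreds of times, which is where the savings come from.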

The status quo

The next six months

  • Finish reviewing the pull request
  • Run crater with the feature enabled by default.
  • Merge it.

The "shiny future" we are working towards

Merged doctests.

Design axioms

N/A

Ownership and team asks

Owner: Guillaume Gomez

TaskOwner(s) or team(s)Notes
ImplementationGuillaume Gomez
Standard reviewsTeam rustdoc

Frequently asked questions

None yet.

Make Rustdoc Search easier to learn

Metadata
Point of contactMichael Howell
Teamsrustdoc, rustdoc-frontend
StatusAccepted
Tracking issuerust-lang/rust-project-goals#112

Summary

To make rustdoc's search engine more useful:

  • Respond to some existing feedback.
  • Write blog and forum posts to advertise these new features to the larger community, and seek out feedback to continue the progress.

Motivation

Rustdoc Search is going to be some people's primary resource for finding things. There are a few reasons for this:

  • It's available. Away from the computer and trying to help someone else out from a smartphone? Evaluating Rust before you install anything? Rustdoc outputs web pages, so you can still use it.
  • If you have a pretty good idea of what you're looking for, it's way better than a general search engine. It offers structured features based on Rust, like type-driven search and crate filtering, that aren't available in DuckDuckGo because it doesn't know about them.

The status quo

Unfortunately, while most people know it exists, they don't know about most of what it can do. A lot of people literally ask "Does Rust have anything like Hoogle?", and they don't know that it's already there. We've had other people who didn't see the tab bar, and it doesn't seem like people look under the ? button, either.

Part of the problem is that they just never try.

Ted Scharff: I'd never used the search bar inside the docs before
Ted Scharff: It's because usually the searches inside all of the sites are pretty broken & useless
Ted Scharff: but this site is cool. docs are very well written and search is fast, concise...

Mostly, we've got a discoverability problem.

The next 6 months

  • Implement a feature to show type signatures in type-driven search results, so it's easier to figure out why a result came up (https://github.com/rust-lang/rust/pull/124544).
    • When unintuitive results come up, respond by either changing the algorithm or changing the way results are presented to help them make sense.
    • Do we need to do something to make Levenshtein matches more obvious?
  • Seek out user feedback on Internals.
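On that last point: tolerating typos means ranking results by edit distance. A minimal Levenshtein sketch (an illustration of the idea, not rustdoc's actual implementation):

```rust
// Classic dynamic-programming Levenshtein distance: the minimum number of
// single-character insertions, deletions, or substitutions turning `a`
// into `b`. Search engines use distances like this to rank typo'd queries.
fn levenshtein(a: &str, b: &str) -> usize {
    let (a, b): (Vec<char>, Vec<char>) = (a.chars().collect(), b.chars().collect());
    // prev[j] = distance between a[..i] and b[..j] from the previous row.
    let mut prev: Vec<usize> = (0..=b.len()).collect();
    for (i, ca) in a.iter().enumerate() {
        let mut cur = vec![i + 1];
        for (j, cb) in b.iter().enumerate() {
            let sub = prev[j] + usize::from(ca != cb); // substitution (or match)
            cur.push(sub.min(prev[j + 1] + 1).min(cur[j] + 1)); // vs delete/insert
        }
        prev = cur;
    }
    prev[b.len()]
}

fn main() {
    assert_eq!(levenshtein("kitten", "sitting"), 3);
    assert_eq!(levenshtein("Vec", "Vec"), 0);
}
```

Surfacing *why* a fuzzy match appeared (e.g. highlighting the edited characters) is one way to make such matches more obvious to users.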

Popular stuff should just be made to work, and what's already there can be made more obvious with education and good UI design.

The "shiny future" we are working towards

Rustdoc Search should be a quick, natural way to find things in your dependencies.

Design axioms

The goal is to reach this point without trying to be a better Google than Google is. Rustdoc Search should focus on what it can do that other search engines can't:

  • Rustdoc Search is not magic, and it doesn't have to be.
    • A single crate, or even a single dependency tree, isn't that big. Extremely fancy techniques—beyond simple database sharding and data structures like bloom filters or tries—aren't needed.
    • If you've already added a crate as a dependency or opened its page on docs.rs, there's no point in trying to exploit it with SEO spam (the crate is already on the other side of the airtight hatchway).
    • Rustdoc is completely open source. There are no secret anti-spam filters. Because it only searches a limited set of pre-screened crates (usually just one), it will never need them.
  • Rustdoc knows the Rust language. It can, and should, offer structured search to build on that.

Ownership and team asks

Owner: Michael Howell

This section defines the specific work items that are planned and who is expected to do them, as well as what will be needed from Rust teams. Every row in the table below corresponds either to work done by a contributor or to an ask of a team; the things typically asked of teams are defined in the Definitions section below.

TaskOwner(s) or team(s)Notes
Discussion and moral supportTeam rustdoc
Implementation: show type signature in SERPMichael Howell
Implementation: tweak search algoMichael Howell
Standard reviewsTeam rustdoc-frontend
Design meetingTeam rustdoc-frontend
FCP decision(s)Team rustdoc-frontend
Feedback and testing

Definitions

Definitions for terms used above:

  • Discussion and moral support is the lowest level offering, basically committing the team to nothing but good vibes and general support for this endeavor.
  • Author RFC and Implementation means actually writing the code, document, whatever.
  • Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
  • RFC decisions means reviewing an RFC and deciding whether to accept.
  • Org decisions means reaching a decision on an organizational or policy matter.
  • Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
  • Stabilizations means reviewing a stabilization and report and deciding whether to stabilize.
  • Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
  • Prioritized nominations refers to prioritized lang-team response to nominated issues, with the expectation that there will be some response from the next weekly triage meeting.
  • Dedicated review means identifying an individual (or group of individuals) who will review the changes, as they're expected to require significant context.
  • Other kinds of decisions:
    • Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
    • Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
    • Library API Change Proposal (ACP) describes a change to the standard library.

Frequently asked questions

Search over all of the crates when?

Docs.rs can do it if they want, but Michael Howell isn't signing up for a full-time job dealing with SEO bad actors.

Full-text search when?

That path is pretty rough. Bugs, enormous size, and contentious decisions on how to handle synonyms abound.

Next-generation trait solver

Metadata
Point of contactlcnr
Teamstypes
StatusAccepted
Tracking issuerust-lang/rust-project-goals#113

Summary

In the next 6 months we plan to extend the next-generation trait solver as follows:

  • stabilize the use of the next-generation trait solver in coherence checking
  • use the new implementation in rustdoc and lints where applicable
  • share the solver with rust-analyzer
  • successfully bootstrap the compiler when exclusively using the new implementation and run crater

Motivation

The existing trait system implementation has many bugs, inefficiencies and rough corners which require major changes to its implementation. To fix existing unsound issues, accommodate future improvements, and to improve compile times, we are reimplementing the core trait solver to replace the existing implementations of select and fulfill.

The status quo

There are multiple type system unsoundnesses blocked on the next-generation trait solver: project board. Desirable features such as coinductive trait semantics and perfect derive, where-bounds on binders, and better handling of higher-ranked bounds and types are also stalled due to shortcomings of the existing implementation.

Fixing these issues in the existing implementation is prohibitively difficult, as the required changes are interconnected and require major changes to the underlying structure of the trait solver. The Types Team therefore decided to rewrite the trait solver in-tree, and has been working on it since the end of 2022.
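For readers unfamiliar with coherence checking (the first area slated for stabilization), a minimal illustration, not tied to either solver implementation:

```rust
// Coherence requires that no two impls of the same trait can ever apply to
// the same type. A pair like the commented-out one below is rejected:
//
//     impl<T: Clone> Greet for T { ... }
//     impl Greet for String { ... } // error[E0119]: conflicting implementations
//
// because String is Clone, so both impls would match. The trait solver is
// what proves (or refutes) such overlap. Non-overlapping impls are accepted:
trait Greet {
    fn greet(&self) -> &'static str;
}

impl Greet for String {
    fn greet(&self) -> &'static str {
        "string"
    }
}

impl Greet for u32 {
    fn greet(&self) -> &'static str {
        "u32"
    }
}

fn main() {
    assert_eq!(String::from("hi").greet(), "string");
    assert_eq!(7_u32.greet(), "u32");
}
```

Because coherence must reason about all *possible* types rather than one concrete program, it exercises the solver heavily, which is why it is a natural first stabilization target.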

The next six months

  • stabilize the use of the next-generation trait solver in coherence checking
  • use the new implementation in rustdoc and lints where applicable
  • share the solver with rust-analyzer
  • successfully bootstrap the compiler when exclusively using the new implementation and run crater

The "shiny future" we are working towards

  • we are able to remove the existing trait solver implementation and significantly clean up the type system in general, e.g. removing most calls to normalize in callers by handling unnormalized types in the trait system
  • all remaining type system unsoundnesses are fixed
  • many future type system improvements are unblocked and get implemented
  • the type system is more performant, resulting in better compile times

Design axioms

In order of importance, the next-generation trait solver should be:

  • sound: the new trait solver is sound and its design enables us to fix all known type system unsoundnesses
  • backwards-compatible: the breakage caused by the switch to the new solver should be minimal
  • maintainable: the implementation is maintainable, extensible, and approachable to new contributors
  • performant: the implementation is efficient, improving compile-times

Ownership and team asks

Owner: lcnr

Add'l implementation work: Michael Goulet

TaskOwner(s) or team(s)Notes
Discussion and moral supportTeam types

Stabilize next-generation solver in coherence

TaskOwner(s) or team(s)Notes
Implementationlcnr, Michael Goulet
Standard reviewsTeam types
Standard reviewsTeam rust-analyzer
Stabilization decisionTeam types

Support next-generation solver in rust-analyzer

TaskOwner(s) or team(s)Notes
Implementation (library side)owner and others
Implementation (rust-analyzer side)TBD
Standard reviewsTeam types
Standard reviewsTeam rust-analyzer

Support needed from the project

  • Types team
    • review design decisions
    • provide technical feedback and suggestions
  • rust-analyzer team
    • contribute to integration in Rust Analyzer
    • provide technical feedback to the design of the API

Outputs and milestones

See next few steps :3

Outputs

Milestones

Frequently asked questions

None yet.

Optimizing Clippy & linting

(a.k.a The Clippy Performance Project)

Metadata
Point of contactAlejandra González
Teamsclippy
StatusAccepted
Tracking issuerust-lang/rust-project-goals#114

Summary

This is the formalization and documentation of the Clippy Performance Project, a project first talked about on Zulip, July 2023. As the project consists of several points and is ever-changing, this document also has a dynamic structure and the team can add points.

In short, this is an effort to optimize Clippy and Rust's linting infrastructure, with the aim of making Clippy faster both in CI/CD pipelines and on developers' machines.

Motivation

Clippy can take up to 2.5 times as long as a normal cargo check, and it doesn't need to! Taking so long is expensive both in development time and in real money.

The status quo

Based on some informal feedback [polls][poll-mastodon], it's clear that Clippy is used in lots of different contexts, both in developers' IDEs and outside them.

The usage in IDEs is not as smooth as one might desire or expect when compared to prior art like [Prettier][prettier], [Ruff][ruff], or other tools in the Rust ecosystem such as rustfmt and rust-analyzer.

The other big use-case is as a test before committing or on CI. Optimizing Clippy's performance would lower the cost of these tests.

On GitHub Actions, this excessive time can equal the cost of running cargo check on a Linux x64 32-core machine instead of a Linux x64 2-core machine: a 3.3x cost increase.

The next 6 months

In order to achieve a better performance we want to:

  • Keep working on, and eventually merge rust#125116
  • Improve checking of proc-macros & expansions, maybe by precomputing expanded spans or memoizing the checking functions.
  • Optimize checking for MSRVs and #[clippy::msrv] attributes (probably using static values or precomputing MSRV spans).
  • Migrate applicable lints to use incremental compilation

Apart from these 4 clear goals, any open issue, open PR or merged PRs with the label performance-project are a great benefit.

The "shiny future" we are working towards

The desired outcome is a system that can run on-save without being a hassle to the developer, and that has the minimum possible overhead over cargo check (which would itself also benefit from a subset of these optimizations).

A developer shouldn't have to get a high-end machine to run a compiler swiftly; and a server should not spend more valuable seconds on linting than strictly necessary.

Ownership and team asks

Owner: Alejandra González

TaskOwner(s) or team(s)Notes
ImplementationAlejandra González, Alex Macleod
Standard reviewsTeam clippy

Frequently Asked Questions

[poll-mastodon]: https://tech.lgbt/Alejandra González/112747808297589676
[prettier]: https://github.com/prettier/prettier
[ruff]: https://github.com/astral-sh/ruff

Patterns of empty types

Metadata
Point of contact@Nadrieril
Teamslang
StatusAccepted
Tracking issuerust-lang/rust-project-goals#115

Summary

Introduce an RFC for never patterns or other solutions for patterns involving uninhabited types.

Motivation

The story about pattern-matching is incomplete with regards to empty types: users sometimes have to write unreachable!() for cases they know to be impossible. This is a papercut that we can solve, and would make for a more consistent pattern-matching story.

This is particularly salient as the never type ! is planned to be stabilized in edition 2024.

The status quo

Empty types are used commonly to indicate cases that can't happen, e.g. in error-generic interfaces:

#![allow(unused)]
fn main() {
impl TryFrom<X> for Y {
    type Error = Infallible;
    ...
}
// or
impl SomeAST {
    pub fn map_nodes<E>(self, f: impl FnMut(Node) -> Result<Node, E>) -> Result<Self, E> { ... }
}
// used in the infallible case like:
let Ok(new_ast) = ast.map_nodes::<!>(|node| node) else { unreachable!() };
// or:
let new_ast = match ast.map_nodes::<!>(|node| node) {
    Ok(new_ast) => new_ast,
    Err(never) => match never {},
}
}

and conditional compilation:

#![allow(unused)]
fn main() {
pub struct PoisonError<T> {
    guard: T,
    #[cfg(not(panic = "unwind"))]
    _never: !,
}
pub enum TryLockError<T> {
    Poisoned(PoisonError<T>),
    WouldBlock,
}
}

For the most part, pattern-matching today treats empty types as if they were non-empty. E.g. in the above example, both the else { unreachable!() } above and the Err branch are required.

The unstable exhaustive_patterns feature allows all patterns of empty type to be omitted. It has never been stabilized because it goes against design axiom no. 1, "Pattern semantics are predictable", when interacting with possibly-uninitialized data.

The next six months

The first step is nearly complete: the min_exhaustive_patterns feature is in FCP and about to be stabilized. This covers a large number of use-cases.

After min_exhaustive_patterns, there remains the case of empty types behind references, pointers, and union fields. The current proposal for these is never_patterns; the next steps are to submit the RFC and then finish the implementation according to the RFC outcome.

The "shiny future" we are working towards

The ideal endpoint is that users never have to write code to handle a pattern of empty type.

Design axioms

  • Pattern semantics are predictable: users should be able to tell what data a pattern touches by looking at it. This is crucial when matching on partially-initialized data.
  • Impossible cases can be omitted: users shouldn't have to write code for cases that are statically impossible.

Ownership and team asks

Owner: @Nadrieril

I (@Nadrieril) am putting forward my own contribution for driving this forward, both on the RFC and implementation sides. I am an experienced compiler contributor and have been driving this forward already for several months.

  • I expect to be authoring one RFC, on never patterns (unless it gets rejected and we need a different approach).
    • The feature may require one design meeting.
  • Implementation work is 80% done, which leaves about 80% more to do. This will require reviews from the compiler team, but not more than the ordinary.
TaskOwner(s) or team(s)Notes
Author RFC@Nadrieril
Implementation@Nadrieril
Standard reviewsTeam compiler
Discussion and moral supportTeam lang
Author stabilization reportGoal owner

Note:

  • RFC decisions, design meetings, and stabilization decisions were intentionally not included in the above list of asks. The lang team is not sure it can commit to completing those reviews on a reasonable timeline.

Frequently asked questions

Provided reasons for yanked crates

Metadata
Point of contactRustin
Teamscrates-io, cargo
StatusAccepted
Tracking issuerust-lang/rust-project-goals#101

Summary

Over the next 6 months, we will add support to the registry yank API for providing a reason when a crate is yanked. This reason can then be displayed to users. After this feature has been up and running for a while, we'll open it up to Cargo to support filling in the reason for yanking.

Motivation

When a crate is updated to address a critical issue—such as a fix for a soundness bug or a security vulnerability—it is beneficial to yank previous versions and prompt users to upgrade with a yank reason. Additionally, if a crate is renamed or deprecated, the yank message can provide guidance on the new recommended crate or version. This ensures that users are aware of necessary updates and can maintain the security and stability of their projects.

The status quo

This need was first raised eight years ago, but it was never implemented.

This feature has the following potential use cases:

  1. When a crate is fixed because it would break with the next version of the compiler (e.g. a soundness fix or bug fix), the previous versions can be yanked to nudge users forward.
  2. If a crate is fixed for a security reason, the old versions can be yanked and the new version can be suggested.
  3. If a crate is renamed (or perhaps deprecated) in favor of another, the yank message can indicate what to do in that situation.

Additionally, if we can persist this information to the crates.io index, we can make it available as meta-information to other platforms, such as security platforms like RustSec.

The next 6 months

The primary goal for the next 6 months is to add support to the registry's yank API.

After that, next steps include (these can be done in many different orders):

  • add support on the browser frontend for giving a reason
  • add support on the cargo CLI for giving a reason
  • add reason to the index
  • add support on the cargo CLI for showing the reason

Design axioms

When considering this feature, we need to balance our desire for a perfect, structured yank message with a usable, easy-to-use yank message. We need to start with this feature and leave room for future extensions, but we shouldn't introduce complexity and support for all requirements from the start.

Ownership and team asks

Owner:

  • Rustin: wearing my crates.io team member's hat
  • Rustin: wearing my Cargo regular contributor's hat
TaskOwner(s) or team(s)Notes
ImplementationRustin
Standard reviewsTeam crates-io
Deploy to productionTeam crates-io
Author RFCRustin
RFC decisionTeam cargo, crates-io
Implementation in Cargo sideRustin
Inside Rust blog post inviting feedbackRustin
Stabilization decisionTeam cargo

Frequently asked questions

What might we do next?

We could start with plain text messages, but in the future we could consider designing it as structured data. This way, in addition to displaying it to Cargo users, we can also make it available to more crates-related platforms for data integration and use.

Scalable Polonius support on nightly

Metadata
Point of contactRémy Rakic
Teamstypes
StatusAccepted
Tracking issuerust-lang/rust-project-goals#118

Summary

Implement a native rustc version of the next-generation Polonius borrow-checking algorithm that scales better than the previous datalog implementation.

Motivation

Polonius is an improved version of the borrow checker that resolves common limitations and is needed to support future patterns such as "lending iterators" (see #92985). Its model also prepares us for further improvements in the future.

Some support exists on nightly, but this older prototype has no path to stabilization due to scalability issues. We need an improved architecture and implementation to fix these issues.
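As a concrete illustration of the kind of safe code the current borrow checker rejects (NLL "problem case #3", which the milestones below target), here is a sketch; the rejected version appears in comments, followed by the workaround that compiles today:

```rust
use std::collections::HashMap;

// NLL "problem case #3": the natural version below is rejected by today's
// borrow checker even though it is safe, because the borrow returned from
// `map.get` is treated as lasting for the whole function. Polonius is
// designed to accept it.
//
//     fn get_or_insert(map: &mut HashMap<u32, String>) -> &String {
//         if let Some(v) = map.get(&22) {
//             return v; // error[E0502]: cannot borrow `*map` as mutable
//         }
//         map.insert(22, String::from("hi"));
//         &map[&22]
//     }
//
// The workaround accepted today looks the key up twice:
fn get_or_insert(map: &mut HashMap<u32, String>) -> &String {
    if !map.contains_key(&22) {
        map.insert(22, String::from("hi"));
    }
    &map[&22]
}

fn main() {
    let mut m = HashMap::new();
    assert_eq!(get_or_insert(&mut m), "hi");
    // Second call hits the already-inserted entry.
    assert_eq!(get_or_insert(&mut m), "hi");
}
```

Lending iterators (#92985) hit a related limitation: an item borrowed from `next(&mut self)` keeps the whole iterator borrowed under today's rules.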

The status quo

The next six months

  • Land polonius on nightly

The "shiny future" we are working towards

Stable support for Polonius.

Design axioms

N/A

Ownership and team asks

Owner: Rémy Rakic (lqd)

Other support provided by Amanda Stjerna as part of her PhD.

TaskOwner(s) or team(s)Notes
Design reviewNiko Matsakis
ImplementationRémy Rakic, Amanda Stjerna
Standard reviewsTeam typesMatthew Jasper

Support needed from the project

We expect most support to be needed from the types team, for design, reviews, interactions with the trait solver, and so on. We expect Niko Matsakis, leading the polonius working group and design, to provide guidance and design time, and Michael Goulet and Matthew Jasper to help with reviews.

Outputs and milestones

Outputs

Nightly implementation of polonius that passes NLL problem case #3 and accepts lending iterators (#92985).

Performance should be reasonable enough that we can run the full test suite, do crater runs, and test it on CI, without significant slowdowns. We do not expect to be production-ready yet by then, and therefore the implementation would still be gated under a nightly -Z feature flag.

As our model is a superset of NLL, we expect little to no diagnostics regression, but improvements will probably still be needed for the new errors.

Milestones

MilestoneExpected date
Factoring out higher-ranked concerns from the main pathTBD
Replace parts of the borrow checker with location-insensitive PoloniusTBD
Location-sensitive prototype on nightlyTBD
Verify full test suite/crater pass with location-sensitive PoloniusTBD
Location-sensitive pass on nightly, tested on CITBD

Frequently asked questions

None yet.

Stabilize cargo-script

Metadata
Point of contactEd Page
Teamscargo, lang
StatusAccepted
Tracking issuerust-lang/rust-project-goals#119

Summary

Stabilize support for "cargo script", the ability to have a single file that contains both Rust code and a Cargo.toml.

Motivation

Being able to have a Cargo package in a single file can reduce friction in development and communication, improving bug reports, educational material, prototyping, and development of small utilities.

The status quo

Today, a Cargo package is at minimum two files (Cargo.toml and either main.rs or lib.rs). The Cargo.toml has several required fields.

To share this in a bug report, people resort to

  • Creating a repo and sharing it
  • A shell script that cats out to multiple files
  • Manually specifying each file
  • Under-specifying the reproduction case (likely the most common, due to being the easiest)

To create a utility, a developer will need to run cargo new, update the Cargo.toml and main.rs, and decide on a strategy to run this (e.g. a shell script in the path that calls cargo run --manifest-path ...).
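As implemented on nightly (a sketch following the accepted frontmatter RFC; the filename and dependency are invented for the example, and details may change before stabilization), a single-file package looks like:

```rust
#!/usr/bin/env cargo
---
[dependencies]
regex = "1"
---

fn main() {
    let re = regex::Regex::new(r"\d+").unwrap();
    println!("{}", re.is_match("version 42"));
}
```

The `---` frontmatter holds the manifest that would otherwise live in Cargo.toml; on nightly such a script can be run with `cargo +nightly -Zscript hello.rs` (flag subject to change).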

The next six months

The support is already implemented on nightly. The goal is to stabilize support. With RFC #3502 and RFC #3503 approved, the next steps are being tracked in rust-lang/cargo#12207.

At a high-level, this is

  • Add support to the compiler for the frontmatter syntax
  • Add support in Cargo for scripts as a "source"
  • Polish
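For illustration, a single-file package on nightly looks roughly like this (the dependency and program are illustrative; the `---` frontmatter carrying the manifest is the syntax from RFC #3503, and the file can be run with `cargo +nightly -Zscript`):

```rust
#!/usr/bin/env cargo
---
[dependencies]
regex = "1"
---

// Both the manifest (in the frontmatter above) and the code live in
// one file; no separate Cargo.toml is needed.
fn main() {
    let re = regex::Regex::new(r"\d+").unwrap();
    println!("{}", re.is_match("goal 2024h2"));
}
```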

The "shiny future" we are working towards

Design axioms

  • In the trivial case, there should be no boilerplate. The boilerplate should scale with the application's complexity.
  • A script with a couple of dependencies should feel pleasant to develop without copy/pasting or scaffolding generators.
  • We don't need to support everything that exists today because we have multi-file packages.

Ownership and team asks

Owner: epage

Task | Owner(s) or team(s) | Notes
Implementation | Ed Page
Stabilization decision | Team lang, cargo

Stabilize doc_cfg

Metadata
Point of contact | Guillaume Gomez
Teams | rustdoc
Status | Accepted
Tracking issue | rust-lang/rust-project-goals#120


Motivation

The doc_cfg features would make it possible to provide more information to crate users reading the documentation.
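As a nightly-only sketch of the feature (the crate feature name here is illustrative), a crate author can tell rustdoc which cfg an item requires, and rustdoc renders a banner such as "Available on crate feature `rt` only":

```rust
#![feature(doc_cfg)]

/// Spawns a task on the runtime.
///
/// With `doc_cfg`, rustdoc displays that this item is only
/// available when the `rt` feature is enabled.
#[cfg(feature = "rt")]
#[doc(cfg(feature = "rt"))]
pub fn spawn() {
    /* ... */
}
```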

The status quo

The next six months

  • Merge the RFC.
  • Implement it.
  • Stabilize it.

The "shiny future" we are working towards

Stable support for doc_cfg features.

Design axioms

N/A

Ownership and team asks

Owner: Guillaume Gomez

Task | Owner(s) or team(s) | Notes
Implementation | Guillaume Gomez
RFC decision | Team rustdoc
Standard reviews | Team rustdoc

Outputs and milestones

Outputs

Milestones

Milestone | Expected date
Merge RFC | TBD
Implement RFC | TBD
Stabilize the features | TBD

Frequently asked questions

None yet.

Stabilize parallel front end

Metadata
Point of contact | Sparrow Li
Teams | compiler
Status | Accepted
Tracking issue | rust-lang/rust-project-goals#121

Summary

We will move rustc's parallel front end closer to stability by resolving ICE and deadlock issues, completing the test suite for multithreaded scenarios, and integrating the parallel front end into bootstrap. This fits into our larger goal of improving rustc build times by 20% by leveraging multiple cores and enhancing robustness.

Motivation

The parallel front end has been implemented in nightly, but there are still many problems that prevent it from being stable and used at scale.

The status quo

Many current issues reflect ICE or deadlock problems that occur during the use of parallel front end. We need to resolve these issues to enhance its robustness.

The existing compiler testing framework is not sufficient for the parallel front end, and we need to enhance it to ensure the correct functionality of the parallel front end.

The current parallel front end still has room for further improvement in compilation performance, such as parallelization of HIR lowering and macro expansion, and reduction of data contention under more threads (>= 16).

We can use the parallel front end in bootstrap to alleviate the slow build of the whole project.

Cargo does not provide an option to enable the use of parallel front end, so it can only be enabled by passing rustc options manually.
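Concretely, enabling the parallel front end today means passing the rustc flag by hand on a nightly toolchain, for example:

```shell
# Enable the parallel front end with 8 threads (nightly only).
RUSTFLAGS="-Z threads=8" cargo +nightly build
```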

The next 6 months

  • Solve ICE and deadlock issues (unless they require rare hardware environments or are almost impossible to reproduce).
  • Improve the parallel compilation test framework and enable parallel front end in UI tests.
  • Enable parallel frontends in bootstrap.
  • Continue to improve parallel compilation performance, raising the average speedup from 20% to 25% with 8 cores and 8 threads.
  • Communicate with Cargo team on the solution and plan to support parallel front end.

The "shiny future" we are working towards

The existing rayon-based implementation makes it difficult to fundamentally eliminate the deadlock problem, so we may need a better scheduling design that eliminates deadlocks without affecting performance.

The current compilation process, with GlobalContext as the core of data storage, is not very friendly to the parallel front end. We may try to reduce the granularity of data storage (for example, to modules) to reduce data contention with more threads and improve performance.

Design axioms

The parallel front end should be:

  • safe: Ensure the safe and correct execution of the compilation process
  • consistent: The compilation result should be consistent with single-threaded compilation
  • maintainable: The implementation should be easy to maintain and extend, and not cause confusion to developers who are not familiar with it.

Ownership and team asks

Owner: Sparrow Li and Parallel Rustc WG own this goal

Task | Owner(s) or team(s) | Notes
Implementation | Sparrow Li
Author tests | Sparrow Li
Discussion and moral support | Team compiler

Frequently asked questions

Survey tools suitability for Std safety verification

Metadata
Point of contact | Celina G. Val
Teams | libs
Status | Accepted
Tracking issue | rust-lang/rust-project-goals#126

Summary

Instrument a fork of the standard library (the verify-rust-std repository) with safety contracts, and employ existing verification tools to verify the standard library.

Motivation

The Rust Standard Library is the foundation of portable Rust software. It provides efficient implementations and safe abstractions for the most common general-purpose data structures and operations. To do so, it performs unsafe operations internally.

Despite being constantly battle-tested, the implementation of the standard library has not been formally verified or proven safe. A safety issue in the Standard Library may affect almost all Rust applications, and this effort is the first step to enhance the safety guarantees of the Standard Library, hence, the Rust ecosystem.

The status quo

Rust has a very active and diverse formal methods community that has been developing automated or semi-automated verification tools that can further validate Rust code beyond the guarantees provided by the compiler. These tools can complement Rust's safety guarantees, and allow developers to eliminate bugs and formally prove the correctness of their Rust code.

There are multiple verification techniques, and each has its own strengths and limitations. Some tools like Creusot and Prusti can prove correctness of Rust code, including generic code, but they cannot reason about unsafe Rust, and they are not fully automated.

On the other hand, tools like Kani and Verus are able to verify unsafe Rust, but they have their own limitations, for example, Kani verification is currently bounded in the presence of loops, and it can only verify monomorphic code, while Verus requires an extended version of the Rust language which is accomplished via macros.

Formal verification tools such as Creusot, Kani, and Verus have demonstrated that it is possible to write Rust code that is amenable to automated or semi-automated verification. For example, Kani has been successfully applied to different Rust projects, such as: Firecracker microVM, s2n-quic, and Hifitime.

Applying those techniques to the Standard library will allow us to assess these different verification techniques, identify where all these tools come short, and help us guide research required to address those gaps.

Contract language

Virtually every verification tool has its own contract specification language, which makes it hard to combine tools to verify the same system. Specifying a contract language is outside the scope of this project. However, we plan to adopt the syntax proposed in this MCP, keep our fork synchronized with progress made to the compiler contract language, and help assess its suitability for verification.

This will also allow us to contribute back the contracts added to the fork.

Repository Configuration

Most of the work for this project will be developed on top of the verify-rust-std repository. This repository has a subtree of the Rust library folder, which is the verification target. We have already integrated Kani in CI, and we are in the process of integrating more tools.

This repository also includes "Challenges", which are verification problems that we believe are representative of different verification targets in the Standard library. We hope that these challenges will help contributors to narrow down which parts of the standard library they can verify next. New challenges can also be proposed by any contributor.

The next 6 months

First, we will instrument some unsafe functions of the forked Rust Standard Library with function contracts, and safe abstractions with safety type invariants.

Then we will employ existing verification tools to verify that the annotated unsafe functions are in fact safe as long as their contract preconditions hold, and we will also check that any postconditions are respected. With that in place, we will work on proving that safe abstractions do not violate any safety contract and do not leak unsafe values through their public interfaces.

Type invariants will be employed to verify that unsafe value encapsulation is strong enough to guarantee the safety of the type interface. Any method should be able to assume the type invariant, and it should also preserve the type invariant. The contracts of unsafe methods must be strong enough to guarantee that the type invariant is also preserved at the end of the call.

Finally, we hope to contribute upstream contracts and type invariants added to this fork using the experimental contract support proposed in this MCP.

This is open source and very much open to contributions of tools/techniques/solutions. We introduce problems (currently phrased as challenges) that we believe are important to the Rust and verification communities. These problems can be solved by anyone.
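As a plain-Rust sketch of the idea (this is not the MCP contract syntax; the type and checks are illustrative), a type invariant plus pre/postconditions on a method can be approximated today with debug_assert!:

```rust
// Illustrative only: an invariant-carrying safe abstraction, with the
// contract checks written as debug_assert! calls in place of the
// attribute syntax proposed in the MCP.
pub struct Even(i32); // type invariant: the wrapped value is always even

impl Even {
    fn invariant(&self) -> bool {
        self.0 % 2 == 0
    }

    /// Safe constructor: establishes the invariant or refuses the value.
    pub fn new(n: i32) -> Option<Even> {
        if n % 2 == 0 { Some(Even(n)) } else { None }
    }

    /// Every method may assume the invariant on entry (precondition)
    /// and must re-establish it on exit (postcondition).
    pub fn double_plus_two(&mut self) {
        debug_assert!(self.invariant()); // precondition
        self.0 = self.0 * 2 + 2;
        debug_assert!(self.invariant()); // postcondition
    }

    pub fn get(&self) -> i32 {
        self.0
    }
}
```

A verification tool would aim to prove these checks can never fire, rather than merely testing them at runtime.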

The "shiny future" we are working towards

We are working towards the enhancement of Rust verification tools, so they can eventually be incorporated into the regular Rust development cycle for code that requires the use of unsafe Rust.

The Rust Standard Library is the perfect candidate given its blast radius and its extensive usage of unsafe Rust to provide performant abstractions.

Design axioms

  • No runtime penalty: Instrumentation must not affect the standard library runtime behavior, including performance.
  • Automated Verification: Our goal is to verify the standard library implementation. Given how quickly the standard library code evolves, automated verification is needed to ensure new changes preserve the properties previously verified.
  • Contract as code: Keeping the contract language and specification as close as possible to Rust syntax and semantics will lower the barrier for users to understand and be able to write their own contracts.

Ownership and team asks

Owner: Celina G. Val

Task | Owner(s) or team(s) | Notes
Discussion and moral support | Team libs
Standard reviews | Team libs | We would like to contribute upstream the contracts added to the fork.
Problem proposals | Help wanted
Fork maintenance | Celina G. Val, Jaisurya Nanduri
Fork PR reviews | Own committee | We are gathering a few contributors with expert knowledge.
Instrumentation and verification | Help wanted

Definitions

Definitions for terms used above:

  • Discussion and moral support is the lowest level offering, basically committing the team to nothing but good vibes and general support for this endeavor.
  • Standard reviews refers to reviews for PRs against the Rust repository; these PRs are not expected to be unduly large or complicated.
  • Problem proposals refers to creating a scoped task that involves verifying a chunk of the standard library.
  • Fork PR reviews means a group of individuals who will review the changes made to the fork, as they're expected to require significant context. Besides contracts, these changes may include extra harnesses, lemmas, ghost-code.
  • Fork maintenance means configuring CI, performing periodic fork updates from upstream, and tool integration.
  • Instrumentation and verification is the work of specifying contracts, invariants, and verifying a specific part of the standard library.

Frequently asked questions

Testing infra + contributors for a-mir-formality

Metadata
Point of contact | Niko Matsakis
Teams | types
Status | Accepted
Tracking issue | rust-lang/rust-project-goals#122

Summary

The goal for a-mir-formality this year is to bootstrap it as a live, maintained project:

  • Achieve 2 regular contributors from T-types in addition to Niko Matsakis
  • Support fuzz testing and/or the ability to test against rustc

Motivation

The status quo

Most communication and definition of Rust's type/trait system today takes place through informal argument and with reference to compiler internals. a-mir-formality offers a model of Rust at a much higher level, but it remains very incomplete compared to Rust and, thus far, it has been primarily developed by Niko Matsakis.

The next six months

The goal for a-mir-formality this year is to bootstrap it as a live, maintained project:

  • Achieve 2 regular contributors from T-types in addition to Niko Matsakis
  • Support fuzz testing and/or the ability to test against rustc

The "shiny future" we are working towards

The eventual goal is for a-mir-formality to serve as the official model of how the Rust type system works. We have found that having a model enables us to evaluate designs and changes much more quickly than trying to do everything in the real compiler. We envision a-mir-formality being updated with new features prior to stabilization which will require it to be a living codebase with many contributors. We also envision it being tested both through fuzzing and by comparing its results to the compiler to detect drift.

Design axioms

  • Designed for exploration and extension by ordinary Rust developers. Editing and maintaining formality should not require a PhD. We prefer lightweight formal methods over strong static proof.
  • Focused on Rust's static checking. There are many things that a-mir-formality could model. We are focused on those things that we need to evaluate Rust's static checks. This includes the type system and trait system.
  • Clarity over efficiency. Formality's codebase is only meant to scale up to small programs. Efficiency is distinctly secondary.
  • The compiler approximates a-mir-formality, a-mir-formality approximates the truth. Rust's type system is Turing Complete and cannot be fully evaluated. We expect the compiler to have safeguards (for example, overflow detection) that may be more conservative than those imposed by a-mir-formality. In other words, formality may accept some programs the compiler cannot evaluate for practical reasons. Similarly, formality will have to make approximations relative to the "platonic ideal" of what Rust's type system would accept.

Ownership and team asks

Owner: Niko Matsakis

We will require participation from at least 2 other members of T-types. Current candidates are lcnr + compiler-errors.

Task | Owner(s) or team(s) | Notes
Implementation | Niko Matsakis, lcnr, and others
Standard reviews | Team types

Frequently asked questions


Use annotate-snippets for rustc diagnostic output

Metadata
Point of contact | Scott Schafer
Teams | compiler
Status | Accepted
Tracking issue | rust-lang/rust-project-goals#123

Summary

Switch to annotate-snippets for rendering rustc's output, with no loss of functionality or visual regressions.

Motivation

Cargo has been adding its own linting system, where it has been using annotate-snippets to try to match Rust's output. This has led to duplicate code between the two, increasing the overall maintenance load. Having one renderer that produces Rust-like diagnostics will ensure a consistent style between Rust and Cargo, as well as any other tools with similar requirements such as miri, and should lower the overall maintenance burden by rallying behind a single unified solution.

The status quo

Currently rustc has its own Emitter that encodes the theming properties of compiler diagnostics. It has to handle all of the intricacies of terminal support (optional color, querying terminal width and adapting output), layout (span and label rendering logic), and the presentation of different levels of information. Any tool that wants to approximate rustc's output for its own purposes needs to use a third-party crate that diverges from rustc's output, like annotate-snippets or miette. Any improvements or bugfixes contributed to those libraries are not propagated back to rustc. Because the emitter is part of the rustc codebase, the barrier to entry for new contributors is kept artificially higher than it otherwise would be.

annotate-snippets is already part of the rustc codebase, but it is disabled by default, doesn't have extensive testing, and there's no way of enabling this output through Cargo, which limits how many users can actually make use of it.
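For context, the crate exposes a builder-style API along these lines (a sketch based on recent annotate-snippets releases; the exact builder names and the example source/span are illustrative and may differ between versions):

```rust
use annotate_snippets::{Level, Renderer, Snippet};

fn main() {
    let source = r#"fn main() { let x: i32 = "hi"; }"#;
    let message = Level::Error.title("mismatched types").snippet(
        Snippet::source(source)
            .origin("example.rs")
            .line_start(1)
            .annotation(Level::Error.span(25..29).label("expected `i32`, found `&str`")),
    );
    // Renders rustc-style output: `error: mismatched types` with the
    // annotated span underlined beneath the source line.
    println!("{}", Renderer::plain().render(message));
}
```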

The next 6 months

  • annotate-snippets rendered output reaches full parity (modulo reasonable non-significant divergences) with rustc's output
  • rustc fully uses annotate-snippets for its output.

The "shiny future" we are working towards

The outputs of rustc and cargo are fully using annotate-snippets, with no regressions to the rendered output. annotate-snippets grows its feature set, like support for more advanced rendering formats or displaying diagnostics with more than ASCII-art, independently of the compiler development cycle.

Design axioms


  • Match rustc's output: The output of annotate-snippets should match rustc's, modulo reasonable non-significant divergences
  • Works for Cargo (and other tools): annotate-snippets is meant to be used by any project that would like "Rust-style" output, so it should be designed to work with any project, not just rustc.

Ownership and team asks

Owner: Esteban Kuber, Scott Schafer


Task | Owner(s) or team(s) | Notes
Standard reviews | Team compiler
Top-level Rust blog post inviting feedback |

Reach output parity for rustc/annotation-snippets

Task | Owner(s) or team(s) | Notes
Port a subset of rustc's UI tests | Scott Schafer
Make list of current unaddressed divergences | Scott Schafer
Address divergences | Scott Schafer

Initial use of annotate-snippets

Task | Owner(s) or team(s) | Notes
Update annotate-snippets to latest version |
Teach cargo to pass annotate-snippets flag | Esteban Kuber
Add UI test mode comparing new output |
Switch default nightly rustc output |

Production use of annotate-snippets

Task | Owner(s) or team(s) | Notes
Switch default rustc output |
Release notes |
Switch UI tests to only check new output |
Dedicated reviewer | Team compiler | Esteban Kuber will be the reviewer

Frequently asked questions


User-wide build cache

Metadata
Point of contact | Ed Page
Teams | cargo
Status | Invited
Tracking issue | rust-lang/rust-project-goals#124

Summary

Extend Cargo's caching of intermediate artifacts across a workspace to caching them across all workspaces of the user.

Motivation

The primary goal of this effort is to improve build times by reusing builds across projects.

Secondary goals are

  • Reduce disk usage
  • More precise cross-job caching in CI

The status quo

When Cargo performs a build, it will build the package you requested and all dependencies individually, linking them in the end. These build results (intermediate build artifacts) and the linked result (final build artifact) are stored in the target-dir, which is per-workspace by default.

Ways cargo will try to reuse builds today:

  • On a subsequent build, Cargo tries to reuse these build results by "fingerprinting" the inputs to the prior build and checking if that fingerprint has changed.
  • When dependencies are shared by the host (build.rs, proc-macros) and the platform-target, and the platform-target is the host, Cargo will attempt to share host/target builds

Some users try to get extra cache reuse by assigning all workspaces to use the same target-dir.

  • Cross-project conflicts occur because this shares both intermediate (generally unique) and final build artifacts (might not be unique)
  • cargo clean will clear the entire cache for every project
  • Rebuild churn from build inputs, like RUSTFLAGS, that cause a rebuild but aren't hashed into the file path

In CI, users generally have to declare which directory should be cached between jobs. This directory will be compressed and uploaded at the end of the job. If the next job's cache key matches, the tarball will be downloaded and decompressed. If too much is cached, the time for managing the cache can dwarf the benefits of the cache. Some third-party projects exist to help manage cache size.

The next 6 months

Add support for user-wide intermediate artifact caching

  • Re-work target directory so each intermediate artifact is in a self-contained directory
    • Develop and implement transition path for tooling that accesses intermediate artifacts
  • Adjust cargo build to
    • Hash all build inputs into a user-wide hash key
    • If hash key is present, use the artifacts straight from the cache, otherwise build it and put it in the cache
    • Limit this to immutable packages ("non-local" in Cargo terms, like registry and git dependencies)
    • Limit this to idempotent packages (can't depend on proc-macro, can't have a build.rs)
    • Evaluate risks and determine how we will stabilize this (e.g. unstable to stable, opt-in to opt-out to only on)
  • Track intermediate build artifacts for garbage collection
  • Explore
    • Idempotence opt-ins for build.rs or proc-macros until sandboxing solutions can determine the level of idempotence.
    • a CLI interface for removing anything in the cache that isn't from this CI job's build, providing more automatic CI cache management without third-party tools.

Compared to pre-built binaries, this is adaptive to what people use

  • feature flags
  • RUSTFLAGS
  • dependency versions
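The "hash all build inputs" step above can be sketched as follows (a minimal illustration, not Cargo's actual implementation; the function name and the set of inputs are hypothetical, and a real key would cover many more inputs, such as the compiler version and profile settings):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Combine the build inputs for one intermediate artifact into a single
/// user-wide cache key. Any change to the feature set, RUSTFLAGS, or a
/// dependency's key yields a different key, and therefore a cache miss.
fn cache_key(
    name: &str,
    version: &str,
    features: &[&str],
    rustflags: &str,
    dep_keys: &[u64], // cache keys of the dependency tree
) -> u64 {
    let mut h = DefaultHasher::new();
    name.hash(&mut h);
    version.hash(&mut h);
    features.hash(&mut h);
    rustflags.hash(&mut h);
    dep_keys.hash(&mut h);
    h.finish()
}
```

This also illustrates the risk discussed below: because the dependency keys feed into the hash, any mismatch anywhere in the tree changes the key and defeats reuse.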

A risk is that this won't help as many people as they hope because being able to reuse caches between projects will depend on the exact dependency tree for every intermediate artifact. For example, when building a proc-macro

  • unicode-ident has few releases, so it's likely this will get heavy reuse
  • proc-macro2 has many releases and depends on unicode-ident
  • quote has many releases and depends on proc-macro2 and unicode-ident
  • syn has many releases and depends on proc-macro2, unicode-ident, and optionally on quote

With syn being a very heavy dependency, if it or any of its dependency versions are mismatched between projects, the user won't get shared builds of syn.

See also cargo#5931.

The "shiny future" we are working towards

The cache lookup will be extended with plugins to read and/or write to different sources. Open source projects and companies can have their CI read from and write to their cache. Individuals who trust the CI can then configure their plugin to read from the CI cache.

A cooperating CI service could provide their own plugin that, instead of caching everything used in the last job and unpacking it in the next, downloads only the entries that will be needed for the current build (e.g. say a dependency changed) and uploads only the cache entries that were freshly built. Fine-grained caching like this would save the CI service bandwidth, storage, and the compute time spent copying, decompressing, and compressing the cache. Users would have faster CI times and save money on their CI service, minus any induced demand that faster builds create.

On a different note, as sandboxing efforts improve, we'll have precise details on the inputs for build.rs and proc-macros and can gauge when there is idempotence (and verify the opt-in mentioned earlier).

Design axioms


Ownership and team asks

Owner: Help wanted

Task | Owner(s) or team(s) | Notes
Implementation | Goal owner
Standard reviews | Team cargo
Mentoring and guidance | Ed Page
Design meeting | Team cargo

Frequently asked questions

Why not pre-built packages?

Pre-built packages require guessing

  • CPU Architecture
  • Feature flags
  • RUSTFLAGS
  • Dependency versions

If there are any mismatches there, then the pre-built package can't be used.

A build cache can be populated with pre-built packages and react to the unique circumstances of the user.

Why not sccache?

Tools like sccache try to infer the inputs for hashing a cache key from command-line arguments. This effort instead reuses the extra knowledge Cargo already has to generate more accurate cache keys.

If this is limited to immutable, idempotent packages, is this worth it?

In short, yes.

First, this includes an effort to allow packages to declare themselves as idempotent. Longer term, we'll have sandboxing to help infer / verify idempotence.

If subtle dependency changes prevent reuse across projects, is this worth it?

In short, yes.

This is a milestone on the way to remote caches. Remote caches allow access to CI build caches for the same project you are developing, allowing full reuse at the cost of network access.

Not accepted

This section contains goals that were proposed but ultimately not accepted, either for want of resources or consensus. In many cases, narrower versions of these goals were accepted.

Contracts and Invariants

Metadata
Point of contact | Felix Klock
Teams | lang, libs, compiler
Status | Not accepted

Motivation

We wish to extend Rust in three ways: First, we want to extend the Rust language to enable Rust developers to write predicates, called "contracts", and attach them to specific points in their program. We intend for this feature to be available to all Rust code, including that of the standard library. Second, we want to extend the Rust crate format such that the contracts for a crate can be embedded and then later extracted by third-party tools. Third, we want to extend project-supported Rust compiler and interpreter tools, such as rustc and miri, to compile the code in a mode that dynamically checks the associated contracts (note that such dynamic checking might be forced to be incomplete for some contracts). Examples of contracts we envision include: pre- and post-conditions on Rust methods; representation invariants for abstract data types; and loop invariants on for and while loops.

Our motivation for this is that contracts are key for reasoning about software correctness. Formal verification tools such as Creusot, Kani, and Verus have demonstrated that it is possible to write Rust code that is coupled with automated verification mechanisms. But furthermore, we assert that contracts can provide value even if we restrict our attention to dynamic checking: By having a dedicated construct for writing method specifications and invariants formally, we will give our tools new avenues for testing our programs in an automated fashion, similar to the benefits provided by fuzzing.

The status quo

Currently, if you want to specify the behavior of a Rust method and check that the specification is correct, you can either attempt to construct a test suite that covers the entirety of your specification, or you can manually embed contract-like predicates into your code. Embedding contract-like predicates is typically done via variations of either 1. assert!, 2. debug_assert!, or 3. similar cfg-guarded code sequences that abort/panic when a predicate fails to hold.

All of the existing options are limited.

First, most specifications rely on quantified forms, such as "for all X, P(X) implies Q(X)." The "for all" quantifier needs some language support to be expressed; it cannot be written as executable Rust code (except as a loop over the domain, which is potentially infinite).
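For instance, the closest executable approximation of "for all X, P(X) implies Q(X)" is an exhaustive loop over a bounded slice of the domain (function name and signature are illustrative):

```rust
/// Check "for all x in domain, p(x) implies q(x)" by exhaustive
/// enumeration. This only works when the domain is finite and small;
/// a true universal quantifier needs language support.
fn check_forall<I>(domain: I, p: impl Fn(i64) -> bool, q: impl Fn(i64) -> bool) -> bool
where
    I: IntoIterator<Item = i64>,
{
    domain.into_iter().all(|x| !p(x) || q(x))
}
```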

Second, directly expressing contracts as an assertion mixes it in with the rest of the code, which makes it difficult or impossible for third-party verification tools to extract the contracts in order to reason about them.

As an example for why a tool might want to extract the contracts, the Kani model checker works by translating a whole program (including its calls to library code) into a form that is passed to an off-the-shelf model checker. Kani would like to use contracts as a way to divide-and-conquer the verification effort. The API for a method is abstracted by its associated contract. Instead of reasoning about the whole program, it now has two subproblems: prove that the method on its own satisfies its associated contract, and in the rest of the program, replace calls to that method by the range of behaviors permitted by the contract.

Third: the Racket language has demonstrated that when you have dynamic dispatch (via higher-order functions or OOP), then assertions embedded in procedure bodies are a subpar way of expressing specifications. This is because when you compose software components, it is non-trivial to take an isolated assertion failure and map it to which module was actually at fault. Having a separate contract language might enable new tools to record enough metadata to do proper "blame tracking." But to get there, we first have to have a way to write contracts down in the first place.

The next six months

  1. Develop a contract predicate sublanguage, suitable for interpretation by rustc, miri, and third-party verification tools.

  2. Extend Rust compiler to enable contract predicates to be attached to items and embedded in Rust crate rlib files.

  3. Work with wg-formal-methods (aka the "Rust Formal Methods Interest Group") to ensure that the embedded predicates are compatible with their tools.

  4. Work with Lang and Libs team on acceptable surface-level syntax for contracts. In particular, we want contracts to be used by the Rust standard library. (At the very least, for method pre- and post-conditions; I can readily imagine, however, also attaching contracts to unsafe { ... } blocks.)

  5. Extend miri to evaluate contract predicates. Add primitives for querying memory model to contract language, to enable contracts that talk about provenance of pointers.
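To make item 4 concrete, surface syntax along these lines is one possibility (purely hypothetical, and not a committed design; settling the real syntax with the Lang and Libs teams is exactly the work described above):

```
// Hypothetical surface syntax -- not a committed design.
#[requires(!buf.is_empty())]      // pre-condition, owed by the caller
#[ensures(result <= buf.len())]   // post-condition, owed by the implementation
fn fill(buf: &mut [u8]) -> usize { /* ... */ }
```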

The "shiny future" we are working towards

My shiny future is one where people "naturally" write Rust crates that can be combined with distinct dynamic-validation and verification tools. Today, if you want to use any verification tool, you usually have to pick one and orient your whole code base around it. (E.g., third-party verification tools often ship their own rewrite of a subset of the Rust standard library, if only so that they can provide the contracts that our standard library is missing.)

But in my shiny future, people get to reuse the majority of their contracts and just "plug in" different dynamic-validation and static-verification tools, and all the tools know how to leverage the common contract language that is built into Rust.

Design axioms

Felix presented these axioms as "Shared Values" at the 2024 Rust Verification Workshop.

1. Purpose: Specification Mechanism first, Verification Mechanism second

Contracts have proven useful under the "Design by Contract" philosophy, which usually focuses on pre-, post-, and frame conditions (also known as "requires", "ensures", and "modifies" clauses in systems such as the Java Modeling Language). In other words, contracts attach predicates to the API boundaries of code, which makes them a specification mechanism.

There are other potential uses for attaching predicates to points in the code, largely for encoding formal correctness arguments. My main examples of these are Representation Invariants, Loop Invariants, and Termination Measures (aka "decreasing functions").

In an ideal world, contracts would be useful for both purposes. But if we have to make choices about what to prioritize, we should focus on the things that make contracts useful as an API specification mechanism. In my opinion, API specification is the use-case that is going to benefit the broadest set of developers.

2. Contracts should be (semi-)useful out-of-the-box

This has two parts:

Anyone can eat: I want any Rust developer to be able to turn on "contract checking" in some form, without having to change their toolchain or install a third-party tool, and get some utility from the result. (It's entirely possible that contracts become even more useful when used in concert with a suitable third-party tool; that's a separate matter.)

Anyone can cook: Any Rust developer can also add contracts to their own code, without having to change their toolchain.

3. Contracts are not just assertions

Contracts are meant to enable modular reasoning.

Contracts have both a dynamic semantics and a static semantics.

In an ideal dynamic semantics, a broken contract will identify which component is at fault for breaking the contract. (We do acknowledge that precise blame assignment becomes non-trivial with dynamic dispatch.)

In an ideal static semantics, contracts let theorem provers, instead of reasoning about F(G) as a whole, construct independent correctness proofs for F(...) and ... G ....

4. Balance accessibility over power

For accessibility to the developer community, Rust contracts should strive for a syntax that is, or closely matches, the syntax of Rust code itself. Deviations from existing syntax or semantics must meet a high bar to be accepted into the contract language.

But some deviations should be possible, if justified by their necessity for correct expression of specifications. Contracts may need forms that are not valid Rust code. For example, for-all quantifiers will presumably need to be supported, and will likely have a dynamic semantics that is necessarily incomplete compared to an idealized static semantics used by a verification tool. (Note that middle grounds exist here, such as adding a forall(|x: Type| { pred(x) }) intrinsic that is fine from a syntax point of view and is only troublesome in terms of what semantics to assign to it.)

Some expressive forms might be intentionally unavailable to normal object code. For example, some contracts may want to query provenance information on pointers, which would make such contracts unevaluatable in rustc object code (and then one would be expected to use miri or similar tools to get checking of such contracts).

5. Accept incompleteness

Not all properties of interest can be checked at runtime; similarly, not all statements can be proven true or false.

Full functional correctness specifications are often not economically feasible to develop and maintain.

We must accept limitations on both dynamic validation and static verification strategies, and must choose our approximations accordingly.

An impoverished contract system may still be useful for specifying a coarser range of properties (such as invariant maintenance, memory safety, panic-freedom).

6. Embrace tool diversity

Different static verification systems require or support differing levels of expressiveness. And the same is true for dynamic validation tools! (E.g., consider injecting assertions into code via rustc, versus interpreters like miri, or binary instrumentation via valgrind.)

An ideal contract system needs to deal with this diversity in some manner. For example, we may need to allow third-party tools to swap in different contracts (and then also have to meet some added proof obligation to justify the swap).

7. Verification cannot be bolted on, ... but validation can

In general, code must be written with verification in mind as one of its design criteria.

We cannot expect to add contracts to arbitrary code and be able to get it to pass a static verifier.

This does not imply that contracts must be useless for arbitrary code. Dynamic contract checks have proven useful for the Racket community.

Racket development style: add more contracts to the code when debugging (including, but not limited to, contract failures)

A validation mechanism can be bolted-on after the fact.

Ownership and team asks

Owner: pnkfelix

pnkfelix has been working on the Rust compiler since before Rust 1.0; he was co-lead of the Rust compiler team from 2019 until 2023. pnkfelix has taken on this work as part of a broader interest in enforcing safety and correctness properties for Rust code.

celinval is also assisting. celinval is part of the Amazon team producing the Kani model checker for Rust. The Kani team has been using contracts as an unstable feature in order to enable modular verification; you can read more details about it on Kani's blog post.

Support needed from the project

  • Compiler: We expect to be authoring three kinds of code: 1. unstable surface syntax for expressing contract forms, 2. new compiler intrinsics that do contract checks, and 3. extensions to the rlib format to allow embedding of contracts in a readily extractable manner. We expect that we can acquire review capacity via AWS-employed members of compiler-contributors; the main issue is ensuring that the compiler team (and project as a whole) is on board for the extensions as envisioned.

  • Libs-impl: We will need libs-impl team engagement to ensure we design a contract language that the standard library implementors are willing to use. To put it simply: If we land support for contracts without uptake within the Rust standard library, then the effort will have failed.

  • Lang: We need approval for a lang team experiment to design the contract surface language. However, we do not expect this surface language to be stabilized in 2024, and therefore language team involvement can be restricted to "whomever is interested in the effort." In addition, it seems likely that at least some of the contracts work will dovetail with the ghost-code initiative.

  • WG-formal-methods: We need engagement with the formal-methods community to ensure our contract system is serving their needs.

  • Stable-MIR: Contracts and invariants both require evaluation of predicates, and linking those predicates with intermediate states of the Rust abstract machine. Such predicates and their linkage with the code itself can be encoded in various ways. While this goal proposal does not commit to any particular choice of encoding, we want to avoid introducing unnecessary coupling with compiler-internal structures. If Stable-MIR can be made capable of meeting the technical needs of contracts, then it may be a useful option to consider in the design space of predicate encoding and linkage.

  • Miri: We would like assistance from the miri developers on the right way to extend miri to have configurable contract-checking (i.e. to equip miri with enhanced contract checking that is not available in normal object code).

Outputs and milestones

Outputs

Rust standard library ships with contracts in the rlib (but not built into the default object code).

Rustc has unstable support for embedding dynamic contract-checks into Rust object code.

Some dynamic tool (e.g. miri or valgrind) can dynamically check contracts whose bodies are not embedded into the object code.

Some static verification tool (e.g. Kani) leverages contracts shipped with Rust standard library.

Milestones

Unstable syntactic support for contracts in Rust programs (at API boundaries at bare minimum, but hopefully also at other natural points in a code base.)

Support for extracting contracts for a given item from an rlib.

Frequently asked questions

Q: How do contracts help static verification tools?

Answer: Once we have a contract language built into rustc, we can include its expressions as part of the compilation pipeline, turning them into HIR, THIR, MIR, et cetera.

For example, we could add contract-specific intrinsics that map to new MIR instructions. Then tools can decide to interpret those instructions. rustc, on its own, can decide whether it wants to map them to LLVM, or into valgrind calls, et cetera. (Or the compiler could throw them away; but: unused = untested = unmaintained.)

Q: Why do you differentiate between the semantics for dynamic validation vs static verification?

See next question for an answer to this.

Q: How are you planning to dynamically check arbitrary contracts?

A dynamic check of the construct forall(|x:T| { … }) sounds problematic for most T of interest.

pnkfelix's expectation is that we would not actually support forall(|x:T| ...) in a dynamic context, at least not in the general case of arbitrary T.

pnkfelix's current favorite solution for cracking this nut: a new form, forall!(|x:T| suchas: [x_expr1, x_expr2, …] { … }), whose semantics is: "the predicate must hold for all T, but in particular, we hint to the dynamic semantics that it can draw from the given sample population denoted by x_expr1, x_expr2, etc."

Static tools can ignore the sample population, while dynamic tools can use the sample population directly, or feed them into a fuzzer, etc.
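A minimal sketch of how a dynamic tool might treat such a hint (the forall! form itself is hypothetical; the helper below is an invented stand-in for illustration):

```rust
/// Invented stand-in for a dynamic interpretation of the hypothetical
/// `forall!(|x: T| suchas: [x_expr1, x_expr2, ...] { ... })` form: statically
/// the predicate is claimed for all values of T, but dynamically we only
/// check it over the hinted sample population.
fn forall_sampled<T>(samples: &[T], pred: impl Fn(&T) -> bool) -> bool {
    samples.iter().all(|x| pred(x))
}
```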

Q: Don't formal specifications need stuff like unbounded arithmetic?

Some specifications benefit from using constructs like unbounded integers, or sequences, or sets. (Especially important for devising abstraction functions/relations to describe the meaning of a given type.)

Is this in conflict with “Balance accessibility over power”?

pnkfelix sees two main problems here: 1. Dynamic interpretation may incur unacceptably high overhead. 2. Freely copying terms (i.e. ignoring ownership) is sometimes useful.

Maybe the answer is that some contract forms simply cannot be interpreted via the Rust abstract machine. And to be clear: That is not a failure! If some forms can only be checked in the context of miri or some other third-party tool, so be it.

Q: What about unsafe code?

pnkfelix does not know the complete answer here.

Some dynamic checks would benefit from access to memory model internals.

But in general, checking the correctness of an unsafe abstraction needs type-specific ghost state (to model permissions, etc.). We are leaving this for future work; it may or may not be resolved this year.

Experiment with relaxing the Orphan Rule

Metadata
Point of contactNiko Matsakis
Teamslang, types
StatusNot accepted

Summary

Experimental implementation and draft RFCs to relax the orphan rule

Motivation

Relax the orphan rule, in limited circumstances, to allow crates to provide implementations of third-party traits for third-party types. The orphan rule averts one potential source of conflicts between Rust crates, but its presence also creates scaling issues in the Rust community: it prevents providing a third-party library that integrates two other libraries with each other, and instead requires convincing the author of one of the two libraries to add (optional) support for the other, or requires using a newtype wrapper. Relaxing the orphan rule, carefully, would make it easier to integrate libraries with each other, share those integrations, and make it easier for new libraries to garner support from the ecosystem.

The status quo

Suppose a Rust developer wants to work with two libraries: lib_a providing trait TraitA, and lib_b providing type TypeB. Due to the orphan rule, if they want to use the two together, they have the following options:

  • Convince the maintainer of lib_a to provide impl TraitA for TypeB. This typically involves an optional dependency on lib_b. This usually only occurs if lib_a is substantially less popular than lib_b, or the maintainer of lib_a is convinced that others are likely to want to use the two together. This tends to feel "reversed" from the norm.

  • Convince the maintainer of lib_b to provide impl TraitA for TypeB. This typically involves an optional dependency on lib_a. This is only likely to occur if lib_a is popular, and the maintainer of lib_b is convinced that others may want to use the two together. The difficulty in advocating this, scaled across the community, is one big reason why it's difficult to build new popular crates built around traits (e.g. competing serialization/deserialization libraries, or competing async I/O traits).

  • Vendor either lib_a or lib_b into their own project. This is inconvenient, adds maintenance costs, and isn't typically an option for public projects intended for others to use.

  • Create a newtype wrapper around TypeB, and implement TraitA for the wrapper type. This is less convenient, propagates throughout the crate (and through other crates if doing this in a library), and may require additional trait implementations for the wrapper that TypeB already implemented.

All of these solutions are suboptimal in some way, and inconvenient. In particular, all of them are much more difficult than actually writing the trait impl. All of them tend to take longer, as well, slowing down whatever goal depended on having the trait impl.
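For illustration, here is what the newtype workaround looks like; TraitA and TypeB are defined locally as stand-ins for the items that would really come from lib_a and lib_b:

```rust
// Stand-ins for items that would really live in lib_a and lib_b.
trait TraitA {
    fn describe(&self) -> String;
}
struct TypeB {
    value: u32,
}

// `impl TraitA for TypeB` would violate the orphan rule if both came from
// other crates, so we wrap TypeB in a local newtype and implement the
// trait for the wrapper instead. Every use site now has to wrap/unwrap,
// and any other trait TypeB implemented must be re-implemented here.
struct WrappedB(TypeB);

impl TraitA for WrappedB {
    fn describe(&self) -> String {
        format!("TypeB with value {}", self.0.value)
    }
}
```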

The next six months

We propose to

  • Experiment on nightly with alternate orphan rules
    • Idea 1. Try relaxing the orphan rule for binary crates, since this cannot create library incompatibilities in the ecosystem. Allow binary crates to implement third-party traits for third-party types, possibly requiring a marker on either the trait or type or both. See how well this works for users.
    • Idea 2. Try allowing library crates to provide third-party impls as long as no implementations actually conflict. Perhaps require marking traits and/or types that permit third-party impls, to ensure that crates can always implement traits for their own types.
  • Draft RFCs for features above, presuming experiments turn out well

The "shiny future" we are working towards

Long-term, we'll want a way to resolve conflicts between third-party trait impls.

We should support a "standalone derive" mechanism, to derive a trait for a type without attaching the derive to the type definition. We could save a simple form of type information about a type, and define a standalone deriving mechanism that consumes exclusively that information.

Given such a mechanism, we could then permit any crate to invoke the standalone derive mechanism for a trait and type, and allow identical derivations no matter where they appear in the dependency tree.

Design axioms

  • Rustaceans should be able to easily integrate a third-party trait with a third-party type without requiring the cooperation of third-party crate maintainers.

  • It should be possible to publish such integration as a new crate. For instance, it should be possible to publish an a_b crate integrating a with b. This makes it easier to scale the ecosystem and get adoption for new libraries.

  • Crate authors should have some control over whether their types have third-party traits implemented. This ensures that it isn't a breaking change to introduce first-party trait implementations.

Ownership and team asks

Owner: Help wanted

| Task | Owner(s) or team(s) | Notes |
|------|---------------------|-------|
| Ownership and implementation | Help wanted | |
| RFC authoring | Help wanted | |
| Design consultation/iteration | Josh Triplett | |
| Design meeting | lang and types teams | Up to 1 meeting, if needed |

Frequently asked questions

Won't this create incompatibilities between libraries that implement the same trait for the same type?

Yes! The orphan rule is a tradeoff. It was established to avert one source of potential incompatibility between library crates, in order to help the ecosystem grow, scale, and avoid conflicts. However, the presence of the orphan rule creates a different set of scaling issues and conflicts. This project goal proposes to adjust the balance, attempting to achieve some of the benefits of both.

Why was this goal not approved for 2024H2?

Primarily for capacity reasons:

  • lcnr commented that there was no capacity on the types team for reviewing.
  • tmandry commented that the goal as written was not necessarily focused on the right constraints (text quoted below).

It strikes me as quite open ended and not obviously focused on the right constraints. (cc Josh Triplett as mentor)

For example, we could choose to relax the orphan rule only within a restricted set of co-versioned crates that we treat as "one big crate" for coherence purposes. This would not meet the axioms listed in the goal, but I believe it would still improve things for a significant set of users.

If we instead go with visibility restrictions on impls, that might work and solve a larger subset, but I think the design will have to be guided by someone close to the implementation to be viable.

I would love to have a design meeting if a viable looking design emerges, but I want to make sure this feedback is taken into account before someone spends a lot of time on it.

These points can be considered and addressed at a later time.

Faster build experience

Metadata
Point of contactJonathan Kelley
Teamslang, compiler, cargo
StatusNot accepted

Summary

Improvements to make iterative builds faster.

Motivation

For 2024H2, we propose to create better caching to speed up build times, particularly in iterative, local development. Build time affects all Rust users, but it can be a particular blocker for Rust users in "higher-level" domains like app/game/web development, data science, and scientific computing. These developers are often coming from interpreted languages like Python and are accustomed to making small, quick changes and seeing immediate feedback. In those domains, Rust has picked up momentum as a language for building underlying frameworks and libraries thanks to its lower-level nature. The motivation of this project goal is to make Rust a better choice for higher level programming subfields by improving the build experience (see also the partner goal related to language papercuts).

The status quo

Rust has recently seen tremendous adoption in a number of high-profile projects. These include but are not limited to: Firecracker, Pingora, Zed, Datafusion, Candle, Gecko, Turbopack, React Compiler, Deno, Tauri, InfluxDB, SWC, Ruff, Polars, SurrealDB, NPM and more. These projects tend to power a particular field of development: SWC, React, and Turbopack powering web development, Candle powering AI/ML, Ruff powering Python, InfluxDB and Datafusion powering data science etc. Projects in this space devote significant time to improving build times and the iterative experience, often requiring significant low-level hackery. See e.g. Bevy's page on fast compiles. Seeing the importance of compilation time, other languages like Zig and Go have made fast compilation times a top priority, and developers often cite build time as one of their favorite things about the experience of using those languages. We don't expect Rust to match the compilation experience of Go or Zig -- at least not yet! -- but some targeted improvements could make a big difference.

The next six months

The key areas we've identified as avenues to speed up iterative development include:

  • Speeding up or caching proc macro expansion
  • A per-user cache for compiled artifacts
  • A remote cache for compiled artifacts integrated into Cargo itself

There are other longer term projects that would be interesting to pursue but don't necessarily fit in the 2024 goals:

  • Partial compilation of invalid Rust programs that might not pass "cargo check"
  • Hotreloading for Rust programs
  • A JIT backend for Rust programs
  • An incremental linker to speed up test/example/benchmark compilation for workspaces

Procedural macro expansion caching or speedup

Today, the Rust compiler does not necessarily cache the tokens from procedural macro expansion. On every cargo check, and cargo build, Rust will run procedural macros to expand code for the compiler. The vast majority of procedural macros in Rust are idempotent: their output tokens are simply a deterministic function of their input tokens. If we assumed a procedural macro was free of side-effects, then we would only need to re-run procedural macros when the input tokens change. This has been shown in prototypes to drastically improve incremental compile times (30% speedup), especially for codebases that employ lots of derives (Debug, Clone, PartialEq, Hash, serde::Serialize).

A solution here could either be manual or automatic: macro authors could opt-in to caching or the compiler could automatically cache macros it knows are side-effect free.
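As a concrete illustration, each derive below runs a procedural macro whose output tokens are a deterministic function of the struct definition's tokens, so re-expanding them on every cargo check when the struct has not changed is exactly the wasted work a cache could skip:

```rust
// Every one of these derives expands deterministically from the tokens of
// the struct definition; nothing about the expansion depends on the
// environment, so the expanded tokens would be safely cacheable.
#[derive(Debug, Clone, PartialEq, Hash)]
struct Record {
    id: u64,
    name: String,
}
```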

Faster fresh builds and higher default optimization levels

A "higher level Rust" would be a Rust where a programmer would be able to start a new project, add several large dependencies, and get to work quickly without waiting minutes for a fresh compile. A web developer would be able to jump into a Tokio/Axum heavy project, a game developer into a Bevy/WGPU heavy project, or a data scientist into a Polars project and start working without incurring a 2-5 minute penalty. In today's world, an incoming developer interested in using Rust for their next project immediately runs into a compile wall. In reality, Rust's incremental compile times are rather good, but Rust's perception is invariably shaped by the "new to Rust" experience, which almost always begins with a long upfront compile time.

Cargo's current compilation model involves downloading and compiling dependencies on a per-project basis. Workspaces allow you to share a set of dependency compilation artifacts across several related crates at once, deduplicating compilation time and reducing disk space usage.

A "higher level Rust" might employ some form of caching - either per-user, per-machine, per-organization, per-library, or otherwise - such that fresh builds are just as fast as incremental builds. If the caching was sufficiently capable, it could even cache dependency artifacts at higher optimization levels. This is particularly important for game development, data science, and procedural macros where debug builds of dependencies run significantly slower than their release variant. Projects like Bevy and WGPU explicitly guide developers to manually increase the optimization level of dependencies since the default is unusably slow for game and graphics development.
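The manual workaround that projects like Bevy recommend today is a profile override in Cargo.toml: keep the workspace's own crates cheap to compile, but build dependencies (compiled once, then reused) with full optimizations. A typical configuration along those lines:

```toml
# Keep our own code on a fast-to-compile setting...
[profile.dev]
opt-level = 1

# ...but compile all dependencies with full optimizations, since they are
# built once and then reused across incremental builds.
[profile.dev.package."*"]
opt-level = 3
```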

Generally, a "high level Rust" would be fast-to-compile and maximally performant by default. The tweaks here do not require language changes and are generally a question of engineering effort rather than design consensus.

The "shiny future" we are working towards

A "high level Rust" would be a Rust that has a strong focus on iteration speed. Developers would benefit from Rust's performance, safety, and reliability guarantees without the current status quo of long compile times, verbose code, and program architecture limitations.

A "high level" Rust would:

  • Compile quickly, even for fresh builds
  • Be terse in the common case
  • Produce performant programs even in debug mode
  • Provide language shortcuts to get to running code faster

In our "shiny future," an aspiring genomics researcher would:

  • be able to quickly jump into a new project
  • add powerful dependencies with little compile-time cost
  • use various procedural macros with little compile-time cost
  • cleanly migrate their existing program architecture to Rust with few lifetime issues
  • employ various shortcuts like unwrap to get to running code quicker

Design axioms

  • Preference for minimally invasive changes that have the greatest potential benefit
  • No or less syntax is preferable to more syntax for the same goal
  • Prototype code should receive similar affordances as production code
  • Attention to the end-to-end experience of a Rust developer
  • Willingness to make appropriate tradeoffs in favor of implementation speed and intuitiveness

Ownership and team asks

The work here is proposed by Jonathan Kelley on behalf of Dioxus Labs. We have funding for 1-2 engineers depending on the scope of work. Dioxus Labs is willing to take ownership and commit funding to solve these problems.

| Task | Owner(s) or team(s) | Notes |
|------|---------------------|-------|
| proc macro expansion caching | jkelleyrtp + tbd | |
| global dependency cache | jkelleyrtp + tbd | |
  • The Team badge indicates a requirement where Team support is needed.

Support needed from the project

  • We are happy to author RFCs and/or work with other experienced RFC authors.
  • We are happy to host design meetings, facilitate work streams, logistics, and any other administration required to execute. Some subgoals proposed might be contentious or take longer than this goals period, and we're committed to timelines beyond six months.
  • We are happy to author code or fund the work for an experienced Rust contributor to do the implementation. For the language goals, we expect more design work than actual implementation; for the cargo-related goals, we expect more engineering than design. We are also happy to back any existing efforts, as there is ongoing work in cargo itself to add various types of caching.
  • We would be excited to write blog posts about this effort. This goals program is a great avenue for us to get more corporate support and see more Rust adoption for higher-level paradigms. Having a blog post talking about this work would be a significant step in changing the perception of Rust for use in high-level codebases.



Reduce clones and unwraps, support partial borrows

Metadata
Point of contactJonathan Kelley
Teamslang
StatusNot accepted

Motivation

For 2024H2, we propose to continue with the ergonomics initiative, targeting several of the biggest friction points in everyday Rust development. These issues affect all Rust users, but the impact and severity vary dramatically. Many experienced users have learned the workarounds and consider them papercuts, but for newer users, or in domains traditionally considered "high-level" (e.g., app/game/web development, data science, scientific computing), these kinds of issues can make Rust a non-starter. In those domains, Rust has picked up momentum as a language for building underlying frameworks and libraries thanks to its lower-level nature. However, thanks in large part to these kinds of smaller, papercut issues, it is not a great choice for consumption of those libraries - many projects instead choose to expose bindings for languages like Python and JavaScript. The motivation of this project goal is to make Rust a better choice for higher-level programming subfields by identifying and remedying language papercuts with minimally invasive language changes. The same issues arise in other Rust domains: for example, Rust is a great choice for network services where performance is a top consideration, but perhaps not a good choice for "everyday" request-reply services, thanks in no small part to papercuts and small-time friction (as well as other gaps, like the need for more libraries, which are being addressed in the async goal).

The status quo

Rust has recently seen tremendous adoption in a number of high-profile projects. These include but are not limited to: Firecracker, Pingora, Zed, Datafusion, Candle, Gecko, Turbopack, React Compiler, Deno, Tauri, InfluxDB, SWC, Ruff, Polars, SurrealDB, NPM and more. These projects tend to power a particular field of development: SWC, React, and Turbopack powering web development, Candle powering AI/ML, Ruff powering Python, InfluxDB and Datafusion powering data science etc.

These projects tend to focus on accelerating development in higher level languages. In theory, Rust itself would be an ideal choice for development in the respective spaces. A Rust webserver can be faster and more reliable than a JavaScript webserver. However, Rust's perceived difficulty, verbosity, compile times, and iteration velocity limit its adoption in these spaces. Various language constructs nudge developers toward a particular program structure that might be non-obvious at the outset, resulting in slower development times. Other language constructs influence the final architecture of a Rust program, making it harder for developers to migrate their mental model as they transition to Rust. Still other language limitations lead to unnecessarily verbose or noisy code. While Rust is not necessarily a bad choice for any of these fields, the analogous frameworks (Axum, Bevy, Dioxus, Polars, etc.) are rather nascent and frequently butt up against language limitations.

If we could make Rust a better choice for "higher level" programming - apps, games, UIs, webservers, datascience, high-performance-computing, scripting - then Rust would see much greater adoption outside its current bubble. This would result in more corporate interest, excited contributors, and generally positive growth for the language. With more "higher level" developers using Rust, we might see an uptick in adoption by startups, research-and-development teams, and the physical sciences which could lead to more general innovation.

Generally, we believe this boils down to two areas of focus:

  • Make Rust programs faster to write (this goal)
  • Shorten the iteration cycle of a Rust program (covered in the goal on faster iterative builds)

A fictional scenario: Alex

Let's take the case of "Alex" using a fictional library "Genomix."

Alex is a genomics researcher studying ligand receptor interaction to improve drug therapy for cancer. They work with very large datasets and need to design new algorithms to process genomics data and simulate drug interactions. Alex recently heard that Rust has a great genomics library (Genomix) and decides to try out Rust for their next project. Their goal seems simple: write a program that fetches data from their research lab's S3 bucket, downloads it, cleans it, processes, and then displays it.

Alex creates a new project and starts adding various libraries. To start, they add Polars and Genomix. They also realize they want to wrap their research code in a web frontend and allow remote data, so they add Tokio, Reqwest, Axum, and Dioxus. Before writing any real code, they hit build, and immediately notice the long compilation time to build out these dependencies. They're still excited for Rust, so they get a coffee and wait.

They start banging out code. They are getting a lot of surprising compilation errors around potential failures. They don't really care much about error handling at the moment; some googling reveals that a lot of code just calls unwrap in this scenario, so they start adding that in, but the code is looking kind of ugly and inelegant.

They are also getting compilation errors. After some time they come to understand how the borrow checker works. Many of the errors can be resolved by calling clone, so they are doing that a lot. They even find a few bugs, which they like. But they eventually get down to some core problems that they just can't see how to fix, and where it feels like the compiler is just being obstinate. For example, they'd like to extract a method like fn push_log(&mut self, item: T) but they can't, because they are iterating over data in self.input_queue and the compiler gives them errors, even though push_log doesn't touch input_queue. "Do I just have to copy and paste my code everywhere?", they wonder. Similarly, when they use closures that spawn threads, the new thread seems to take ownership of the value; they eventually find themselves writing code like let _data = data.clone() and using that from inside the closure. Irritating.

Eventually they do get the system to work, but it takes them a lot longer than they feel it should, and the code doesn't look nearly as clean as they hoped. They are seriously wondering if Rust is as good as it is made out to be, and nervous about having interns or other newer developers try to use it. "Rust seems to make sense for really serious projects, but for the kind of things I do, it's just so much slower," they think.

Key points:

  • Upfront compile times would be on the order of minutes
  • Adding new dependencies would also incur a significant compile-time cost
  • Iterating on the program would take several seconds per build
  • Adding a web UI would be arduous, with copious calls to .clone() to shuffle state around
  • Lots of explicit unwraps would pollute the codebase
  • Refactoring to a collection of structs might take much longer than anticipated

A real world scenario, lightly fictionalized

Major cloud developer is "all in" on Rust. As they build out code, though, they notice some problems leading to copying-and-pasting or awkward code throughout their codebase. Spawning threads and tasks tends to involve a large number of boilerplate feeling "clone" calls to copy out specific handles from data structures -- it's tough to get rid of them, even with macros. There are a few Rust experts at the company, and they're in high demand helping users resolve seemingly simple problems -- many of them have known workaround patterns, but those patterns are non-obvious, and sometimes rather involved. For example, sometimes they have to make "shadow structs" that have all the same fields, but just contain different kinds of references, to avoid conflicting borrows. For the highest impact systems, Rust remains popular, but for a lot of stuff "on the edge", developers shy away from it. "I'd like to use Rust there," they say, "since it would help me find bugs and get higher performance, but it's just too annoying. It's not worth it."

The next six months

For 2024H2 we have identified two key changes that would make Rust significantly easier to write across a wide variety of domains:

  • Reducing the frequency of an explicit ".clone()" for cheaply cloneable items
  • Partial borrows for structs (especially private &mut self methods that only access a few fields)

Reducing .clone() frequency

Across web, game, UI, app, and even systems development, it's common to share state and handles across scopes. These come in the form of channels, queues, signals, and immutable state, typically wrapped in Arc/Rc. In Rust, to use these items across scopes - as with tokio::spawn or 'static closures - a programmer must explicitly call .clone(). This is frequently accompanied by a rename of the item:

#![allow(unused)]
fn main() {
let state = Arc::new(some_state);

// Each task or callback takes ownership, so a fresh clone is made
// (and renamed) before every `move`.
let _state = state.clone();
tokio::spawn(async move { /* code using _state */ });

let _state = state.clone();
tokio::spawn(async move { /* code using _state */ });

let _state = state.clone();
let callback = Callback::new(move |_| { /* code using _state */ });
}

This can become both noisy (clone calls pollute a codebase) and confusing (what does .clone() imply for this item?). Calling .clone() could mean an expensive allocation or simply a reference-count increment. In many cases it's not possible to understand the behavior without viewing the Clone implementation directly.

A higher level Rust would provide a mechanism to cut down on these clones entirely, making code terser and potentially clearer about intent:

#![allow(unused)]
fn main() {
let state = Arc::new(some_state);

tokio::spawn(async move { /*code*/ });

tokio::spawn(async move { /*code*/ });

let callback = Callback::new(move |_| { /*code*/ });
}

While we don't necessarily propose any one solution to this problem, we believe Rust can be tweaked in a way that makes these explicit calls to .clone() disappear without significant changes to the language.

Partial borrows for structs

Another complication programmers run into is when designing the architecture of their Rust programs with structs. A programmer might start with code that looks like this:

#![allow(unused)]
fn main() {
let name = "Foo ";
let mut children = vec!["Bar".to_string()];

children.push(name.to_string())
}

And then decide to abstract it into a more traditional struct-like approach:

#![allow(unused)]
fn main() {
struct Baz {
    name: String,
    children: Vec<String>
}

impl Baz {
    pub fn push_name(&mut self, new: String) {
        let name = self.name();
        self.push(&new);
        println!("{name} pushed item {new}");
    }

    fn push(&mut self, item: &str) {
        self.children.push(item.to_string());
    }

    fn name(&self) -> &str {
        self.name.as_str()
    }
}
}

While this code is similar to the original snippet, it no longer compiles. Because self.name() borrows all of self, we can't then call self.push (which needs &mut self) without a borrow conflict. Semantically, however, we haven't violated the spirit of the borrow checker: push and name read and write different fields of the struct.

Interestingly, Rust's disjoint capture mechanism for closures can perform the equivalent operation and compile:

#![allow(unused)]
fn main() {
struct Baz { name: String, children: Vec<String> }

let mut s = Baz { name: "Foo".to_string(), children: vec!["Bar".to_string()] };

// Each closure captures only the field it touches, so the borrows are disjoint.
let mut modify_something = || s.name = "modified".to_string();
let read_something = || s.children.last().unwrap().as_str();

let o2 = read_something();
modify_something();
println!("o: {:?}", o2);
}

This is a very frequent papercut for both beginner and experienced Rust programmers. A developer might design a valid abstraction for a particular problem, but the Rust compiler rejects it even though said design does obey the axioms of the borrow checker.

As part of the "higher level Rust" effort, we want to reduce the frequency of this papercut, making it easier for developers to model and iterate on their program architecture.

For example, a syntax-free approach to solving this problem might be simply turning on disjoint capture for private methods only. Alternatively, we could implement a syntax or attribute that allows developers to explicitly opt in to the partial borrow system. Again, we don't want to necessarily prescribe a solution here, but the best outcome would be a solution that reduces mental overhead with as little new syntax as possible.

The "shiny future" we are working towards

A "high level Rust" would be a Rust that has a strong focus on iteration speed. Developers would benefit from Rust's performance, safety, and reliability guarantees without the current status quo of long compile times, verbose code, and program architecture limitations.

A "high level" Rust would:

  • Compile quickly, even for fresh builds
  • Be terse in the common case
  • Produce performant programs even in debug mode
  • Provide language shortcuts to get to running code faster

In our "shiny future," an aspiring genomics researcher would:

  • be able to quickly jump into a new project
  • add powerful dependencies with little compile-time cost
  • use various procedural macros with little compile-time cost
  • cleanly migrate their existing program architecture to Rust with few lifetime issues
  • employ various shortcuts like unwrap to get to running code quicker

Revisiting Alex and Genomix

Let's revisit the scenario of "Alex" using a fictional library "Genomix."

Alex is a genomics researcher studying ligand receptor interaction to improve drug therapy for cancer. They work with very large datasets and need to design new algorithms to process genomics data and simulate drug interactions. Alex recently heard that Rust has a great genomics library (Genomix) and decides to try out Rust for their next project.

Alex creates a new project and starts adding various libraries. To start, they add Polars and Genomix. They also realize they want to wrap their research code in a web frontend and allow remote data, so they add Tokio, Reqwest, Axum, and Dioxus. They write a simple program that fetches data from their research lab's S3 bucket, downloads it, cleans it, processes, and then displays it.

For the first time, they type cargo run. The project builds in 10 seconds and their code starts running. The analysis churns for a bit and then Alex is greeted with a basic webpage visualizing their results. They start working on the visualization interface, adding interactivity with new callbacks and async routines. Thanks to hot-reloading, the webpage updates without fully recompiling and losing program state.

Once satisfied, Alex decides to refactor their messy program into different structs so that they can reuse the different pieces for other projects. They add basic improvements like multithreading and swap out the unwrap shortcuts for proper error handling.

Alex heard Rust was difficult to learn, but they're generally happy. Their Rust program is certainly faster than their previous Python work. They didn't need to learn JavaScript to wrap it in a web frontend. Cargo.toml is a nice bonus: they can share their work with the research lab without messing with Python installs and dependency management. They heard Rust had long compile times, but didn't run into that. Adding async and multithreading was easier than they thought; interacting with channels and queues was as easy as it is in Go.

Design axioms

  • Preference for minimally invasive changes that have the greatest potential benefit
  • No or less syntax is preferable to more syntax for the same goal
  • Prototype code should receive similar affordances as production code
  • Attention to the end-to-end experience of a Rust developer
  • Willingness to make appropriate tradeoffs in favor of implementation speed and intuitiveness

Ownership and team asks

The work here is proposed by Jonathan Kelley on behalf of Dioxus Labs. We have funding for 1-2 engineers depending on the scope of work. Dioxus Labs is willing to take ownership and commit funding to solve these problems.

Task                        Owner(s) or team(s)       Notes
.clone() problem            Jonathan Kelley + tbd
partial borrows             Jonathan Kelley + tbd
.unwrap() problem           Jonathan Kelley + tbd
Named/Optional arguments    Jonathan Kelley + tbd
  • The Team badge indicates a requirement where Team support is needed.

Support needed from the project

  • We are happy to author RFCs and/or work with other experienced RFC authors.
  • We are happy to host design meetings, facilitate work streams, logistics, and any other administration required to execute. Some subgoals proposed might be contentious or take longer than this goals period, and we're committed to timelines beyond six months.
  • We are happy to author code or fund the work for an experienced Rustlang contributor to do the implementation. For the language goals, we expect more design required than actual implementation. For cargo-related goals, we expect more engineering required than design. We are also happy to back any existing efforts, as there is ongoing work in cargo itself to add various types of caching.
  • We would be excited to write blog posts about this effort. This goals program is a great avenue for us to get more corporate support and see more Rust adoption for higher-level paradigms. Having a blog post talking about this work would be a significant step in changing the perception of Rust for use in high-level codebases.

Outputs and milestones

Outputs

Final outputs that will be produced

Milestones

Milestones you will reach along the way

Frequently asked questions

After these two items, are we done? What comes next?

We will have made significant progress, but we won't be done. We have identified two particular items that come up frequently in the "high level app dev" domain but which will require more discussion to reach alignment. These could be candidates for future goals.

Faster Unwrap Syntax (Contentious)

Another common criticism of Rust in prototype-heavy programming subfields is its pervasive verbosity - especially when performing rather simple or innocuous transformations. Admittedly, even as experienced Rust programmers, we find ourselves bogged down by the noisiness of various language constructs. In our opinion, the single biggest polluter of prototype Rust codebases is the need to call .unwrap() everywhere. While yes, many operations can fail and it's a good idea to handle errors, we've generally found that .unwrap() drastically hinders development in higher level paradigms.

Whether it be simple operations like getting the last item from a vec:

#![allow(unused)]
fn main() {
let items = vec![1,2,3,4];
let last = items.last().unwrap();
}

Or slightly more involved operations like fetching from a server:

#![allow(unused)]
fn main() {
let res = Client::builder()
	.build()
	.unwrap()
	.get("https://dog.ceo/api/breeds/list/all")
	.header("Content-Type", "text/plain".parse().unwrap())
	.send()
	.await
	.unwrap()
	.json::<DogApi>()
	.await
	.unwrap();
}

It's clear that .unwrap() plays a large role in the early steps of every Rust codebase.

A "higher level Rust" would be a Rust that enables programmers to quickly prototype their solution, iterating on architecture and functionality before finally deciding to "productionize" their code. In today's Rust this is equivalent to replacing .unwrap() with proper error handling (or .expect()), adding documentation, and adding tests.

Programmers generally understand the difference between prototype code and production code - they don't necessarily need to be so strongly reminded that their code is prototype code by forcing a verbose .unwrap() at every corner. In many ways, Rust today feels hostile to prototype code. We believe that a "higher level Rust" should be welcoming to prototype code. The easier it is for developers to write prototype code, the more code will likely convert to production code. Prototype code by design is the first step to production code.

When this topic comes up, folks will invariably bring up Result plus ? as a solution. In practice, we've not found it to be a suitable band-aid. Adopting question-mark syntax requires you to change the signatures of your code at every turn. While prototyping you can no longer think in terms of A -> B; every operation becomes potentially fallible. The final production-ready iteration of your code will likely not be fallible in every method, forcing yet another round of refactoring. Question-mark syntax also tends to bubble errors up without line information, making it difficult to locate where the error occurred in the first place. And finally, question-mark syntax only works on Option<T> inside a function that itself returns an Option (or a Result, after a conversion such as ok_or), so in most code .unwrap() or pattern matching remain the only options.

#![allow(unused)]
fn main() {
let items = vec![1,2,3,4];
let last = items.last().unwrap(); // <--- `?` can't be used here, since main() doesn't return Option
}
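To be precise about that last point: ? is legal on Option, but only when the enclosing function itself returns Option, and it is that forced signature change, not the operator, that makes it unusable for quick prototyping. A minimal sketch:

```rust
// `?` on Option compiles only because `last_of` itself returns Option;
// the same line inside `main` (which returns `()`) would not compile.
fn last_of(items: &[i32]) -> Option<i32> {
    let last = items.last()?;
    Some(*last)
}

fn main() {
    assert_eq!(last_of(&[1, 2, 3, 4]), Some(4));
    assert_eq!(last_of(&[]), None);
}
```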

We don't prescribe any particular solution, but ideally Rust would provide a similar shortcut for .unwrap() as it does for return Err(e). Other languages tend to use a ! operator for this case:

#![allow(unused)]
fn main() {
let items = vec![1,2,3,4];
let last = items.last()!;

let res = Client::builder()
	.build()!
	.get("https://dog.ceo/api/breeds/list/all")
	.header("Content-Type", "text/plain".parse()!)
	.send()
	.await!
	.json::<DogApi>()
	.await!;
}

A "higher level Rust" would provide similar affordances to prototype code that it provides to production code. All production code was once prototype code. Today's Rust makes it harder to write prototype code than it does production code. This language-level opinion is seemingly unique to Rust and arguably a major factor in why Rust has seen slower adoption in higher level programming paradigms.

Named and Optional Arguments or Partial Defaults (Contentious)

Beyond .clone() and .unwrap(), the next biggest polluter for "high level" Rust code tends to be the lack of a way to properly supply optional arguments to various operations. This has received lots of discussion already and we don't want to belabor the point any more than it already has been.

The main thing we want to add here is that we believe the builder pattern is not a great solution for this problem, especially during prototyping and in paradigms where iteration time is important.

#![allow(unused)]
fn main() {
struct PlotCfg {
   title: Option<String>,
   height: Option<u32>,
   width: Option<u32>,
   dpi: Option<u32>,
   style: Option<Style>
}

impl PlotCfg {
    pub fn title(&mut self, title: Option<String>) -> &mut Self {
        self.title = title;
        self
    }
    pub fn height(&mut self, height: Option<u32>) -> &mut Self {
        self.height = height;
        self
    }
    pub fn width(&mut self, width: Option<u32>) -> &mut Self {
        self.width = width;
        self
    }
    pub fn dpi(&mut self, dpi: Option<u32>) -> &mut Self {
        self.dpi = dpi;
        self
    }
    pub fn style(&mut self, style: Option<Style>) -> &mut Self {
        self.style = style;
        self
    }
    pub fn build(&self) -> Plot {
	    todo!()
    }
}
}

A solution to this problem could come in any number of forms:

  • Partial Defaults to structs
  • Named and optional function arguments
  • Anonymous structs

We don't want to specify any particular solution:

  • Partial defaults simply feel like an extension of the language
  • Named function arguments would be a very welcome change for many high-level interfaces
  • Anonymous structs would be useful outside of replacing builders

Generally though, we feel like this is another core problem that needs to be solved for Rust to see more traction in higher-level programming paradigms.

Seamless C support

Metadata
Point of contact    Josh Triplett
Teams               Names of teams being asked to commit to the goal
Status              Not accepted

Summary

Using C from Rust should be as easy as using C from C++

Motivation

Using C from Rust should be as easy as using C from C++: completely seamless, as though it's just another module of code. You should be able to drop Rust code into a C project and start compiling and using it in minutes.

The status quo

Today, people who want to use C and Rust together in a project have to put substantial work into infrastructure or manual bindings. Whether by creating build system infrastructure to invoke bindgen/cbindgen (and requiring the installation of those tools), or manually writing C bindings in Rust, projects cannot simply drop Rust code into a C program or C code into a Rust program. This creates a high bar for adopting or experimenting with Rust, and makes it more difficult to provide Rust bindings for a C library.

By contrast, dropping C++ code into a C project or C code into a C++ project is trivial. The same compiler understands both C and C++, and allows compiling both together or separately. The developer does not need to duplicate declarations for the two languages, and can freely call between functions in both languages.

C and C++ are still not the same language. They have different idioms and common types, and a C interface may not be the most ergonomic to use from C++. Using C++ from C involves treating the C as C++, such that it no longer works with a C compiler that has no C++ support. But nonetheless, C++ and C integrate extremely well, and C++ is currently the easiest language to integrate into an established C project.

This is the level of integration we should aspire to for Rust and C.

The next six months

To provide seamless integration between Rust and C, we need a single compiler to understand both Rust and C. Thus, the first step will be to integrate a C preprocessor and compiler frontend into the Rust compiler. For at least the initial experimentation, we could integrate components from LLVM, taking inspiration from zig cc. (In the future, we can consider other alternatives, including a native Rust implementation. We could also consider components from c2rust or similar.)

We can either generate MIR directly from C (which would be experimental and incomplete but integrate better with the compiler), or bypass MIR and generate LLVM IR directly (which would be simpler but less well integrated).

This first step would provide substantial benefits already: a C compiler that's always available on any system with Rust installed, that generates code for any supported Rust target, and that always supports cross-language optimization.

We can further improve support for calling C from Rust. We can support "importing" C header files, to permit using this support to call external libraries, and to support inline functions.

The "shiny future" we are working towards

Once C support is integrated, we can generate type information for C functions as if they were unsafe Rust functions, and then support treating the C code as a Rust module, adding the ability to import and call C functions from Rust. This would not necessarily even require header files, making it even simpler to use C from Rust. The initial support can be incomplete, supporting the subset of C that has reasonable semantics in Rust.

We will also want to add C features that are missing in Rust, to allow Rust to call any supported C code.

Once we have a C compiler integrated into Rust, we can incrementally add C extensions to support using Rust from C. For instance:

  • Support importing Rust modules and calling extern "C" functions from them, without requiring a C header file.
  • Support using :: for scoping names.
  • Support simple Rust types (e.g. Option and Result).
  • Support calling Rust methods on objects.
  • Allow annotating C functions with Rust-enhanced type signatures, such as marking them as safe, using Rust references for pointer parameters, or providing simple lifetime information.

We can support mixing Rust and C in a source file, to simplify incremental porting even further.

To provide simpler integration into C build systems, we can accept a C-compiler-compatible command line (CFLAGS), and apply that to the C code we process.

We can also provide a CLI entry point that's sufficiently command-line compatible to allow using it as CC in a C project.

Design axioms

  • C code should feel like just another Rust module. Integrating C code into a Rust project, or Rust code into a C project, should be trivial; it should be just as easy as integrating C with C++.

  • This is not primarily about providing safe bindings. This project will primarily make it much easier to access C bindings as unsafe interfaces. There will still be value in wrapping these unsafe C interfaces with safer Rust interfaces.

  • Calling C from Rust should not require writing duplicate information in Rust that's already present in a C header or source file.

  • Integrating C with Rust should not require third-party tools.

  • Compiling C code should not require substantially changing the information normally passed to a C compiler (e.g. compiler arguments).

Ownership and team asks

Owner: TODO

Support needed from the project

  • Lang team:
    • Design meetings to discuss design changes
    • RFC reviews
  • Compiler team:
    • RFC review

Outputs and milestones

Outputs

The initial output will be a pair of RFCs: one for an experimental integration of a C compiler into rustc, and the other for minimal language features to take advantage of that.

Milestones

  • Compiler RFC: Integrated C compiler
  • Lang RFC: Rust language support for seamless C integration

Frequently asked questions

General notes

This is a place for the goal slate owner to track notes and ideas for later follow-up.

Candidate goals:

  • Track feature stabilization
  • Finer-grained infra permissions
  • Host Rust contributor event

Areas where Rust is best suited and how it grows

Rust offers particular advantages in the following areas:

  • Latency sensitive or high scale network services, which benefit from Rust’s lack of garbage collection pauses (in comparison to GC’d languages).
  • Low-level systems applications, like kernels and embedded development, benefit from Rust’s memory safety guarantees and high-level productivity (in comparison to C or C++).
  • Developer tooling has proven to be an unexpected growth area, with tools ranging from IDEs to build systems being written in Rust.

Who is using Rust

Building on the characters from the async vision doc, we can define at least three groups of Rust users:

  • Alan [1], an experienced developer in a Garbage Collected language, like Java, Swift, or Python.
    • Alan likes the idea of having his code run faster and use less memory without having to deal with memory safety bugs.
    • Alan's biggest (pleasant) surprise is that Rust's type system prevents not only memory safety bugs but all kinds of other bugs, like null pointer exceptions or forgetting to close a file handle.
    • Alan's biggest frustration with Rust is that it sometimes makes him deal with low-level minutia -- he sometimes finds himself just randomly inserting a * or clone to see if it will build -- or complex errors dealing with features he doesn't know yet.
  • Grace [2], a low-level, systems programming expert.
    • Grace is drawn to Rust by the promise of having memory safety while still being able to work "close to the hardware".
    • Her biggest surprise is cargo and the way that it makes reusing code trivial. She doesn't miss ./configure && make at all.
    • Her biggest frustration is
  • Barbara [3]
[1] In honor of Alan Kay, inventor of Smalltalk, which gave rise in turn to Java and most of the object-oriented languages we know today.

[2] In honor of Grace Hopper, a computer scientist, mathematician, and rear admiral in the US Navy; inventor of COBOL.

[3] In honor of Barbara Liskov, a computer science professor at MIT who invented the [CLU](https://en.wikipedia.org/wiki/CLU_(programming_language)) programming language.

How Rust adoption grows

The typical pattern is that Rust adoption begins in a system where Rust offers particular advantage. For example, a company building network services may begin with a highly scaled service. In this setting, the need to learn Rust is justified by its advantage.

Once users are past the initial learning curve, they find that Rust helps them to move and iterate quickly. They spend slightly more time getting their program to compile, but they spend a lot less time debugging. Refactorings tend to work "the first time".

Over time, people wind up using Rust for far more programs than they initially expected. They come to appreciate Rust's focus on reliability, quality tooling, and attention to ergonomics. They find that while other languages may have helped them edit code faster, Rust gets them to production more quickly and reduces maintenance over time. And of course using fewer languages is its own advantage.

How Rust adoption stalls

Anecdotally, the most commonly cited reason to stop using Rust is a feeling that development is "too slow" or "too complex". There is no single cause for this.

  • Language complexity: Most users that get frustrated with Rust do not cite the borrow checker but rather the myriad workarounds needed to overcome various obstacles and inconsistencies. Often "idiomatic Rust" involves a number of crates to cover gaps in core functionality (e.g., anyhow as a better error type, or async_recursion to permit recursive async functions). Language complexity is a particular problem
  • Picking crates: Rust intentionally offers a lean standard library, preferring instead to support a rich set of crates. But when getting started users are often overwhelmed by the options available and unsure which one would be best to use. Making matters worse, Rust documentation often doesn't show examples making use of these crates in an effort to avoid picking favorites, making it harder for users to learn how to do things.
  • Build times and slow iteration: Being able to make a change and quickly see its effect makes learning and debugging effortless. Despite our best efforts, real-world Rust programs do still have bugs, and finding and resolving those can be frustratingly slow when every change requires waiting minutes and minutes for a build to complete.

Additional concerns faced by companies

For larger users, such as companies, there are additional concerns:

  • Uneven support for cross-language invocations: Most companies have large existing codebases in other languages. Rewriting those codebases from scratch is not an option. Sometimes it is possible to integrate at a microservice or process boundary, but many would like a way to rewrite individual modules in Rust, passing data structures easily back and forth. Rust's support for this kind of interop is uneven and often requires knowing the right crate to use for any given language.
  • Spotty ecosystem support, especially for older things: There are a number of amazing crates in the Rust ecosystem, but there are also a number of notable gaps, particularly for older technologies. Larger companies though often have to interact with legacy systems. Lacking quality libraries makes that harder.
  • Supply chain security: Leaning on the ecosystem also means increased concerns about supply chain security and business continuity. In short, crates maintained by a few volunteers rather than being officially supported by Rust are a risk.
  • Limited hiring pool: Hiring developers skilled in Rust remains a challenge. Companies have to be ready to onboard new developers and to help them learn Rust. Although there are many strong Rust books available, as well as a number of well regarded Rust training organizations, companies must still pick and choose between them to create a "how to learn Rust" workflow, and many do not have the extra time or skills to do that.

Status: Accepting goal proposals. We are in the process of assembling the goal slate.

Summary

This is a draft for the eventual RFC proposing the 2025H1 goals.

Motivation

The 2025H1 goal slate consists of 40 project goals, of which we have selected (TBD) as flagship goals. Flagship goals represent the goals expected to have the broadest overall impact.

How the goal process works

Project goals are proposed bottom-up by a point of contact, somebody who is willing to commit resources (time, money, leadership) to seeing the work get done. The owner identifies the problem they want to address and sketches the solution of how they want to do so. They also identify the support they will need from the Rust teams (typically things like review bandwidth or feedback on RFCs). Teams then read the goals and provide feedback. If the goal is approved, teams are committing to support the owner in their work.

Project goals can vary in scope from an internal refactoring that affects only one team to a larger cross-cutting initiative. No matter its scope, accepting a goal should never be interpreted as a promise that the team will make any future decision (e.g., accepting an RFC that has yet to be written). Rather, it is a promise that the team is aligned on the contents of the goal thus far (including the design axioms and other notes) and will prioritize giving feedback and support as needed.

Of the proposed goals, a small subset are selected by the roadmap owner as flagship goals. Flagship goals are chosen for their high impact (many Rust users will be impacted) and their shovel-ready nature (the org is well-aligned around a concrete plan). Flagship goals are the ones that will feature most prominently in our public messaging and which should be prioritized by Rust teams where needed.

Rust’s mission

Our goals are selected to further Rust's mission of empowering everyone to build reliable and efficient software. Rust targets programs that prioritize

  • reliability and robustness;
  • performance, memory usage, and resource consumption; and
  • long-term maintenance and extensibility.

We consider "any two out of the three" as the right heuristic for projects where Rust is a strong contender or possibly the best option.

Axioms for selecting goals

We believe that...

  • Rust must deliver on its promise of peak performance and high reliability. Rust’s maximum advantage is in applications that require peak performance or low-level systems capabilities. We must continue to innovate and support those areas above all.
  • Rust's goals require high productivity and ergonomics. Being attentive to ergonomics broadens Rust's impact by making it more appealing for projects that value reliability and maintenance but which don't have strict performance requirements.
  • Slow and steady wins the race. For this first round of goals, we want a small set that can be completed without undue stress. As the Rust open source org continues to grow, the set of goals can grow in size.

Guide-level explanation

Flagship goals

The flagship goals proposed for this roadmap are as follows:

(TBD)

Why these particular flagship goals?

(TBD--typically one paragraph per goal)

Project goals

The full slate of project goals is as follows. These goals all have identified owners who will drive the work forward as well as a viable work plan. The goals include asks from the listed Rust teams, which are cataloged in the reference-level explanation section below.

Invited goals. Some of the goals below are "invited goals", meaning that for the goal to happen someone needs to step up and serve as its owner. To find the invited goals, look for the Help wanted badge in the table below. Invited goals have reserved team capacity and a mentor, so if you are looking to help Rust progress, they are a great way to get involved.

Goal | Point of contact | Team
"Stabilizable" prototype for expanded const generics | Boxy | lang, types
Bring the Async Rust experience closer to parity with sync Rust | Tyler Mandry | compiler, lang, libs-api, types
Continue resolving cargo-semver-checks blockers for merging into cargo | Predrag Gruevski | cargo, rustdoc
Declarative (macro_rules!) macro improvements | Josh Triplett | lang, wg-macros
Evaluate approaches for seamless interop between C++ and Rust | Tyler Mandry | compiler, lang, libs-api
Experiment with ergonomic ref-counting | Santiago Pastorino | lang
Expose experimental LLVM features for GPU offloading | Manuel Drehwald | compiler, lang
Extend pubgrub to match cargo's dependency resolution | Jacob Finkelman | cargo
Externally Implementable Items | Mara Bos | compiler, lang
Field Projections | Benno Lossin | compiler, lang
Finish the libtest json output experiment | Ed Page | testing-devex
Implement Open API Namespace Support | Help Wanted | cargo, compiler
Implement restrictions, prepare for stabilization | Jacob Pratt | compiler, lang
Improve state machine codegen | Folkert de Vries | compiler, lang
Instrument the Rust standard library with safety contracts | Celina G. Val | compiler, libs
Integration of the FLS into the Rust Project | Joel Marcey | release, spec
Making compiletest more maintainable: reworking directive handling | Jieyou Xu | bootstrap, compiler, rustdoc
Metrics Initiative | Jane Lusby
Model coherence in a-mir-formality | Niko Matsakis | types
Next-generation trait solver | lcnr | types
Nightly support for ergonomic SIMD multiversioning | Luca Versari | lang
Null and enum-discriminant runtime checks in debug builds | Bastian Kersting | compiler, lang, opsem
Optimizing Clippy & linting | Alejandra González | clippy
Promoting Parallel Front End | Sparrow Li | compiler
Prototype a new set of Cargo "plumbing" commands | Help Wanted | cargo
Publish first version of StableMIR on crates.io | Celina G. Val | compiler, project-stable-mir
Research: How to achieve safety when linking separately compiled code | Mara Bos | compiler, lang
Run the 2025H1 project goal program | Niko Matsakis | leadership-council
Rust All-Hands 2025! | Mara Bos | leadership-council
Rust Specification Testing | Connor Horman | bootstrap, compiler, lang, spec
Rust Vision Document | Niko Matsakis | leadership-council
SVE and SME on AArch64 | David Wood | compiler, lang, types
Scalable Polonius support on nightly | Rémy Rakic | types
Secure quorum-based cryptographic verification and mirroring for crates.io | @walterhpearce | cargo, crates-io, infra, leadership-council
Stabilize public/private dependencies | Help Wanted | cargo, compiler
Stabilize tooling needed by Rust for Linux | Niko Matsakis | cargo, clippy, compiler, rustdoc
Unsafe Fields | Jack Wrenn | compiler, lang
Use annotate-snippets for rustc diagnostic output | Scott Schafer | compiler
build-std | David Wood | cargo
rustc-perf improvements | David Wood | compiler, infra

Reference-level explanation

The following table highlights the asks from each affected team. The "owner" in the column is the person expecting to do the design/implementation work that the team will be approving.

bootstrap team

Goal | Point of contact | Notes
Discussion and moral support
Making compiletest more maintainable: reworking directive handling | Jieyou Xu | including consultations for desired test behaviors and testing infra consumers
RFC decision
Rust Specification Testing | Connor Horman
Standard reviews
Making compiletest more maintainable: reworking directive handling | Jieyou Xu | Probably mostly bootstrap or whoever is more interested in reviewing [compiletest] changes

cargo team

clippy team

Goal | Point of contact | Notes
Stabilization decision
Clippy configuration | Niko Matsakis
Standard reviews
Optimizing Clippy & linting | Alejandra González

compiler team

Goal | Point of contact | Notes
Dedicated reviewer
Production use of annotate-snippets | Scott Schafer | Esteban Kuber will be the reviewer
Design meeting
Experimental Contract attributes | Celina G. Val
Evaluate approaches for seamless interop between C++ and Rust | Tyler Mandry | 2-3 meetings expected; all involve lang
Discussion and moral support
Improve state machine codegen | Folkert de Vries
Rust-for-Linux | Niko Matsakis
Stabilize public/private dependencies | Ed Page
Making compiletest more maintainable: reworking directive handling | Jieyou Xu | including consultations for desired test behaviors and testing infra consumers
Evaluate approaches for seamless interop between C++ and Rust | Tyler Mandry
Implement Open API Namespace Support | Ed Page
Promoting Parallel Front End | Sparrow Li
Publish first version of StableMIR on crates.io | Celina G. Val
SVE and SME on AArch64 | David Wood
Investigate SME support | David Wood
Policy decision
rustc-perf improvements | David Wood | Update performance regression policy
RFC decision
ABI-modifying compiler flags | Niko Matsakis | RFC #3716, currently in PFCP
Rust Specification Testing | Connor Horman
Stabilization decision
ABI-modifying compiler flags | Niko Matsakis | For each of the relevant compiler flags
Extract dependency information, configure no-std externally | Niko Matsakis
Standard reviews
Unsafe Fields | Jack Wrenn
Field Projections | Benno Lossin
Improve state machine codegen | Folkert de Vries
Experimental Contract attributes | Celina G. Val
ABI-modifying compiler flags | Niko Matsakis
Extract dependency information, configure no-std externally | Niko Matsakis
Return type notation | Tyler Mandry
Implementable trait aliases | Tyler Mandry
Making compiletest more maintainable: reworking directive handling | Jieyou Xu | Probably mostly bootstrap or whoever is more interested in reviewing [compiletest] changes
Implement restrictions, prepare for stabilization | Jacob Pratt
Research: How to achieve safety when linking separately compiled code | Mara Bos
Externally Implementable Items | Mara Bos
Null and enum-discriminant runtime checks in debug builds | Bastian Kersting
Standard reviews | Scott Schafer
Land nightly experiment for SVE types | David Wood
Extending type system to support scalable vectors | David Wood
Expose experimental LLVM features for GPU offloading | Manuel Drehwald

crates-io team

infra team

Goal | Point of contact | Notes
Deploy to production
rustc-perf improvements | David Wood | rustc-perf improvements, testing infrastructure
Quorum-based cryptographic infrastructure (RFC 3724) | @walterhpearce
Discussion and moral support
rustc-perf improvements | David Wood
RFC decision
Quorum-based cryptographic infrastructure (RFC 3724) | @walterhpearce
Standard reviews
rustc-perf improvements | David Wood

lang team

Goal | Point of contact | Notes
Design meeting
Unsafe Fields | Jack Wrenn
Field Projections | Benno Lossin
Experiment with ergonomic ref-counting | Santiago Pastorino
Safe pin projection | Tyler Mandry | Stretch goal
Trait for generators (sync) | Tyler Mandry | 2 meetings expected
Trait for async iteration | Tyler Mandry
Evaluate approaches for seamless interop between C++ and Rust | Tyler Mandry | 2-3 meetings expected; all involve lang
Null and enum-discriminant runtime checks in debug builds | Bastian Kersting
Design and iteration for macro fragment fields | Josh Triplett
Nightly support for ergonomic SIMD multiversioning | Luca Versari
Discussion and moral support
Unsafe Fields | Jack Wrenn | [Zulip]
"Stabilizable" prototype for expanded const generics | Boxy
Evaluate approaches for seamless interop between C++ and Rust | Tyler Mandry
Implement restrictions, prepare for stabilization | Jacob Pratt
Research: How to achieve safety when linking separately compiled code | Mara Bos
Null and enum-discriminant runtime checks in debug builds | Bastian Kersting
Design for macro metavariable constructs | Josh Triplett
SVE and SME on AArch64 | David Wood
Investigate SME support | David Wood
Lang-team experiment
Improve state machine codegen | Folkert de Vries
Safe pin projection | Tyler Mandry
Research: How to achieve safety when linking separately compiled code | Mara Bos | Niko Matsakis as liaison
Externally Implementable Items | Mara Bos | Already approved
Null and enum-discriminant runtime checks in debug builds | Bastian Kersting
Nightly support for ergonomic SIMD multiversioning | Luca Versari
Expose experimental LLVM features for GPU offloading | Manuel Drehwald | (approved)
Policy decision
Declarative (macro_rules!) macro improvements | Josh Triplett | Discussed with Eric Holk and Vincenzo Palazzo; lang would decide whether to delegate specific matters to wg-macros
Prioritized nominations
Implement restrictions, prepare for stabilization | Jacob Pratt | for unresolved questions, including syntax
macro_rules! attributes | Josh Triplett
macro_rules! derives | Josh Triplett
RFC decision
Unsafe Fields | Jack Wrenn
Field Projections | Benno Lossin
Experiment with ergonomic ref-counting | Santiago Pastorino
Return type notation | Tyler Mandry | Complete
Unsafe binders | Tyler Mandry | Stretch goal
Implementable trait aliases | Tyler Mandry
Pin reborrowing | Tyler Mandry
Trait for generators (sync) | Tyler Mandry
Rust Specification Testing | Connor Horman
macro_rules! attributes | Josh Triplett
macro_rules! derives | Josh Triplett
Design and iteration for macro fragment fields | Josh Triplett
Nightly support for ergonomic SIMD multiversioning | Luca Versari
Land nightly experiment for SVE types | David Wood
Extending type system to support scalable vectors | David Wood
Stabilization decision
Return type notation | Tyler Mandry
Implement restrictions, prepare for stabilization | Jacob Pratt
macro_rules! attributes | Josh Triplett
macro_rules! derives | Josh Triplett
Design and iteration for macro fragment fields | Josh Triplett

leadership-council team

Goal | Point of contact | Notes
Allocate funds
Rust All-Hands 2025! | Mara Bos | Complete for event
Miscellaneous
Rust All-Hands 2025! | Mara Bos | Prepare one or two plenary sessions
Team swag | Mara Bos | Decide on team swag; suggestions very welcome!
Rust Vision Document | Niko Matsakis | Create supporting subteam + Zulip stream
Quorum-based cryptographic infrastructure (RFC 3724) | @walterhpearce | Select root quorum
Org decision
Run the 2025H1 project goal program | Niko Matsakis | approve creation of new team

libs team

Goal | Point of contact | Notes
Discussion and moral support
Instrument the Rust standard library with safety contracts | Celina G. Val
Standard reviews
Standard Library Contracts | Celina G. Val

libs-api team

opsem team

project-stable-mir team

Goal | Point of contact | Notes
Standard reviews
Publish first version of StableMIR on crates.io | Celina G. Val

release team

Goal | Point of contact | Notes
Discussion and moral support
Integrate FLS into release process | Joel Marcey | February 2025
Standard reviews
Integrate FLS into release process | Joel Marcey

rustdoc team

spec team

Goal | Point of contact | Notes
Miscellaneous
Integration of the FLS into the Rust Project | Joel Marcey | Take ownership of the FLS (prior to, or shortly into January 2025).
RFC decision
Rust Specification Testing | Connor Horman
Integrate FLS into T-spec processes | Joel Marcey | End of March 2025

testing-devex team

Goal | Point of contact | Notes
Discussion and moral support
Finish the libtest json output experiment | Ed Page

types team

wg-macros team

Goal | Point of contact | Notes
Discussion and moral support
Design for macro metavariable constructs | Josh Triplett
Policy decision
Declarative (macro_rules!) macro improvements | Josh Triplett | Discussed with Eric Holk and Vincenzo Palazzo; lang would decide whether to delegate specific matters to wg-macros

Definitions

Definitions for terms used above:

  • Author RFC and Implementation means actually writing the code, document, whatever.
  • Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
  • RFC decision means reviewing an RFC and deciding whether to accept it.
  • Org decision means reaching a decision on an organizational or policy matter.
  • Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
  • Stabilization decision means reviewing a stabilization report and deciding whether to stabilize.
  • Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
  • Other kinds of decisions:
    • Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
    • Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
    • Library API Change Proposal (ACP) describes a change to the standard library.

Frequently asked questions

What goals were not accepted?

The following goals were proposed but not accepted:

Goal | Point of contact | Progress

Goals

This page lists the 40 project goals proposed for 2025h1.

Inclusion on this list does not mean a goal has been accepted. The owner of the goal process makes the final decisions on which goals to include and prepares an RFC to ask approval from the teams.

Flagship goals

Flagship goals represent the goals expected to have the broadest overall impact.

Other goals

These are the other proposed goals.

Invited goals. Some of the goals below are "invited goals", meaning that for the goal to happen someone needs to step up and serve as its owner. To find the invited goals, look for the Help wanted badge in the table below. Invited goals have reserved team capacity and a mentor, so if you are looking to help Rust progress, they are a great way to get involved.

Goal | Point of contact | Team
"Stabilizable" prototype for expanded const generics | Boxy | lang, types
Continue resolving cargo-semver-checks blockers for merging into cargo | Predrag Gruevski | cargo, rustdoc
Declarative (macro_rules!) macro improvements | Josh Triplett | lang, wg-macros
Evaluate approaches for seamless interop between C++ and Rust | Tyler Mandry | compiler, lang, libs-api
Experiment with ergonomic ref-counting | Santiago Pastorino | lang
Expose experimental LLVM features for GPU offloading | Manuel Drehwald | compiler, lang
Extend pubgrub to match cargo's dependency resolution | Jacob Finkelman | cargo
Externally Implementable Items | Mara Bos | compiler, lang
Field Projections | Benno Lossin | compiler, lang
Finish the libtest json output experiment | Ed Page | testing-devex
Implement Open API Namespace Support | Help Wanted | cargo, compiler
Implement restrictions, prepare for stabilization | Jacob Pratt | compiler, lang
Improve state machine codegen | Folkert de Vries | compiler, lang
Instrument the Rust standard library with safety contracts | Celina G. Val | compiler, libs
Integration of the FLS into the Rust Project | Joel Marcey | release, spec
Making compiletest more maintainable: reworking directive handling | Jieyou Xu | bootstrap, compiler, rustdoc
Metrics Initiative | Jane Lusby
Model coherence in a-mir-formality | Niko Matsakis | types
Next-generation trait solver | lcnr | types
Nightly support for ergonomic SIMD multiversioning | Luca Versari | lang
Null and enum-discriminant runtime checks in debug builds | Bastian Kersting | compiler, lang, opsem
Optimizing Clippy & linting | Alejandra González | clippy
Promoting Parallel Front End | Sparrow Li | compiler
Prototype a new set of Cargo "plumbing" commands | Help Wanted | cargo
Publish first version of StableMIR on crates.io | Celina G. Val | compiler, project-stable-mir
Research: How to achieve safety when linking separately compiled code | Mara Bos | compiler, lang
Run the 2025H1 project goal program | Niko Matsakis | leadership-council
Rust Specification Testing | Connor Horman | bootstrap, compiler, lang, spec
Rust Vision Document | Niko Matsakis | leadership-council
SVE and SME on AArch64 | David Wood | compiler, lang, types
Scalable Polonius support on nightly | Rémy Rakic | types
Secure quorum-based cryptographic verification and mirroring for crates.io | @walterhpearce | cargo, crates-io, infra, leadership-council
Stabilize public/private dependencies | Help Wanted | cargo, compiler
Unsafe Fields | Jack Wrenn | compiler, lang
Use annotate-snippets for rustc diagnostic output | Scott Schafer | compiler
build-std | David Wood | cargo
rustc-perf improvements | David Wood | compiler, infra

Bring the Async Rust experience closer to parity with sync Rust

Metadata
Point of contactTyler Mandry
Teamslang, libs, libs-api, types
StatusProposed for flagship

Summary

Over the next six months, we will continue bringing Async Rust up to par with "sync Rust" by doing the following:

  • Telling a complete story for the use of async fn in traits, unblocking wide ecosystem adoption,
  • Improving the ergonomics of Pin, which is frequently used in low-level async code, and
  • Preparing to support asynchronous (and synchronous) generators in the language.

Motivation

This goal represents the next step on a multi-year program aiming to raise the experience of authoring "async Rust" to the same level of quality as "sync Rust". Async Rust is a crucial growth area, with 52% of the respondents in the 2023 Rust survey indicating that they use Rust to build server-side or backend applications.

The status quo

Async Rust is the most common Rust application area according to our 2023 Rust survey. Rust is a great fit for networked systems, especially in the extremes:

  • Rust scales up. Async Rust reduces cost for large dataplanes because a single server can serve high load without significantly increasing tail latency.
  • Rust scales down. Async Rust can be run without requiring a garbage collector or even an operating system, making it a great fit for embedded systems.
  • Rust is reliable. Networked services run 24/7, so Rust's "if it compiles, it works" mantra means fewer unexpected failures and, in turn, fewer pages in the middle of the night.

Despite async Rust's popularity, using async I/O makes Rust significantly harder to use. As one Rust user memorably put it, "Async Rust is Rust on hard mode." Several years back the async working group collected a number of "status quo" stories as part of authoring an async vision doc. These stories reveal a number of characteristic challenges:

The next 6 months

Tell a complete story for async fn in traits

  • Unblock AFIT in public traits by stabilizing RTN and implementable trait aliases (unblock tower 1.0)
  • Ship 1.0 of the dynosaur crate, enabling dynamic dispatch with AFIT
  • Stretch goal: Implement experimental support for async fn in dyn Trait in nightly
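Note that the core of this story, async fn in traits (AFIT), is already usable on stable Rust (since 1.75); the goal items above extend it with RTN, trait aliases, and dyn support. The sketch below shows AFIT on stable with a hand-rolled single-future executor for illustration; the `Fetch`/`Local` names and the `block_on` helper are hypothetical, not part of any proposed API:

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// async fn in a (private) trait: stable since Rust 1.75.
trait Fetch {
    async fn fetch(&self) -> u32;
}

struct Local(u32);

impl Fetch for Local {
    async fn fetch(&self) -> u32 {
        self.0
    }
}

// Tiny executor: poll the future with a no-op waker until it is ready.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn clone(p: *const ()) -> RawWaker {
        RawWaker::new(p, &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let source = Local(42);
    assert_eq!(block_on(source.fetch()), 42);
}
```

What AFIT alone cannot yet express on stable is exactly what the bullets target: bounding the returned future (e.g. requiring it to be Send, which RTN addresses) and dynamic dispatch over such traits (which dynosaur and the dyn experiment address).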

Improve ergonomics around Pin

  • Ratify and implement an RFC for auto-reborrowing of pinned references
  • Stretch goal: Discuss and implement a design for safe pin projection
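The ergonomics gap can be seen on stable today: Pin<&mut T> is not Copy, so every extra call through it needs an explicit `.as_mut()` reborrow, unlike plain &mut T. A minimal sketch of the status quo (the `Counter`/`bump` names are illustrative only):

```rust
use std::pin::{pin, Pin};

struct Counter {
    calls: u32,
}

impl Counter {
    // A method taking a pinned mutable reference, as async primitives often do.
    fn bump(self: Pin<&mut Self>) -> u32 {
        // Counter is Unpin, so get_mut is safe here.
        let this = self.get_mut();
        this.calls += 1;
        this.calls
    }
}

fn main() {
    let mut counter = pin!(Counter { calls: 0 });
    // Today each call but the last needs an explicit reborrow; the proposed
    // auto-reborrowing RFC would make these implicit, as for &mut references.
    assert_eq!(counter.as_mut().bump(), 1);
    assert_eq!(counter.as_mut().bump(), 2);
    assert_eq!(counter.bump(), 3); // the final use may consume the Pin
}
```

With auto-reborrowing of pinned references, the `.as_mut()` calls above would become unnecessary, matching the implicit reborrowing users already rely on for &mut.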

Work toward asynchronous generators

  • Have design meetings and ratify an RFC for synchronous generators
  • Have a design meeting for asynchronous iteration
  • Stretch goal: Ratify an RFC for unsafe binders
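For context on what synchronous generators would buy: today every custom iterator is a hand-written state machine. A stable-Rust sketch of the status quo (the `Countdown` type is illustrative, not a proposed API; the commented `gen` syntax is only a rough approximation of the design under discussion):

```rust
// Today: an explicit state machine implementing Iterator by hand.
struct Countdown {
    current: u32,
}

impl Iterator for Countdown {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        if self.current == 0 {
            None
        } else {
            let value = self.current;
            self.current -= 1;
            Some(value)
        }
    }
}

fn main() {
    let values: Vec<u32> = Countdown { current: 3 }.collect();
    assert_eq!(values, vec![3, 2, 1]);
    // With generators this could instead read roughly as straight-line code:
    //   gen { let mut n = 3; while n > 0 { yield n; n -= 1; } }
}
```

The async variant of the same question, how a generator body interacts with pinning and an async iteration trait, is precisely why the RFC for the synchronous case is being tackled first.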

In H2 we hope to tackle the following:

  • RTN in type position
  • Ratified RFC for asynchronous iteration

The "shiny future" we are working towards

Writing async code in Rust should feel just as expressive, reliable, and productive as writing sync code in Rust. Our eventual goal is to provide Rust users building on async with

  • the same core language capabilities as sync Rust (async traits with dyn dispatch, async closures, async drop, etc);
  • reliable and standardized abstractions for async control flow (streams of data, error recovery, concurrent execution), free of accidental complexity;
  • an easy "getting started" experience that builds on a rich ecosystem;
  • good performance by default, peak performance with tuning;
  • the ability to easily adopt custom runtimes when needed for particular environments, language interop, or specific business needs.

Design axioms

  • Uphold sync Rust's bar for reliability. Sync Rust famously delivers on the general feeling of "if it compiles, it works" -- async Rust should do the same.
  • Lay the foundations for a thriving ecosystem. The role of the Rust org is to develop the rudiments that support an interoperable and thriving async crates.io ecosystem.
  • When in doubt, zero-cost is our compass. Many of Rust's biggest users are choosing it because they know it can deliver the same performance (or better) than C. If we adopt abstractions that add overhead, we are compromising that core strength. As we build out our designs, we ensure that they don't introduce an "abstraction tax" for using them.
  • From embedded to GUI to the cloud. Async Rust covers a wide variety of use cases and we aim to make designs that can span those differing constraints with ease.
  • Consistent, incremental progress. People are building async Rust systems today -- we need to ship incremental improvements while also steering towards the overall outcome we want.

Ownership and team asks

This section defines the specific work items that are planned and who is expected to do them. It should also include what will be needed from Rust teams. The overall owner of the effort is Tyler Mandry. We have identified owners for subitems below; these may change over time.

Overall program management

Task | Owner(s) or team(s) | Notes
AFIT story blog post | Tyler Mandry

Return type notation

Task | Owner(s) or team(s) | Notes
Initial implementation | Michael Goulet | Complete
Author RFC | Niko Matsakis | Complete
RFC decision | lang team | Complete
Finished implementation | Michael Goulet
Standard reviews | types and compiler teams
Stabilization decision | lang team

Unsafe binders

Task | Owner(s) or team(s) | Notes
Initial implementation | Michael Goulet | Stretch goal
Author RFC | Niko Matsakis | Stretch goal
RFC decision | lang team | Stretch goal

Implementable trait aliases

Task | Owner(s) or team(s) | Notes
Author RFC | Tyler Mandry
Implementation | Michael Goulet
Standard reviews | types and compiler teams
RFC decision | lang and types teams

async fn in dyn Trait

Task | Owner(s) or team(s) | Notes
Lang-team experiment | Niko Matsakis | (Approved)
Implementation | Michael Goulet | Stretch goal

Pin reborrowing

Task | Owner(s) or team(s) | Notes
Implementation | Eric Holk
Author RFC | Eric Holk
RFC decision | lang team

Safe pin projection

Task | Owner(s) or team(s) | Notes
Lang-team experiment | lang team
Implementation | | Stretch goal
Design meeting | lang team | Stretch goal

Trait for generators (sync)

Task | Owner(s) or team(s) | Notes
Implementation | Eric Holk
Author RFC |
RFC decision | libs-api and lang teams
Design meeting | lang team | 2 meetings expected

Trait for async iteration

Task | Owner(s) or team(s) | Notes
Design meeting | lang and libs-api teams

Dynosaur 1.0

Task | Owner(s) or team(s) | Notes
Implementation | Santiago Pastorino
Standard reviews | Tyler Mandry

Definitions

Definitions for terms used above:

  • Discussion and moral support is the lowest level offering, basically committing the team to nothing but good vibes and general support for this endeavor.
  • Author RFC and Implementation means actually writing the code, document, whatever.
  • Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
  • RFC decision means reviewing an RFC and deciding whether to accept it.
  • Org decision means reaching a decision on an organizational or policy matter.
  • Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
  • Stabilization decision means reviewing a stabilization report and deciding whether to stabilize.
  • Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
  • Prioritized nominations refers to prioritized lang-team response to nominated issues, with the expectation that there will be some response from the next weekly triage meeting.
  • Dedicated review means identifying an individual (or group of individuals) who will review the changes, as they're expected to require significant context.
  • Other kinds of decisions:
    • Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
    • Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
    • Library API Change Proposal (ACP) describes a change to the standard library.

Frequently asked questions

Why work on synchronous generators if your goal is to support async?

There are three features we want that all interact quite heavily with each other:

  • Sync generators
  • Async generators
  • Async iteration trait

Of the three, we think we are the closest to ratifying an RFC for synchronous generators. This should help clarify one of the major outstanding questions for the other two items; namely, the relation to pinning. With that out of the way, we should be better able to focus on the iteration trait and how well it works with async generators.

Focusing on pinning first also synergizes well with the efforts to improve the ergonomics of pinning.

Rust All-Hands 2025!

Metadata
Point of contactMara Bos
Teamsleadership-council
StatusProposed for flagship

Summary

Organise another Rust All-Hands in 2025!

Motivation

It's been too long since we've had a Rust All-Hands. Time to bring it back!

The status quo

Previous Rust All-Hands events were very useful and successful, but after the changes at Mozilla, we haven't had a Rust All-Hands in six years. Meanwhile, the project has grown a lot, making it much harder and more expensive to organise a new Rust All-Hands.

A few months ago, when Jack brought up having a new Rust All-Hands in the Leadership Council meeting, Mara proposed having the new Rust All-Hands at RustWeek 2025 by RustNL, which will take place in the Netherlands around the 10th birthday of Rust 1.0. Both RustNL and the Rust Leadership Council agreed.

See https://blog.rust-lang.org/inside-rust/2024/09/02/all-hands.html

The next 6 months

  • Prepare the all-hands and everything around it.
  • Have a social and informal "pre-all-hands day" on Rust's 10th birthday: May 15, 2025.
  • Have a two-day all-hands on May 16 and May 17, 2025.

The "shiny future" we are working towards

The immediate goal is a very successful and productive Rust All-Hands 2025. Hopefully, this will be the first step towards the larger goal of having regular Rust All-Hands again.

We should be able to use the feedback and lessons learned from this event for the next ones, and to figure out what the right frequency would be (yearly or less often). Repeating an event tends to be much easier than organising one from scratch.

Design axioms

  • Accessibility. Ideally, everyone in the project should be able to attend the Rust All-Hands.
  • Productivity and effectiveness. We should make optimal use of the event, to make it worth everyone's time.
  • Low stress. The event or the planning of it should not burn anyone out. It is a tool to help the project and its members, after all!
  • Space for social events. The goal is not just to work on technical things together, but also to get to know each other and become a closer team.

Ownership and team asks

Owner: Mara

Task | Owner(s) or team(s) | Notes
Pick the dates | RustNL, leadership-council | Complete
Allocate funds | leadership-council team | Complete for event
Allocate funds | Rust Foundation | Complete for travel
Book the venue | RustNL | Complete
Catering / snacks / food / drinks | RustNL | Complete
Register for the Rust All-Hands 2025 | every project member | Majority already signed up!
Send out confirmations/tickets | Mara Bos
Send out detailed information | Mara Bos
Answer logistical questions | Mara Bos
Interface between project and RustNL | Mara Bos
Make hotel reservations | RustNL | In progress
Book hotel | all participants | RustNL will provide suggestions
Book travel | all participants
Miscellaneous | leadership-council team | Prepare one or two plenary sessions
Submit talks for the "Rust Project Track" | project members | Possibility to give talks at the conference before the all-hands.
Moderation / safety | RustNL and moderator team
Accessibility and COVID safety | RustNL
Come to the Rust All-Hands 2025 | all participants
Reimburse travel costs | Rust Foundation

Team swag

Task | Owner(s) or team(s) | Notes
Miscellaneous | leadership-council team | Decide on team swag; suggestions very welcome!
Acquire swag | RustNL

Make plans for what to do at the all-hands

| Task | Owner(s) or team(s) | Notes |
|---|---|---|
| Gather input from your teams | team leads (or delegate) | |
| Provide input for planning team | team leads (or delegate) | |
| Make an agenda for your team's room | team leads (or delegate) | |
| Coordinate the overall process | Planning team | Small group of 2-3 people. Volunteers welcome! |
| Make a room plan (after gathering input) | Planning team | |

Organise an optional "pre all-hands day"

| Task | Owner(s) or team(s) | Notes |
|---|---|---|
| Organisation | Mara Bos, RustNL | An optional day without an agenda, with space for social activities |
| Acquire gifts (secret!) | Mara Bos | Complete |

Frequently asked questions

I'm so excited about the all hands!

Me too!

How will we schedule meetings?

Jack Huey asked:

The key challenge here is going to be scheduling meetings for teams with overlapping membership. Likely it'll probably make sense to stagger team meetings such that there are blocks of time with very few overlaps of "official" meetings (with the acknowledgement that maybe that means during those times the parallelism goes down).

Mara Bos answered:

Yeah that's part of the reason for a "Planning team" to "Coordinate the overall process". Hopefully we can make the draft agendas public early so teams can work together directly to align their plans a bit. It's going to be a fun puzzle though. ^^

Who can attend?

Jack Huey asked:

One thing I haven't seen listed is who other than the Project is/may attend? From what I remember, there is a separate space available - it would be really good to have an actual list for that, which is available to Project members, since people may be interested in chatting/coordinating.

Mara Bos answered:

There will be an unconference at the same venue that will host groups like the embedded working group, Rust for Linux, Bevy maintainers, UI/App developers, and a few other groups. That part is handled by RustNL as a mostly separate event, that just happens to take place in the same location.

RustNL will of course share which groups those will be once they confirm, so we can coordinate potential collaboration with these groups. But from an organisational (and funding) perspective, we treat the all-hands and the unconference as mostly separate events, which is why I didn't include it in the project goal.

Stabilize tooling needed by Rust for Linux

Metadata
Short title: Rust-for-Linux
Point of contact: Niko Matsakis
Teams: compiler
Status: Proposed for flagship
Tracking issue: rust-lang/rust-project-goals#116

Summary

Continue working towards Rust for Linux on stable, turning focus from language features to compiler and tooling.

Motivation

This goal continues our push to support the Linux kernel building on stable Rust. The focus in 2025H1 is shifting from language features, which were largely completed in 2024H2, towards compiler flags and tooling support. The Linux kernel makes use of a number of unstable compiler options for target-specific optimizations, code hardening, and sanitizer integration. It also requires a custom build of the standard library and has a hacky integration with rustdoc to enable the use of doctests. We are looking to put all of these items onto a stable foundation.

The status quo

The Rust For Linux (RFL) project has been accepted into the Linux kernel in experimental status. The project's goal, as described in the Kernel RFC introducing it, is to add support for authoring kernel components (modules, subsystems) using Rust. Rust would join C, making them the only two languages permitted in the Linux kernel. This is a very exciting milestone for Rust, but it's also a big challenge.

Integrating Rust into the Linux kernel means that Rust must be able to interoperate with the kernel's low-level C primitives for things like locking, linked lists, allocation, and so forth. This interop requires Rust to expose low-level capabilities that don't currently have stable interfaces.

The dependency on unstable features is the biggest blocker to Rust exiting "experimental" status. Because unstable features carry no reliability guarantees, RFL can only be built with a specific, pinned version of the Rust compiler. This is a challenge for distributions that wish to build a range of kernel sources with the same compiler, rather than having to select a particular toolchain for each kernel version.

Longer term, having Rust in the Linux kernel is an opportunity to expose more C developers to the benefits of using Rust. But that exposure can go both ways. If Rust is constantly causing pain related to toolchain instability, or if Rust isn't able to interact gracefully with the kernel's data structures, kernel developers may have a bad first impression that causes them to write off Rust altogether. We wish to avoid that outcome. And besides, the Linux kernel is exactly the sort of low-level systems application we want Rust to be great for!

For deeper background, please refer to these materials:

What we have done so far

We began the push towards stable support for RFL in 2024H2 with a project goal focused on language features. Over the course of those six months we:

  • Stabilized the CoercePointee derive, supporting the kernel's use of smart pointers to model intrusive linked lists.
  • Stabilized basic usage of asm_goto. Based on a survey of the kernel's usage, we modified the existing design and also proposed two extensions.
  • Stabilized offset_of syntax applied to structs.
  • Added Rust-for-Linux to the Rust CI to avoid accidental breakage.
  • Stabilized support for pointers to static in constants.

The one feature which was not stabilized yet is arbitrary self types v2, which reached "feature complete" status in its implementation. Stabilization is expected in early 2025.

We also began work on tooling stabilization with an RFC proposing an approach to stabilizing ABI-modifying compiler flags.

The next six months

Over the next six months our goal is to stabilize the major bits of tooling used by the Rust for Linux project. Some of these work items are complex enough to be tracked independently as their own project goals, in which case they are linked.

  • implementing RFC #3716 to stabilize ABI-modifying compiler flags to control code generation, sanitizer integration, and so forth:
    • arm64: -Zbranch-protection, -Zfixed-x18, -Zuse-sync-unwind.
    • x86: -Zcf-protection, -Zfunction-return, -Zno-jump-tables, -Zpatchable-function-entry, retpoline (+retpoline-external-thunk,+retpoline-indirect-branches,+retpoline-indirect-calls), SLS (+harden-sls-ijmp,+harden-sls-ret).
    • x86 32-bit: -Zregparm=3, -Zreg-struct-return.
    • LoongArch: -Zdirect-access-external-data.
    • production sanitizer flags: -Zsanitizer=shadow-call-stack, -Zsanitizer=kcfi, -Zsanitizer-cfi-normalize-integer.
  • the ability to extract dependency info and to configure no-std without requiring it in the source file:
    • currently using -Zbinary_dep_depinfo=y and -Zcrate-attr
  • stable rustdoc features allowing the RFL project to extract and customize rustdoc tests;
  • clippy configuration (.clippy.toml in particular and CLIPPY_CONF_DIR);
  • a blessed way to rebuild std: RFL needs a way to rebuild the standard library using stable calls to rustc. Currently building the standard library with rustc is not supported. This is a precursor to what is commonly called -Zbuild-std; it is also a blocker to making full use of ABI-modifying compiler flags and similar features, since they can't be used effectively unless the standard library is also rebuilt.

In addition, as follow-up from 2024H2, we wish to complete arbitrary self types v2 stabilization.
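Returning to the "blessed way to rebuild std" item above, the following is a hedged sketch of the kind of invocation involved today; the cargo flag shown is the existing unstable mechanism named in the list, and the eventual stable interface is still to be designed:

```shell
# Today, rebuilding the standard library from source is nightly-only,
# via cargo's unstable -Zbuild-std flag (shown for illustration only):
#
#   cargo +nightly build -Z build-std=core,alloc --target x86_64-unknown-linux-gnu
#
# RFL needs a stable, supported equivalent, so that ABI-modifying flags
# and sanitizers can be applied consistently to core/alloc as well.
```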

The "shiny future" we are working towards

The ultimate target for this line of work is that Rust code in the Linux kernel builds on stable Rust with a Minimum Supported Rust Version (MSRV) tied to some external benchmark, such as Debian stable. This is the minimum requirement for Rust integration to proceed from an "experiment" to something that could become a permanent part of Linux.

Looking past the bare minimum, the next target would be making "quality of life" improvements that make it more ergonomic to write Rust code in the kernel (and similar codebases). One such example is the proposed experiment for field projections.

Design axioms

  • First, do no harm. If we want to make a good first impression on kernel developers, the minimum we can do is fit comfortably within their existing workflows so that people not using Rust don't have to do extra work to support it. So long as Linux relies on unstable features, users will have to ensure they have the correct version of Rust installed, which means imposing labor on all Kernel developers.
  • Don't let perfect be the enemy of good. The primary goal is to offer stable support for the particular use cases that the Linux kernel requires. Wherever possible we aim to stabilize features completely, but if necessary, we can try to stabilize a subset of functionality that meets the kernel developers' needs while leaving other aspects unstable.

Ownership and team asks

Here is a detailed list of the work to be done and who is expected to do it. This table includes the work to be done by owners and the work to be done by Rust teams (subject to approval by the team in an RFC/FCP).

  • The Team badge indicates a requirement where Team support is needed.
| Task | Owner(s) or team(s) | Notes |
|---|---|---|
| Discussion and moral support | Team compiler, rustdoc, cargo | |
| Overall program management | Niko Matsakis | |

ABI-modifying compiler flags

Goal: stabilizing various ABI-modifying flags such as -Zbranch-protection and friends.

| Task | Owner(s) or team(s) | Notes |
|---|---|---|
| Author RFC | Alice Ryhl | Completed |
| RFC decision | Team compiler | RFC #3716, currently in PFCP |
| Implementation | Help Wanted | For each flag, need to move flags from -Z to -C etc. |
| Standard reviews | Team compiler | |
| Stabilization decision | Team compiler | For each of the relevant compiler flags |

Extract dependency information, configure no-std externally

Goal: support extraction of dependency information (similar to -Zbinary_dep_depinfo=y today) and the ability to write crates without an explicit, per-crate #![no_std] (achieved via -Zcrate-attr today).

Right now there is no plan for how to approach this. This task needs an owner to pick it up, make a plan, and execute.

| Task | Owner(s) or team(s) | Notes |
|---|---|---|
| Author a plan | Help Wanted | |
| Implementation | Help Wanted | |
| Standard reviews | Team compiler | |
| Stabilization decision | Team compiler | |

Rustdoc features to extract doc tests

Goal: stable rustdoc features sufficient to extract doc tests without hacky regular expressions

| Task | Owner(s) or team(s) | Notes |
|---|---|---|
| Author RFC | Help Wanted | |
| RFC decision | Team rustdoc | |
| Implementation | Help Wanted | |
| Standard reviews | Team rustdoc | |
| Stabilization decision | Team rustdoc | |

Clippy configuration

Goal: stabilized approach to customizing clippy (like .clippy.toml and CLIPPY_CONF_DIR today).

As discussed on Zulip, the relevant policy is already correct, but documentation is needed.

| Task | Owner(s) or team(s) | Notes |
|---|---|---|
| Author documentation | Help Wanted | |
| Stabilization decision | Team clippy | |

Blessed way to rebuild std

See build-std goal.

"Stabilizable" prototype for expanded const generics

Metadata
Point of contact: Boxy
Teams: types
Status: Proposed
Tracking issue: rust-lang/rust-project-goals#100

Summary

Experiment with a new min_generic_const_args implementation to address challenges found with the existing approach to supporting generic parameters in const generic arguments.

Motivation

min_const_generics was stabilized with the restriction that const-generic arguments may not use generic parameters other than a bare const parameter, e.g. Foo<N> is legal but not Foo<{ T::ASSOC }>. This restriction is lifted under feature(generic_const_exprs) however its design is fundamentally flawed and introduces significant complexity to the compiler. A ground up rewrite of the feature with a significantly limited scope (e.g. min_generic_const_args) would give a viable path to stabilization and result in large cleanups to the compiler.

The status quo

A large number of Rust users run into the min_const_generics limitation that it is not legal to use generic parameters with const generics. It is generally a bad user experience to hit a wall where a feature is unfinished, and this limitation also prevents patterns that are highly desirable. We have always intended to lift this restriction since stabilizing min_const_generics, but we did not know how.

It is possible to use generic parameters with const generics via feature(generic_const_exprs). Unfortunately, this feature has a number of fundamental issues that are hard to solve, and as a result it is very broken. This brokenness leads to two main problems:

  • When users hit a wall with min_const_generics they cannot reach for the generic_const_exprs feature because it is either broken or has no path to stabilization.
  • In the compiler, to work around the fundamental issues with generic_const_exprs, we have a number of hacks which negatively affect the quality of the codebase and the general experience of contributing to the type system.

The next six months

We have a design for min_generic_const_args in mind but would like to validate it through implementation as const generics has a history of unforeseen issues showing up during implementation. Therefore we will pursue a prototype implementation in 2025. As a stretch goal, we will attempt to review the design with the lang team in the form of a design meeting or RFC.

Over the past six months, preliminary refactors were made to allow actually implementing the core of the design. This took significantly longer than expected, which underscores the importance of actually implementing the design to see whether it works.

The "shiny future" we are working towards

The larger plan for const generics (beyond this project goal) is to bring const generics to feature parity with type generics:

  • Arbitrary types can be used in const generics instead of just integers, floats, bool, and char.
    • implemented under feature(adt_const_params) and is relatively close to stabilization
  • Generic parameters are allowed to be used in const generic arguments (e.g. Foo<{ <T as Trait>::ASSOC_CONST }>).
  • Users can specify _ as the argument to a const generic, allowing inferring the value just like with types.
    • implemented under feature(generic_arg_infer) and is relatively close to stabilization
  • Associated const items can introduce generic parameters to bring feature parity with type aliases
    • implemented under feature(generic_const_items), needs a bit of work to finish it. Becomes significantly more important after implementing min_generic_const_args
  • Introduce associated const equality bounds, e.g. T: Trait<ASSOC = N> to bring feature parity with associated types
    • implemented under feature(associated_const_equality), blocked on allowing generic parameters in const generic arguments

Allowing generic parameters to be used in const generic arguments is the only part of const generics that requires a significant amount of work while also having significant benefit. Everything else is already relatively close to stabilization. I chose to make this goal about implementing min_generic_const_args rather than "stabilize the easy stuff" because I would like to know whether the implementation of min_generic_const_args will surface constraints on the other features that may not be easy to fix in a backwards-compatible manner. Regardless, I expect these features to keep progressing while min_generic_const_args is being implemented.

Design axioms

  • Do not block future extensions to const generics
  • It should not feel worse to write type system logic with const generics compared to type generics
  • Avoid post-monomorphization errors
  • The "minimal" subset should not feel arbitrary

Ownership and team asks

Owner: Boxy, project-const-generics lead, T-types member

This section defines the specific work items that are planned and who is expected to do them. It should also include what will be needed from Rust teams.

| Task | Owner(s) or team(s) | Notes |
|---|---|---|
| Discussion and moral support | Team lang, types | |
| Implementation and mentoring | Boxy | |
| Implementation | Noah Lev, Michael Goulet | |
| Reviewer | Michael Goulet | |

Outputs and milestones

Outputs

  • A sound, fully implemented feature(min_generic_const_args) available on nightly
  • All issues with generic_const_exprs's design have been comprehensively documented (stretch goal)
  • RFC for min_generic_const_args's design (stretch goal)

Milestones

  • Prerequisite refactorings for min_generic_const_args have taken place
  • Initial implementation of min_generic_const_args lands and is useable on nightly
  • All known issues are resolved with min_generic_const_args
  • Document detailing generic_const_exprs issues
  • RFC is written and filed for min_generic_const_args

Frequently asked questions

Do you expect min_generic_const_args to be stabilized by the end?

No. The feature should be fully implemented such that it does not need any more work to be ready for stabilization; however, I do not intend to set the goal of actually stabilizing it, as it may wind up blocked on the new trait solver being stable first.

Continue resolving cargo-semver-checks blockers for merging into cargo

Metadata
Point of contact: Predrag Gruevski
Teams: cargo
Status: Proposed

Summary

Design and implement cargo-semver-checks functionality that lies on the critical path for merging the tool into cargo itself. Continues the work of the 2024h2 goal.

Motivation

Cargo assumes that all packages adhere to semantic versioning (SemVer). However, SemVer adherence is quite hard in practice: research shows that accidental SemVer violations are relatively common (lower-bound: in 3% of releases) and happen to Rustaceans of all skill levels. Given the significant complexity of the Rust SemVer rules, improvements here require better tooling.

cargo-semver-checks is a linter for semantic versioning (SemVer) in Rust. It is broadly adopted by the Rust community, and the cargo team has expressed interest in merging it into cargo itself as part of the existing cargo publish workflow. By default, cargo publish would require SemVer compliance, but offer a flag (analogous to the --allow-dirty flag for uncommitted changes) to override the SemVer check and proceed with publishing anyway.

The cargo team has identified a set of milestones and blockers that must be resolved before cargo-semver-checks can be integrated into the cargo publish workflow. Our goal here is to make steady progress toward resolving them.

The status quo after the 2024h2 goal

As part of the 2024h2 goal work, support for cargo manifest linting was merged into cargo-semver-checks. This lifted one of the blockers for SemVer-checking as part of cargo publish.

Work is still required in two major areas:

  • Checking of cross-crate items
  • SemVer linting of type information

Some work in each of these areas already happened in the 2024h2 goal:

  • The manifest linting work required a significant refactor of the tool's data-handling infrastructure. As part of that major refactor, we were able to also create "API space" for a future addition of cross-crate information.
  • The compiler team MCP required to expose cross-crate information to rustdoc was merged, and together with T-rustdoc, we now have a plan for exposing that information to cargo-semver-checks.
  • We have implemented a partial schema that makes available a limited subset of type information around generic parameters and trait bounds. It's sufficient to power a set of new lints, though it isn't comprehensive yet.

Fully resolving the blockers is likely a 12-24 month undertaking, and beyond the scope of this goal on its own. Instead, this goal proposes to accomplish intermediate steps that create immediate value for users and de-risk the overall endeavor, while requiring only "moral support" from the cargo team.

Checking of cross-crate items

This section is background information and is unchanged from the 2024h2 goal.

Currently, cargo-semver-checks performs linting by only using the rustdoc JSON of the target package being checked. However, the public API of a package may expose items from other crates. Since rustdoc no longer inlines the definitions of such foreign items into the JSON of the crate whose public API relies on them, cargo-semver-checks cannot see or analyze them.

This causes a massive number of false-positives ("breakage reported incorrectly") and false-negatives ("lint for issue X fails to spot an instance of issue X"). In excess of 90% of real-world false-positives are traceable back to a cross-crate item, as measured by our SemVer study!

For example, the following change is not breaking but cargo-semver-checks will incorrectly report it as breaking:

```rust
// previous release:
pub fn example() {}

// in the new release, imagine this function moved to `another_crate`:
pub use another_crate::example;
```

This is because the rustdoc JSON that cargo-semver-checks sees indeed does not contain a function named example. Currently, cargo-semver-checks is incapable of following the cross-crate connection to another_crate, generating its rustdoc JSON, and continuing its analysis there.

Resolving this limitation will require changes to how cargo-semver-checks generates and handles rustdoc JSON, since the set of required rustdoc JSON files will no longer be fully known ahead of time. It will also require CLI changes in the same area as the changes required to support manifest linting.

While there may be other challenges on rustc and rustdoc's side before this feature could be fully implemented, we consider those out of scope here since there are parallel efforts to resolve them. The goal here is for cargo-semver-checks to have its own story straight and do the best it can.

SemVer linting of type information

This section is background information and is unchanged from the 2024h2 goal.

In general, at the moment cargo-semver-checks lints cannot represent or examine type information. For example, the following change is breaking but cargo-semver-checks will not detect or report it:

```rust
// previous release:
pub fn example(value: String) {}

// new release:
pub fn example(value: i64) {}
```

Analogous breaking changes to function return values, struct fields, and associated types would also be missed by cargo-semver-checks today.

The main difficulty here lies with the expressiveness of the Rust type system. For example, none of the following changes are breaking:

```rust
// previous release:
pub fn example(value: String) {}

// new release:
pub fn example(value: impl Into<String>) {}

// subsequent release:
pub fn example<S: Into<String>>(value: S) {}
```

Similar challenges exist with lifetimes, variance, trait solving, async fn versus fn() -> impl Future, etc.

While some promising preliminary work has been done toward resolving this challenge, more in-depth design work is necessary to determine the best path forward.

The next 6 months

  • Prototype cross-crate linting using manual workarounds for the current rustc and rustdoc blockers. This will allow us to roll out a full solution relatively quickly after the rustc and rustdoc blockers are resolved.
  • Expose data on generic types, lifetimes, functions, methods, and bounds in sufficient granularity for linting.
  • Determine how to handle special cases, such as changes to impls or bounds involving 'static, ?Sized, dyn Trait etc.
  • Improve sealed trait analysis to account for #[doc(hidden)] items, resolving many false-positives.

The "shiny future" we are working towards

This section is unchanged from the 2024h2 goal.

Accidentally publishing SemVer violations that break the ecosystem is never fun for anyone involved.

From a user perspective, we want a fearless cargo update: one's project should never be broken by updating dependencies without changing major versions.

From a maintainer perspective, we want a fearless cargo publish: we want to prevent breakage, not to find out about it when a frustrated user opens a GitHub issue. Just like cargo flags uncommitted changes in the publish flow, it should also quickly and accurately flag breaking changes in non-major releases. Then the maintainer may choose to release a major version instead, or acknowledge and explicitly override the check to proceed with publishing as-is.

To accomplish this, cargo-semver-checks needs the ability to express more kinds of lints (including manifest and type-based ones), eliminate false-positives, and stabilize its public interfaces (e.g. the CLI). At that point, we'll have lifted the main merge-blockers and we can consider making it a first-party component of cargo itself.

Ownership and team asks

Owner: Predrag Gruevski, as maintainer of cargo-semver-checks

I (Predrag Gruevski) will be working on this effort. The only other resource request would be occasional discussions and moral support from the cargo and rustdoc teams, of which I already have the privilege as maintainer of a popular cargo plugin that makes extensive use of rustdoc JSON.

| Task | Owner(s) or team(s) | Notes |
|---|---|---|
| Prototype cross-crate linting using workarounds | Predrag Gruevski | |
| Allow linting generic types, lifetimes, bounds | Predrag Gruevski | |
| Handle "special cases" like 'static and ?Sized | Predrag Gruevski | |
| Handle #[doc(hidden)] in sealed trait analysis | Predrag Gruevski | |
| Discussion and moral support | Team cargo, rustdoc | |

Frequently asked questions

This section is unchanged from the 2024h2 goal.

Why not use semverver instead?

Semverver is a prior attempt at enforcing SemVer compliance, but has been deprecated and is no longer developed or maintained. It relied on compiler-internal APIs, which are much more unstable than rustdoc JSON and required much more maintenance to "keep the lights on." This also meant that semverver required users to install specific nightly versions that were known to be compatible with their version of semverver.

While cargo-semver-checks relies on rustdoc JSON which is also an unstable nightly-only interface, its changes are much less frequent and less severe. By using the Trustfall query engine, cargo-semver-checks can simultaneously support a range of rustdoc JSON formats (and therefore Rust versions) within the same tool. On the maintenance side, cargo-semver-checks lints are written in a declarative manner that is oblivious to the details of the underlying data format, and do not need to be updated when the rustdoc JSON format changes. This makes maintenance much easier: updating to a new rustdoc JSON format usually requires just a few lines of code, instead of "a few lines of code apiece in each of hundreds of lints."

Declarative (macro_rules!) macro improvements

Metadata
Point of contact: Josh Triplett
Teams: lang, wg-macros
Status: Proposed

Summary

In this project goal, I'll propose and shepherd Rust language RFCs to make macro_rules! macros just as capable as proc macros, and to make such macros easier to write. I'll also start prototyping extensions to the declarative macro system to make macros easier to write, with the aim of discussing and reaching consensus on those additional proposals during RustWeek (May 2025) at the latest. Finally, I'll write a series of Inside Rust blog posts on these features, to encourage crate authors to try them and provide feedback, and to plan transitions within the ecosystem.

The scope of this goal is an arc of many related RFCs that tell a complete story, as well as the implementation of the first few steps.

Motivation

This project goal will make it possible, and straightforward, to write any type of macro using the declarative macro_rules! system. This will make many Rust projects build substantially faster, make macros simpler to write and understand, and reduce the dependency supply chain of most crates.

The status quo

There are currently several capabilities that only proc macros provide: defining an attribute macro that you can invoke with #[mymacro], or defining a derive macro that you can invoke with #[derive(MyTrait)]. In addition, even where workarounds exist (such as the macro_rules_attribute crate), macro authors often reach for proc macros anyway, in order to write simpler procedural code rather than refactoring it into a declarative form.

Proc macros are complex to build, have to be built as a separate crate that needs to be kept in sync with your main crate, add a heavy dependency chain (syn/quote/proc-macro2) to projects using them, add to build time, and lack some features of declarative (macro_rules!) macros such as $crate.

As a result, proc macros contribute to the perceptions that Rust is complex, has large dependency supply chains, and takes a long time to build. Crate authors sometimes push back on (or feature-gate) capabilities that require proc macros if their crate doesn't yet have a dependency on any, to avoid increasing their dependencies.

The next 6 months

Over the next 6 months, I'll propose RFCs to improve the current state of declarative (macro_rules!) macros, and work with Eric Holk and Vincenzo Palazzo to get those RFCs implemented. Those RFCs together will enable:

  • Using macro_rules! to define attribute macros (#[attr])
  • Using macro_rules! to define derive macros (#[derive(Trait)])
  • Using macro_rules! to define unsafe attributes and unsafe derive macros.

I also have an RFC in progress ("macro fragment fields") to allow macro_rules! macros to better leverage the Rust parser for complex constructs. Over the next 6 months, I'll shepherd and refine that RFC, and design extensions of it to help parse additional constructs. (I expect this RFC to potentially require an additional design discussion before acceptance.) The goal will be to have enough capabilities to simplify many common cases of attribute macros and derive macros.

I'll propose initial prototypes of additional macro metavariable expressions to make macro_rules! easier to write, such as by handling multiple cases or iterating without having to recurse. This provides one of the key simplification benefits of proc macros, with minimal added complexity in the language. I expect these to reach pre-RFC form and be suitable for discussion at RustWeek in May 2025, and hopefully reach consensus, but I do not expect them to be fully accepted or shipped in the next 6 months.

In addition, as part of this goal, I intend to work with Eric Holk and Vincenzo Palazzo to revitalize the wg-macros team, and evaluate potential policies and delegations from lang, in a similar spirit to wg-const-eval, t-types, and t-opsem.

Much as with the const eval system, I expect this to be a long incremental road, with regular improvements to capabilities and simplicity. Crate authors can adopt new features as they arise, and transition from proc macros to declarative macros once they observe sufficient parity to support such a switch.

The "shiny future" we are working towards

In the shiny future of Rust, the vast majority of crates don't need to use proc macros. They can easily implement attributes, derives, and complex macros using exclusively the declarative macro_rules! system.

Furthermore, crate authors will not feel compelled to use proc macros for simplicity, and will not have to contort their procedural logic in order to express it as a declarative macro. Crate authors will be able to write macros using macro_rules! in either a recursive or semi-procedural style. For instance, this could include constructs like for and match.

I expect that all of these will be available to macros written in any edition, though I also anticipate the possibility of syntax improvements unlocked by future editions or within future macro constructs. For instance, currently Rust macros do not reserve syntax like $keyword (e.g. $for). Existing editions could require the ${...} macro metavariable syntax to introduce new constructs. Rust 2027 could reserve $keyword, and new syntax like macro could reserve such syntax in all editions.

Design axioms

  • Incremental improvements are often preferable to a ground-up rewrite. The ecosystem can adopt incremental improvements incrementally, and give feedback that inspires further incremental improvements.
  • There should never be a capability that requires using a proc macro.
  • The most obvious and simplest way to write a macro should handle all cases a user might expect to be able to write. Where possible, macros should automatically support new syntax variations of existing constructs, without requiring an update.
  • Macros should not have to recreate the Rust parser (or depend on crates that do so). Macros should be able to reuse the compiler's parser. Macros shouldn't have to parse an entire construct in order to extract one component of it.
  • Transforming iteration or matching into recursion is generally possible, but can sometimes obfuscate logic.
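
As a small illustration of the parser-reuse axiom, consider extracting just the name from a function definition today: the macro must spell out the whole shape of a fn item in its own matcher (this `fn_name!` macro is invented for illustration; fragment fields would expose such components directly):

```rust
// Status quo: to pull one component (the name) out of a function
// definition, the macro must re-state the structure of a fn item in its
// matcher. Fragment fields aim to let a single matched fragment expose
// such components without hand-written parsing.
macro_rules! fn_name {
    (fn $name:ident($($args:tt)*) $(-> $ret:ty)? $body:block) => {
        stringify!($name)
    };
}

fn main() {
    assert_eq!(fn_name!(fn demo(x: u32) -> u32 { x + 1 }), "demo");
}
```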

Ownership and team asks

Owner / Responsible Reporting Party: Josh Triplett

| Task | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Propose discussion session at RustWeek | Josh Triplett | |
| Policy decision | Team lang, wg-macros | Discussed with Eric Holk and Vincenzo Palazzo; lang would decide whether to delegate specific matters to wg-macros |

macro_rules! attributes

| Task | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Author/revise/iterate RFCs | Josh Triplett | |
| Prioritized nominations | Team lang | |
| RFC decision | Team lang | |
| Implementation of RFC | Eric Holk, Vincenzo Palazzo | |
| Iterate on design as needed | Josh Triplett | |
| Inside Rust blog post on attribute macros | Josh Triplett | |
| Process feedback from crate authors | Josh Triplett | |
| Author stabilization report (if ready) | Josh Triplett | |
| Stabilization decision | Team lang | |

macro_rules! derives

| Task | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Author/revise/iterate RFCs | Josh Triplett | |
| Prioritized nominations | Team lang | |
| RFC decision | Team lang | |
| Implementation of RFC | Eric Holk, Vincenzo Palazzo | |
| Iterate on design as needed | Josh Triplett | |
| Inside Rust blog post on derive macros | Josh Triplett | |
| Process feedback from crate authors | Josh Triplett | |
| Author stabilization report (if ready) | Josh Triplett | |
| Stabilization decision | Team lang | |

Design and iteration for macro fragment fields

| Task | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Author initial RFC | Josh Triplett | |
| Design meeting | Team lang | |
| RFC decision | Team lang | |
| Implementation of RFC | Eric Holk, Vincenzo Palazzo | |
| Iterate on design as needed | Josh Triplett | |
| Inside Rust blog post on additional capabilities | Josh Triplett | |
| Process feedback from crate authors | Josh Triplett | |
| Author stabilization report (if ready) | Josh Triplett | |
| Stabilization decision | Team lang | |
| Support lang experiments for fragment fields | Josh Triplett | |
| Author small RFCs for further fragment fields | Josh Triplett | |

Design for macro metavariable constructs

| Task | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Design research and discussions | Josh Triplett | |
| Discussion and moral support | Team lang, wg-macros | |
| Author initial RFC | Josh Triplett | |

Definitions

Definitions for terms used above:

  • Discussion and moral support is the lowest level offering, basically committing the team to nothing but good vibes and general support for this endeavor.
  • Author RFC and Implementation means actually writing the code, document, whatever.
  • Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
  • RFC decisions means reviewing an RFC and deciding whether to accept.
  • Org decisions means reaching a decision on an organizational or policy matter.
  • Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
  • Stabilizations means reviewing a stabilization report and deciding whether to stabilize.
  • Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
  • Prioritized nominations refers to prioritized lang-team response to nominated issues, with the expectation that there will be some response from the next weekly triage meeting.
  • Dedicated review means identifying an individual (or group of individuals) who will review the changes, as they're expected to require significant context.
  • Other kinds of decisions:
    • Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
    • Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
    • Library API Change Proposal (ACP) describes a change to the standard library.

Frequently asked questions

What about "macros 2.0"?

Whenever anyone proposes a non-trivial extension to macros, the question always arises of how it interacts with "macros 2.0", or whether it should wait for "macros 2.0".

"Macros 2.0" has come to refer to a few different things, ambiguously:

  • Potential future extensions to declarative macros to improve hygiene/namespace handling.
  • An experimental macro system using the keyword macro that partially implements hygiene improvements and experimental alternate syntax, which doesn't have a champion or a path to stabilization, and hasn't seen active development in a long time.
  • A catch-all for hypothetical future macro improvements, with unbounded potential for scope creep.

As a result, the possibility of "macros 2.0" has contributed substantially to "stop energy" around improvements to macros.

This project goal takes the position that "macros 2.0" is sufficiently nebulous and unfinished that it should not block making improvements to the macro system. Improvements to macro hygiene should occur incrementally, and should not block other improvements.

Could we support proc macros without a separate crate, instead?

According to reports from compiler experts, this would be theoretically possible but incredibly difficult, and is unlikely to happen any time soon. We shouldn't block on it.

In addition, this would not solve the problem of requiring proc macros to recreate the Rust parser (or depend on such a reimplementation).

What about a "comptime" system?

This would likewise be possible in the future, but we shouldn't block on it. And as above, this would not solve the problem of requiring such a system to recreate the Rust parser. We would still need a design for allowing such comptime functions to walk the Rust AST in a forward-compatible way.

Evaluate approaches for seamless interop between C++ and Rust

Metadata
Point of contact: Tyler Mandry
Teams: lang, compiler, libs-api
Status: Proposed

Summary

Seriously consider what it will take to enable Rust adoption in projects that must make use of large, rich C++ APIs. Map out the space of long-term solutions we are interested in. These solutions should enable interop between Rust and other languages in the future.

Motivation

Rust has seen broad and growing adoption across the software industry. This has repeatedly demonstrated the value of its commitment to safety and reliability. Memory safety, in particular, has caught the notice of governmental bodies in the European Union and the United States, among others.

We should aim to spread the benefits of Rust and its underlying ideas as far as possible across our industry and its users. While the current uptake of Rust is encouraging, it is limited today to areas where Rust adoption is relatively easy. There exists a large portion of production code in use today that cannot feasibly adopt Rust, and it is time we looked seriously at what it would take to change that.

The status quo

Costs of memory unsafety

Memory safety vulnerabilities are the most costly kinds of vulnerabilities, both for product owners and their users. These vulnerabilities and their costs have persisted despite the deployment of many mitigation measures in memory unsafe languages which often impose costs of their own.[1][2]

Experience has shown that regardless of the size of an existing codebase, incrementally adopting a memory safe language like Rust in new code brings roughly linear benefits in terms of new memory safety vulnerabilities. This is because most vulnerabilities come from new code, not old code.[3] This means Rust adoption has value even if only adopted in new code.

Given the growing recognition of this problem from within various technical communities, major technology companies, and major governmental bodies, there is increasing pressure to adopt memory safe languages across the board for all new code. As this proposal explains, this presents both a significant opportunity and a significant challenge for Rust.

[1]: https://alexgaynor.net/2020/may/27/science-on-memory-unsafety-and-security/
[2]: https://security.googleblog.com/2021/04/rust-in-android-platform.html
[3]: See https://security.googleblog.com/2024/09/eliminating-memory-safety-vulnerabilities-Android.html and https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html

Obstacles to memory safety

Roughly speaking, there are three axes to adoption of memory safety: Social, Technical, and Economic. Making progress along one axis can overcome blockers in the others.

For example, safety has become more socially desirable in many technical communities over the years, which has led to the development of mitigation measures and the adoption of languages like Rust. This has come partly as a result of the recognition of the economic costs of memory safety vulnerabilities.

For C/C++ this has led to an improvement along the technical front in terms of automated checking, in both static and dynamic tooling. However, this protracted effort has also revealed the limits of such an approach without language changes. While there have been calls for C++ to adopt memory safety features,[4] they have not gained traction within the C++ standards body for a combination of technical, social, and economic reasons.[5]

[4]: https://safecpp.org/draft.html
[5]: https://cor3ntin.github.io/posts/profiles

Obstacles to Rust adoption

Changing languages at a large scale is fearfully expensive.[6]

[6]: https://downloads.regulations.gov/ONCD-2023-0002-0020/attachment_1.pdf

Rust itself is a major technical breakthrough that enables safety from all kinds of undefined behavior, including spatial safety, temporal safety, and data race safety, with very high confidence. This makes it appealing for those looking to introduce safety to their codebase. Rust adoption is feasible in the following situations:

Feasible: New codebases with Rust-only dependencies

This includes completely new projects as well as complete rewrites of existing projects, when such rewrites are socially and economically viable.

Feasible: Interprocess boundaries

Projects with a natural interprocess boundary between components are more easily migrated to Rust. Because of the loose coupling enforced by the boundary, the project can be incrementally migrated one component at a time. Microservice architectures with their RPC/HTTP boundaries are one example of this.

Feasible: Small, simple intraprocess API surface

Projects with a small, simple API surface that can be manually expressed in terms of the C ABI. This boundary, expressed and invoked in unsafe code, is prone to human error. It can be maintainable when the surface is small enough, but this also means that Rust adoption can decrease safety at the language boundary.
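
As a minimal sketch of what such a hand-maintained C ABI boundary looks like (the `add` function is an invented stand-in; in a real project it would live in C or C++ and be declared in an extern block on the Rust side):

```rust
// A hand-written C ABI surface. Every signature here must be kept in sync
// with the foreign side by hand, which is what makes this approach
// error-prone as the surface grows. The function is defined in Rust only
// so that this sketch is self-contained.
use std::os::raw::c_int;

#[no_mangle]
pub extern "C" fn add(a: c_int, b: c_int) -> c_int {
    a + b
}

fn main() {
    assert_eq!(add(2, 3), 5);
}
```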

Feasible: Larger intraprocess API surface, but with limited vocabulary

Projects with a limited API vocabulary are able to use one of the existing interop tools like bindgen, cbindgen, or cxx.

Infeasible: Everything else

The fact that all of these options exist and undergo active development is a testament to the value developers see in Rust adoption. However, they leave out a large portion of production use cases today: Projects that make rich use of an API in a language like C++ where comparatively limited interop support exists for Rust, and that link in enough code to make rewriting infeasible.

Furthermore, the limitations of current interop tooling are not simply a matter of adding features. Many of them stem from a mismatch in the expressiveness of the two languages along various axes. As one example, C++ and Java both support overloading while Rust does not. In some cases this mismatch is broadly accepted as a missing feature in Rust that will be added in time. In others, Rust's lack of expressiveness may be considered a feature in itself.

These mismatches point to the limitations of such approaches. If we attempt to solve them one at a time, we may never reach the "shiny future" we are working towards.

The next 6 months

We do not propose any specific deliverables over the next six months. We only propose a discussion with the Language, Compiler, and Libs-API teams that takes a serious look at the problem space and what it would take to solve it. This discussion should incorporate lessons from existing projects and lay the foundation for future explorations and engagements.

Possible discussion topics include:

  • Coverage of rich C++ APIs, including those that make use of language features like templates, (partial) specialization, and argument-dependent lookup. (Lang + Compiler)
  • Seamless use of "vocabulary types" like strings, vectors, and hashmaps, including the various kinds of conversions in source and at the ABI level. (Lang + Libs-API)
  • A standard IDL for describing a Rust API/ABI that can be produced by the Rust compiler. (Lang + Compiler)

The "shiny future" we are working towards

It is essential that our industry adopts memory safety broadly. To realize this, Rust should be feasible to adopt in any application, particularly those which prioritize performance and reliability in addition to safety.

This includes making Rust feasible to adopt in both new and existing applications that make rich use of APIs in memory unsafe languages like C++. To the extent possible, incremental Rust adoption should only increase safety, never decrease it.

Given that this is a highly ambitious, multi-year project, we should begin with presenting the problem space as accurately as possible to the Rust language team as a way to receive guidance and build alignment on overall direction.

Design axioms

This goal adheres to the general design axioms in the interop initiative's problem statement:

  • Build the foundations for a better future while actively improving the present
  • Pursue high-quality interoperation from both sides
  • Pursue general-purpose interoperability (not tied to a specific toolchain/IR)
  • Avoid changes to Rust itself that would undermine its core values
  • Only change the language or standard library where external library solutions are insufficient

In addition, it proposes the following axioms:

  • Seek solutions that make 100% coverage possible. This means 100% of functions and methods defined in one language are callable in the other language. This may require some APIs to be unergonomic and/or unsafe to call.
  • Minimize the potential for human error. Interop should leverage trusted, automated tooling wherever possible.
  • Extend contracts between languages where possible. For example, a strongly typed interface in one language should be equally strongly typed in the other language, subject to the constraints imposed by that language.
  • Introduce zero overhead when calling between languages.
  • Prefer solutions that are general enough to apply to languages beyond C++.

Ownership and team asks

Owner: Jon Bauman and Tyler Mandry

| Task | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Discussion and moral support | Team lang, compiler, libs-api | |
| Design meeting | Team lang, compiler, libs-api | 2-3 meetings expected; all involve lang |
| Author design doc | Tyler Mandry (can drive) | |
| Author design doc | Help wanted | |
| Author design doc | Help wanted | |

Frequently asked questions

None yet.

Experiment with ergonomic ref-counting

Metadata
Point of contact: Santiago Pastorino
Teams: lang
Status: Proposed

Summary

  • Deliver a nightly implementation of the experimental use syntax for ergonomic ref-counting.
  • RFC decision on the above

Motivation

For 2025H1 we propose to continue pursuing the use syntax that makes it more ergonomic to work with "cheaply cloneable" data, particularly in use || closures. The specific goals are to land an experimental nightly implementation and an accepted RFC, so that we can collect feedback from Rust Nightly users.

Like many ergonomic issues, these impact all users, but the impact is particularly severe for newer Rust users, who have not yet learned the workarounds, or those doing higher-level development, where the ergonomics of Rust are being compared against garbage-collected languages like Python, TypeScript, or Swift.

The status quo

Many Rust applications, particularly those in higher-level domains, use reference-counted values to pass around core bits of context that are widely used throughout the program. Reference-counted values have the convenient property that they can be cloned in O(1) time and that these clones are indistinguishable from one another (for example, two handles to an Arc<AtomicUsize> both refer to the same counter). There are also a number of data structures in the stdlib and ecosystem, such as the persistent collections found in the im crate or the Sender types from std::sync::mpsc and tokio::sync::mpsc, that share this same property.

Rust's current rules mean that passing around values of these types must be done explicitly, with a call to clone. Transforming common assignments like x = y to x = y.clone() can be tedious but is relatively easy. However, this becomes a much bigger burden with closures, especially move closures (which are common when spawning threads or async tasks). For example, the following closure will consume the state handle, disallowing it from being used in later closures:

#![allow(unused)]
fn main() {
let state = Arc::new(some_state);
tokio::spawn(async move { /* code using `state` */ });
}

This scenario can be quite confusing for new users (see e.g. this 2014 talk at StrangeLoop where an experienced developer describes how confusing they found this to be). Many users settle on a workaround where they first clone the variable into a fresh local with a new name, such as:

#![allow(unused)]
fn main() {
let state = Arc::new(some_state);

let _state = state.clone();
tokio::spawn(async move { /*code using `_state` */ });

let _state = state.clone();
tokio::spawn(async move { /*code using `_state` */ });
}

Others adopt a slightly different pattern leveraging local variable shadowing:

#![allow(unused)]
fn main() {
let state = Arc::new(some_state);

tokio::spawn({
    let state = state.clone();
    async move { /*code using `state`*/ }
});
}
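
For a self-contained variant of the shadowing pattern above that runs without external crates, std::thread can stand in for tokio::spawn (the values here are invented for illustration):

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let state = Arc::new(42);

    // Same shadowing pattern as above: the spawned thread gets its own
    // clone, so `state` remains usable afterwards.
    let handle = thread::spawn({
        let state = state.clone();
        move || *state * 2
    });

    assert_eq!(handle.join().unwrap(), 84);
    assert_eq!(*state, 42); // the original handle is still usable
}
```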

Whichever pattern users adopt, explicit clones of reference-counted values lead to significant accidental complexity in many applications. As noted, cloning these values is cheap at runtime and has zero semantic importance, since each clone is as good as any other.

Impact on new users and high-level domains

The impact of this kind of friction can be severe. While experienced users have learned the workarounds and consider this a papercut, new users can find it bewildering and a total blocker. The impact is also particularly severe on projects attempting to use Rust in domains traditionally considered "high-level" (e.g., app/game/web development, data science, scientific computing). Rust's strengths have made it a popular choice for building underlying frameworks and libraries that perform reliably and with high performance. However, thanks in large part to these kinds of smaller papercut issues, it is not a great choice for consuming those libraries.

Users in higher-level domains are accustomed to the ergonomics of Python or TypeScript, and such ergonomic friction can make Rust a non-starter. Those users who stick with Rust long enough to learn the workarounds, however, often find significant value in its emphasis on reliability and long-term maintenance (not to mention performance). Small changes like avoiding explicit clones for reference-counted data can both make Rust more appealing in these domains and help Rust in domains where it is already widespread.

The next 6 months

In 2024H2 we began work on an experimental implementation (not yet landed) and authored a corresponding RFC, which has received substantial feedback. In 2025H1 we will continue by (a) landing the experimental branch and (b) addressing feedback on the RFC, reading it with the lang-team, and reaching a decision.

The "shiny future" we are working towards

This goal is scoped around reducing (or eliminating entirely) the need for explicit clones for reference-counted data. See the FAQ for other potential future work that we are not asking the teams to agree upon now.

Design axioms

We don't have consensus around a full set of "design axioms" for this design, but we do have alignment around the following basic points:

  • Explicit ref-counting is a major ergonomic pain point impacting both high- and low-level, performance oriented code.
  • The worst ergonomic pain arises around closures that need to clone their upvars.
  • Some code will want the ability to precisely track reference count increments.
  • The design should allow user-defined types to "opt-in" to the lightweight cloning behavior.

Ownership and team asks

| Task | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Implementation | Santiago Pastorino | |
| Reviews | Niko Matsakis | |
| Author RFC | Josh Triplett | Complete |
| Design meeting | Team lang | |
| RFC decision | Team lang | |

Frequently asked questions

None.

Expose experimental LLVM features for GPU offloading

Metadata
Point of contact: Manuel Drehwald
Teams: lang, compiler
Status: Proposed
Tracking issue: rust-lang/rust-project-goals#109
Other tracking issues: rust-lang/rust#124509

Summary

Expose experimental LLVM features for GPU offloading and allow combining it with the std::autodiff feature.

Motivation

Scientific computing, high performance computing (HPC), and machine learning (ML) share an interesting challenge: each, to a different degree, cares about highly efficient library and algorithm implementations, yet those libraries and algorithms are not always used by people with deep computer science experience. Rust is in a unique position here because ownership, lifetimes, and the strong type system can prevent many bugs. At the same time, strong aliasing information enables compelling performance optimizations in these fields, with gains well beyond those otherwise seen when comparing C++ with Rust, because automatic differentiation and GPU offloading benefit strongly from aliasing information.

The status quo

Thanks to PyO3, Rust has excellent interoperability with Python. Conversely, C++ has a relatively weak interop story, which can lead Python libraries to use slower C libraries as a backend instead, just to ease bundling and integration. Fortran is mostly used in legacy code and hardly used for new projects.

As a workaround, many researchers limit themselves to the features offered by compilers and libraries built on top of Python, like JAX, PyTorch, or, more recently, Mojo. Rust has many features that make it more suitable than those languages for developing a fast and reliable backend for performance-critical software. However, it lacks the GPU support which developers now expect.

Almost every language has some way of calling hand-written CUDA/ROCm/SYCL kernels, but the interesting feature of languages like Julia, or of libraries like JAX, is that they let users write kernels in the language they already know, or a subset of it, without having to learn anything new. Minor performance penalties are not that critical in such cases if the alternative is a CPU-only solution. Otherwise-worthwhile projects such as Rust-CUDA end up unmaintained because they are too much effort to maintain outside of LLVM or the Rust project.


The next six months

We are requesting support from the Rust project for continued experimentation:

  1. Merge an MVP #[offloading] fork which is able to run simple functions using rayon parallelism on a GPU, showing a speed-up.
  2. Show an example of how to combine #[offloading] with #[autodiff] to run a differentiated function on a GPU.

The "shiny future" we are working towards

The purpose of this goal is to enable continued experimentation with the underlying LLVM functionality. The eventual goal of this experimentation is that three important LLVM features (batching, autodiff, offloading) can be combined and work nicely together. The hope is that we will have state-of-the-art libraries like faer to cover linear algebra, and that we will start to see more and more libraries in other languages using Rust with these features as their backend. Use cases that don't require interactive exploration will also become more common in pure Rust.

Caveats to this future

There is not yet consensus amongst the relevant Rust teams as to how and/or whether this functionality should be exposed on stable. Some concerns that continued experimentation will hopefully help to resolve:

  • How effective and general purpose is this functionality?
  • How complex is this functionality to support, and how does that trade off with the value it provides? What is the right point on the spectrum of tradeoffs?
  • Can code using these Rust features still compile and run on backends other than LLVM, and on all supported targets? If not, how should we manage the backend-specific nature of it?
  • Can we avoid tying Rust features too closely to the specific properties of any backend or target, such that we're confident these features can remain stable over decades of future landscape changes?
  • Can we fully implement every feature of the provided functionality (as more than a no-op) on fully open systems, despite the heavily proprietary nature of parts of the GPU and accelerator landscape?

Design axioms

Offloading

  • We try to provide a safe, simple and opaque offloading interface.
  • The "unit" of offloading is a function.
  • We try to not expose explicit data movement if Rust's ownership model gives us enough information.
  • Users can offload functions which contain parallel CPU code, but do not have final control over how the parallelism will be translated to co-processors.
  • We accept that hand-written CUDA/ROCm/etc. kernels might be faster, but actively try to reduce differences.
  • We accept that we might need to provide additional control to the user to guide parallelism, if performance differences remain unacceptably large.
  • Offloaded code might not return the exact same values as code executed on the CPU. We will work with t-opsem to develop clear rules.

Autodiff

  • std::autodiff has been upstreamed as part of the last project goal. There are still open PRs under review, but I expect them to be merged in 2024.
  • Currently we work on adding custom-derivatives and will upstream support for batching/vectorization next, but both will be small PRs once the basic infrastructure is in place.
  • Some features, like safety checks or "TypeTrees", which will improve performance and catch usage mistakes, were removed from the previous upstreaming PRs to make reviewing easier. We will upstream them separately; each is only 100-300 lines of code and thus should be easy to review.


Ownership and team asks

Owner: Manuel Drehwald

Manuel S. Drehwald is working 5 days per week on this, sponsored by LLNL and the University of Toronto (UofT). He has a background in HPC and worked on a Rust compiler fork, as well as an LLVM-based autodiff tool for the last 3 years during his undergrad. He is now in a research-based master's degree program. Supervision and discussion on the LLVM side will happen with Johannes Doerfert and Tom Scogland.

Minimal "smoke test" reviews will be needed from the compiler-team. The Rust language changes at this stage are expected to be a minimal wrapper around the underlying LLVM functionality and the compiler team need only vet that the feature will not hinder usability for ordinary Rust users or cause undue burden on the compiler architecture itself. There is no requirement to vet the quality or usability of the design.

| Task | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Development | Manuel Drehwald | |
| Lang-team experiment | Team lang | (approved) |
| Standard reviews | Team compiler | |

Outputs and milestones

Outputs

  • An #[offload] rustc-builtin-macro which makes a function definition known to the LLVM offloading backend.

    • Made a PR to enable LLVM's offloading runtime backend.
    • Merge the offload macro frontend
    • Merge the offload Middle-end
  • An offload!([GPU1, GPU2, TPU1], foo(x, y, z)); macro (placeholder name) which will execute function foo on the specified devices.

  • An #[autodiff] rustc-builtin-macro which differentiates a given function.

    • Merge the Autodiff macro frontend
    • Merge the Autodiff Enzyme backend
    • Merge the Autodiff Middle-end
  • A #[batching] rustc-builtin-macro which fuses N function calls into one call, enabling better vectorization.
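To make concrete what the #[autodiff] macro is meant to produce, here is a hedged, plain-Rust sketch using only the standard library (no nightly features): the derivative function that an autodiff transform would generate for a toy function, checked against a finite-difference approximation. Everything here is hand-written for illustration; the macro would generate `df` automatically.

```rust
// f(x) = x^2 + 3x, a toy function to differentiate.
fn f(x: f64) -> f64 {
    x * x + 3.0 * x
}

// The derivative an autodiff tool would produce: f'(x) = 2x + 3.
// (Hand-written here; with #[autodiff] this would be generated.)
fn df(x: f64) -> f64 {
    2.0 * x + 3.0
}

// Central finite difference as a sanity check: (f(x+h) - f(x-h)) / 2h.
fn df_numeric(x: f64) -> f64 {
    let h = 1e-6;
    (f(x + h) - f(x - h)) / (2.0 * h)
}

fn main() {
    let x = 1.5;
    assert!((df(x) - df_numeric(x)).abs() < 1e-4);
    println!("f'({x}) = {}", df(x));
}
```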

Milestones

  • The first offloading step is the automatic copying of a slice or vector of floats to a device and back.

  • The second offloading step is the automatic translation of a (default) Clone implementation to create a host-to-device and device-to-host copy implementation for user types.

  • The third offloading step is to run some embarrassingly parallel Rust code (e.g. scalar times Vector) on the GPU.

  • Fourth we have examples of how rayon code runs faster on a co-processor using offloading.

  • Stretch-goal: combining autodiff and offloading in one example that runs differentiated code on a GPU.
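For illustration, the kind of embarrassingly parallel kernel the third milestone targets ("scalar times vector") can be as simple as the following. Today this runs on the CPU; the offloading work aims to let an annotated version of such a loop run on a GPU, with the slice copied to the device and back as described in the first milestone.

```rust
// Scale every element of a float slice in place — trivially parallel,
// since each element is independent of the others.
fn scale(factor: f32, data: &mut [f32]) {
    for x in data.iter_mut() {
        *x *= factor;
    }
}

fn main() {
    let mut v = vec![1.0_f32, 2.0, 3.0, 4.0];
    scale(2.0, &mut v);
    assert_eq!(v, [2.0, 4.0, 6.0, 8.0]);
    println!("{v:?}");
}
```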

Frequently asked questions

Why do you implement these features only on the LLVM backend?

Performance-wise, we have LLVM and GCC as performant backends. Modularity-wise, we have LLVM and especially Cranelift, which are nice to modify. It seems reasonable, then, that LLVM is the first backend to support new features in this field. Offloading support in particular should be implementable in other compiler backends, given pre-existing work like OpenMP offloading and WebGPU.

Do these changes have to happen in the compiler?

Yes, given how Rust works today.

However, both features could be implemented in user space if the Rust compiler someday supported reflection. In that case we could ask the compiler for the optimized backend IR of a given function, use either the AD or offloading abilities of the LLVM library to modify that IR, and generate a new function that the user could then call. This would require some discussion of how crates in the ecosystem can work with various LLVM versions, since crates are usually expected to have an MSRV, but the LLVM (and likely GCC/Cranelift) backends will have breaking changes, unlike Rust.

Batching?

This is offered by all autodiff tools. JAX has an extra command for it, whereas Enzyme (the autodiff backend) combines batching with autodiff. We might want to split these since both have value on their own.

Some libraries also offer array-of-struct vs struct-of-array features which are related but often have limited usability or performance when implemented in userspace.

Writing a GPU backend in 6 months sounds tough...

True. But as with the autodiff work, we are exposing something that already exists in the backend.

Rust, Julia, C++, Carbon, Fortran, Chapel, Haskell, Bend, Python, etc. should not all have to write their own GPU or autodiff backends. Most of these languages already share compiler optimizations through LLVM or GCC, so let's share this too. Of course, we should still push to use our Rust-specific magic.

How about Safety?

We want all of these features to be safe by default, and we are happy to not expose some features if the gain is too small for the safety risk. As an example, Enzyme can compute the derivative with respect to a global. That's probably too niche, and could be discouraged (or marked unsafe) in Rust.

Extend pubgrub to match cargo's dependency resolution

Metadata
Point of contactJacob Finkelman
Teamscargo
StatusProposed

Summary

Implement a standalone library based on pubgrub that models cargo's dependency resolution, and bring it to a quality of code such that it can be maintained by the cargo team. This lays the groundwork for improved cargo error messages, extensions for hotly requested features (e.g., better MSRV support, CVE-aware resolution), and support for a richer ecosystem of cargo extensions.

Motivation

Cargo's dependency resolver is brittle and under-tested. Disentangling implementation details, performance optimizations, and user-facing functionality will require a rewrite. Making the resolver a standalone modular library will make it easier to test and maintain.

The status quo

Big changes are required in cargo's resolver: there is lots of new functionality that will require changes to the resolver, and the existing resolver's error messages are terrible. Cargo's dependency resolver solves the NP-hard problem of taking a list of direct dependencies and an index of all available crates and returning an exact list of versions that should be built. This functionality is exposed in cargo's CLI as generating/updating a lock file. Nonetheless, any change to the current resolver in situ is extremely treacherous: because the problem is NP-hard, it is not easy to tell which code changes break load-bearing performance or correctness guarantees. It is also difficult to abstract and separate the existing resolver from the code base, because the current resolver relies on concrete datatypes from other modules in cargo to determine whether two crate versions are incompatible in any of the many possible ways.
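To illustrate the shape of the problem — not cargo's actual algorithm or data structures — here is a toy backtracking resolver over a hand-written index. Requirements here are simple minimum versions; real resolution adds semver ranges, features, and the performance and correctness constraints discussed above.

```rust
use std::collections::BTreeMap;

// Toy index: package -> list of (version, dependencies), where each
// dependency is (package, minimum version). Versions are plain integers.
type Index = BTreeMap<&'static str, Vec<(u32, Vec<(&'static str, u32)>)>>;

// Naive backtracking: pick the highest version satisfying each pending
// requirement, recurse on its dependencies, and undo the choice on failure.
// Panics if a required package is missing from the index (toy code).
fn resolve(
    index: &Index,
    pending: &[(&'static str, u32)],
    chosen: &mut BTreeMap<&'static str, u32>,
) -> bool {
    let Some(&(name, min)) = pending.first() else { return true };
    let rest = &pending[1..];
    if let Some(&v) = chosen.get(name) {
        // Already picked: it must also satisfy this requirement.
        return v >= min && resolve(index, rest, chosen);
    }
    for (v, deps) in index[name].iter().rev() {
        if *v < min {
            continue;
        }
        chosen.insert(name, *v);
        let mut next: Vec<_> = rest.to_vec();
        next.extend(deps.iter().copied());
        if resolve(index, &next, chosen) {
            return true;
        }
        chosen.remove(name); // backtrack
    }
    false
}

fn main() {
    let index: Index = BTreeMap::from([
        ("a", vec![(1, vec![]), (2, vec![("b", 1)])]),
        ("b", vec![(1, vec![]), (2, vec![("c", 1)])]),
        ("c", vec![(1, vec![])]),
    ]);
    let mut chosen = BTreeMap::new();
    assert!(resolve(&index, &[("a", 2)], &mut chosen));
    // Highest versions that satisfy everything: a=2, b=2, c=1.
    assert_eq!(chosen, BTreeMap::from([("a", 2), ("b", 2), ("c", 1)]));
}
```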

The next six months

Develop a standalone library for doing dependency resolution with all the functionality already supported by cargo's resolver. Prepare for experimental use of this library inside cargo.

The "shiny future" we are working towards

Eventually we should replace the existing entangled resolver in cargo with one based on separately maintained libraries. These libraries would provide simpler and isolated testing environments to ensure that correctness is maintained. Cargo plugins that want to control or understand what lock file cargo uses can interact with these libraries directly without interacting with the rest of cargo's internals.

Design axioms

  • Correct: The new resolver must perform dependency resolution correctly, which generally means matching the behavior of the existing resolver, and switching to it must not break Rust projects.
  • Complete output: The output from the new resolver should be demonstrably correct. There should be enough information associated with the output to determine that it made the right decision.
  • Modular: There should be a stack of abstractions, each one of which can be understood, tested, and improved on its own without requiring complete knowledge of the entire stack from the smallest implementation details to the largest overall use cases.
  • Fast: The resolver can be a slow part of people's workflow. Overall performance must be a high priority and a focus.

Ownership and team asks

Owner: Jacob Finkelman will own and lead the effort.

I (Jacob Finkelman) will be working full time on this effort. I am a member of the Cargo Team and a maintainer of pubgrub-rs.

Integrating the new resolver into Cargo and reaching the shiny future will require extensive collaboration and review from the Cargo Team. The next milestones involve independent work polishing various projects for publication. Review support from the cargo team, identifying what about the code needs to be documented and improved, will be invaluable. However, there is plenty of work clearly available to do; if team members are not available, progress will continue.

TaskOwner(s) or team(s)Notes
Discussion and moral supportTeam cargo
Implementation work on pubgrub libraryEh2406

Outputs

Standalone crates for independent components of cargo's resolver. We have already developed https://github.com/pubgrub-rs/pubgrub for solving the core of dependency resolution and https://github.com/pubgrub-rs/semver-pubgrub for doing mathematical set operations on semver requirements. The shiny future will involve several more crates, although their exact boundaries have not yet been determined. Eventually we will also deliver a -Z pubgrub flag for testing the new resolver in cargo itself.

Milestones

For all crate versions on crates.io the performance is acceptable.

There are some crates where pubgrub takes a long time to do resolution, and more where pubgrub takes longer than cargo's existing resolver. We will investigate each of these cases and figure out whether performance can be improved, either through improvements to the underlying pubgrub algorithm or through changes in how the problem is presented to pubgrub.

Make a new release of pubgrub with the features developed for the prototype.

The prototype testing tool has relied on pubgrub as a git dependency. This has allowed rapid feedback on proposed changes. Before cargo can depend on PubGrub these changes need to be polished and documented to a quality appropriate for publication on crates.io.

Determine what portion of the prototype can be maintained as a standalone library.

One of the goals of this effort is to have large portions of resolution be maintained as separate packages, allowing for their use and testing without depending on all of cargo. Figure out which parts of the prototype can be separate packages and which parts should be part of cargo.

Get cargo team review of code developed in the prototype.

Much of the prototype's code has only ever been understood by me. Before it becomes a critical dependency of cargo, or part of cargo, it needs to be polished and documented so that other members of the cargo team would be comfortable maintaining it.

Frequently asked questions

If the existing resolver defines correct behavior then how does a rewrite help?

Unless we find critical bugs in the existing resolver, the new resolver and cargo's resolver should be 100% compatible. This means that any observable behavior of the existing resolver will need to be matched in the new resolver. The benefits of this work will come not from changes in behavior, but from a more flexible, reusable, testable, and maintainable code base. For example: the base pubgrub crate solves a simpler version of the dependency resolution problem. This allows for a more structured internal algorithm, which enables complete error messages. It is also general enough to be used not only in cargo but also in other package managers. We already have contributions from the maintainers of uv, who are using the library in production.

Externally Implementable Items

Metadata
Point of contactMara Bos
Teamslang, compiler
StatusProposed

Summary

We intend to implement Externally Implementable Items in the compiler. The plan is to do so in a way that allows us to change the way #[panic_handler] and similar attributes are handled, making these library features instead of compiler built-ins. We intend to eventually support both statics and functions, but the priority right now is functions.

Motivation

(as per the RFCs[^1][^2][^3] on this):

We have several items in the standard library that are overridable/definable by the user crate. For example, the (no_std) panic_handler, the global allocator for alloc, and so on.

Each of those is a special lang item with its own special handling. Having a general mechanism simplifies the language and makes this functionality available for other crates, and potentially for more use cases in core/alloc/std.

In general, having externally implementable items be a feature of the language, instead of magic lang items and linker hacks, gives more flexibility. It creates a standard interface for exposing points where libraries can be customized.

Additionally, making externally implementable items a language feature makes it easier to document these points of customization. They can become part of the public API of a crate.

[^1]: https://github.com/rust-lang/rfcs/pull/3645

[^2]: https://github.com/rust-lang/rfcs/pull/3632

[^3]: https://github.com/rust-lang/rfcs/pull/3635

The status quo

Today, "externally implementable items" exist in various forms that each have their own implementation. Examples are the #[panic_handler], the global allocator, the global logger of the log crate, and so on. Some of these are magical lang items, whereas others need to be set at runtime or are done (unsafely) through a global (#[no_mangle]) linker symbol.
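The "set at runtime" pattern mentioned above can be sketched with std::sync::OnceLock. This is a hand-rolled illustration of the approach crates like log take today — not any real API — and it shows the runtime indirection and "not yet set" state that a compile-time mechanism would avoid. All names are illustrative.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::OnceLock;

// A global registration point that must be initialized at runtime.
trait Logger: Sync {
    fn log(&self, msg: &str);
}

static LOGGER: OnceLock<&'static dyn Logger> = OnceLock::new();

fn set_logger(logger: &'static dyn Logger) {
    if LOGGER.set(logger).is_err() {
        panic!("logger already set");
    }
}

fn log(msg: &str) {
    // Every call pays a runtime check; before registration, logs are lost.
    if let Some(l) = LOGGER.get() {
        l.log(msg);
    }
}

// A logger that just counts calls, so the behavior is observable.
struct Counting(AtomicUsize);

impl Logger for Counting {
    fn log(&self, _msg: &str) {
        self.0.fetch_add(1, Ordering::Relaxed);
    }
}

static COUNTER: Counting = Counting(AtomicUsize::new(0));

fn main() {
    log("dropped: no logger registered yet");
    set_logger(&COUNTER);
    log("hello");
    log("world");
    assert_eq!(COUNTER.0.load(Ordering::Relaxed), 2);
}
```

With externally implementable items, the implementation would instead be supplied at compile time, so there would be no registration step and no "not yet set" state to handle.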

After RFC 3632, which proposes a new syntax for externally implementable functions, several alternative ideas were proposed in rapid succession that focus on statics, traits, and impl blocks rather than function definitions. Each of these has roughly equivalent power, but uses a different part of Rust to achieve it.

The lang team agreed that this is a problem worth solving, and accepted it as a lang experiment.[^4]

While working on implementing possible solutions, we concluded that it'd be better to make use of attributes rather than new syntax, at least for now.[^5]

Because this requires support for name resolution in attributes, this led to a big detour: refactoring how attributes are implemented and handled in rustc.[^6] The main part of that is now merged[^7], allowing us to finally continue implementing the externally implementable items experiment itself.

[^4]: https://github.com/rust-lang/rfcs/pull/3632#issuecomment-2125488373

[^5]: https://github.com/rust-lang/rust/issues/125418#issuecomment-2360542039

[^6]: https://github.com/rust-lang/rust/issues/131229

[^7]: https://github.com/rust-lang/rust/pull/131808

The next 6 months

The goal for the next six months is to finish the implementation of externally implementable items (as an experimental feature).

It is not unthinkable that we will run into more obstacles that require changes in the compiler, but we estimate that six months is enough to make the feature available for experimentation.

The "shiny future" we are working towards

In the longer term, this feature should be able to replace the magic behind the panic handler, global allocator, oom handler, and so on. At that point, an attribute like #[panic_handler] would simply be a regular (externally implementable) item exported by core, for example.

After stabilization, other crates in the ecosystem, such as the log crate, should be able to make use of this as well. E.g., they could have a #[log::global_logger] item that can be used to provide the global logger.

In the longer term, this could enable more fine-grained customization of parts of core, alloc, and std, such as panic handling. For example, right now all kinds of panics, including overflow and out-of-bounds panics, can only be handled through the #[panic_handler], which only gets a panic message containing an (English) description of the problem. Instead, one could imagine a #[panic_oob_handler] that gets the index and size as arguments, allowing one to customize the default behavior.

Design axioms

The experimental feature we implement should:

  • be able to replace how #[panic_handler] and global allocator features are implemented.
    • This means the feature should not have a higher (memory, performance, etc.) cost than how those features are currently implemented.
    • This also puts some requirements on the supported functionality, to support everything that those features currently support. (E.g., being able to provide a default implementation that can be overridden later.)
  • be ergonomic.
    • This means that mistakes should not result in confusing linker errors, but in reasonable diagnostics.
  • allow for semver-compatible upgrade paths.
    • E.g. if a crate wants to change the signature or kind of an externally implementable item, it should be possible to have some backwards-compatible path forward.
  • be as close to zero-cost as possible.
    • E.g. adding the option for more fine grained panic handlers to core should not result in a loss of performance.

Ownership and team asks

Owners: Jonathan Dönszelmann and Mara Bos

TaskOwner(s) or team(s)Notes
Discussion and moral supportcompiler, lang
Lang-team experimentTeam langAlready approved
Design experiment (syntax, etc.)Jonathan and MaraDone
Refactor attributes in rustcJonathanIn progress, refactor merged
Implement experimentJonathan and Mara
Standard reviewsTeam compiler
Blog post inviting feedbackJonathan and Mara
Update RFC with new findingsJonathan and Mara

Frequently asked questions

  • None yet.

Field Projections

Metadata
Point of contactBenno Lossin
Teamslang, libs, compiler
StatusProposed

Summary

Finalize the Field Projections RFC and implement it for use in nightly.

Motivation

Rust programs often make use of custom pointer/reference types (for example Arc<T>) instead of using plain references. In addition, container types are used to add or remove invariants on objects (for example MaybeUninit<T>). These types have significantly worse ergonomics when trying to operate on fields of the contained types compared to references.

The status quo

Field projections are a unifying solution to several problems:

  • pin projections,
  • ergonomic pointer-to-field access operations for pointer-types (*const T, &mut MaybeUninit<T>, NonNull<T>, &UnsafeCell<T>, etc.),
  • projecting custom references and container types.

Pin projections have been a constant pain point and this feature solves them elegantly while at the same time solving a much broader problem space. For example, field projections enable the ergonomic use of NonNull<T> over *mut T for accessing fields.

In the following sections, we will cover basic usage first, and then go over the most complex version, which is required for pin projections as well as for custom projections such as the RCU abstraction from the Rust for Linux project.

Ergonomic Pointer-to-Field Operations

We will use the struct from the RFC's summary as a simple example:

#![allow(unused)]
fn main() {
struct Foo {
    bar: i32,
}
}

References and raw pointers already possess pointer-to-field operations. Given a variable foo: &Foo, one can write &foo.bar to obtain a &i32 pointing to the field bar of Foo. The same can be done for foo: *const Foo: &raw const (*foo).bar (although this operation is unsafe), and likewise for their mutable versions.

However, the other pointer-like types such as NonNull<T>, &mut MaybeUninit<T> and &UnsafeCell<T> don't natively support this operation. Of course one can write:

#![allow(unused)]
fn main() {
unsafe fn project(foo: NonNull<Foo>) -> NonNull<i32> {
    let foo = foo.as_ptr();
    unsafe { NonNull::new_unchecked(&raw mut (*foo).bar) }
}
}

But this is very annoying in practice, since the code depends on the name of the field and thus cannot be written as a single generic function. For this reason, many people use raw pointers even though NonNull<T> would be a better fit. The same can be said about &mut MaybeUninit<T>.

Field projection adds a new operator that allows types to provide operations generic over the fields of structs. For example, one can use the field projections on MaybeUninit<T> to safely initialize Foo:

#![allow(unused)]
fn main() {
impl Foo {
    fn initialize(this: &mut MaybeUninit<Self>) {
        let bar: &mut MaybeUninit<i32> = this->bar;
        bar.write(42);
    }
}
}

There are a lot of types that can benefit from this operation:

  • NonNull<T>
  • *const T, *mut T
  • &T, &mut T
  • &Cell<T>, &UnsafeCell<T>
  • &mut MaybeUninit<T>, *mut MaybeUninit<T>
  • cell::Ref<'_, T>, cell::RefMut<'_, T>
  • MappedMutexGuard<T>, MappedRwLockReadGuard<T> and MappedRwLockWriteGuard<T>

Pin Projections

The examples from the previous section are very simple, since they all follow the pattern C<T> -> C<F> where C is the respective generic container type and F is a field of T.

In order to handle Pin<&mut T>, the return type of the field projection operator needs to depend on the field itself. This is needed in order to be able to project structurally pinned fields from Pin<&mut T> to Pin<&mut F1> while simultaneously projecting not structurally pinned fields from Pin<&mut T> to &mut F2.

Fields marked with #[pin] are structurally pinned fields. For example, consider the following future:

#![allow(unused)]
fn main() {
struct FairRaceFuture<F1, F2> {
    #[pin]
    fut1: F1,
    #[pin]
    fut2: F2,
    fair: bool,
}
}

One can utilize the following projections when given fut: Pin<&mut FairRaceFuture<F1, F2>>:

  • fut->fut1: Pin<&mut F1>
  • fut->fut2: Pin<&mut F2>
  • fut->fair: &mut bool

Using these, one can concisely implement Future for FairRaceFuture:

#![allow(unused)]
fn main() {
impl<F1: Future, F2: Future<Output = F1::Output>> Future for FairRaceFuture<F1, F2> {
    type Output = F1::Output;

    fn poll(self: Pin<&mut Self>, ctx: &mut Context) -> Poll<Self::Output> {
        let fair: &mut bool = self->fair;
        *fair = !*fair;
        if *fair {
            // self->fut1: Pin<&mut F1>
            match self->fut1.poll(ctx) {
                Poll::Pending => self->fut2.poll(ctx),
                Poll::Ready(res) => Poll::Ready(res),
            }
        } else {
            // self->fut2: Pin<&mut F2>
            match self->fut2.poll(ctx) {
                Poll::Pending => self->fut1.poll(ctx),
                Poll::Ready(res) => Poll::Ready(res),
            }
        }
    }
}
}

Without field projection, one would either have to use unsafe or reach for a third party library like pin-project or pin-project-lite and then use the provided project function.

The next 6 months

Finish and accept the Field Projections RFC

Solve big design questions using lang design meetings:

  • figure out the best design for field traits,
  • determine if unsafe field projections should exist,
  • settle on a design for the Project trait,
  • add support for simultaneous projections.

Bikeshed/solve smaller issues:

  • projection operator syntax,
  • should naming field types have a native syntax?
  • naming of the different types and traits,
  • discuss which stdlib types should have field projection.

Implement the RFC and Experiment

  • implement all of the various details from the RFC
  • experiment with field projections in the wild
  • iterate over the design using this experimentation

The "shiny future" we are working towards

The ultimate goal is to have ergonomic field projections available in stable Rust. Using them should feel similar to using field access today.

Ownership and team asks

Owner: Benno Lossin

TaskOwner(s) or team(s)Notes
Design meetingTeam lang
RFC decisionTeam lang
ImplementationHelp wanted, Benno Lossin
Standard reviewsTeam compiler

Definitions

Definitions for terms used above:

  • Discussion and moral support is the lowest level offering, basically committing the team to nothing but good vibes and general support for this endeavor.
  • Author RFC and Implementation means actually writing the code, document, whatever.
  • Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
  • RFC decisions means reviewing an RFC and deciding whether to accept.
  • Org decisions means reaching a decision on an organizational or policy matter.
  • Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
  • Stabilizations means reviewing a stabilization and report and deciding whether to stabilize.
  • Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
  • Prioritized nominations refers to prioritized lang-team response to nominated issues, with the expectation that there will be some response from the next weekly triage meeting.
  • Dedicated review means identifying an individual (or group of individuals) who will review the changes, as they're expected to require significant context.
  • Other kinds of decisions:
    • Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
    • Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
    • Library API Change Proposal (ACP) describes a change to the standard library.

Frequently asked questions


Finish the libtest json output experiment

Metadata
Point of contactEd Page
Teamstesting-devex, libs-api
StatusProposed

Summary

Finish the libtest json experiment.

Motivation

libtest is the test harness used by default for tests in cargo projects. It provides the CLI that cargo calls into, and it enumerates and runs the tests discovered in that binary. It ships with rustup and has the same compatibility guarantees as the standard library.

Before Rust 1.70, anyone could pass --format json despite it being unstable. When this was fixed to require nightly, it became clear how much people had come to rely on programmatic output.
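For reference, the unstable JSON output looks roughly like the following (abridged; the exact event schema is part of what this experiment is meant to settle, so treat the fields as illustrative of the current nightly behavior rather than a stable contract):

```
$ cargo test -- -Z unstable-options --format json
{ "type": "suite", "event": "started", "test_count": 1 }
{ "type": "test", "event": "started", "name": "tests::it_works" }
{ "type": "test", "name": "tests::it_works", "event": "ok" }
{ "type": "suite", "event": "ok", "passed": 1, "failed": 0, "ignored": 0, "measured": 0, "filtered_out": 0 }
```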

Cargo could also benefit from programmatic test output to improve user interactions, including

Most of that involves shifting responsibilities from the test harness to the test runner which has the side effects of:

  • Allowing more powerful experiments with custom test runners (e.g. cargo nextest) as they'll have more information to operate on
  • Lowering the barrier for custom test harnesses (like libtest-mimic) as UI responsibilities are shifted to the test runner (cargo test)

The status quo

The next 6 months

  1. Experiment with potential test harness features
  2. Experiment with test reporting moving to Cargo
  3. Putting forward a proposal for approval

The "shiny future" we are working towards

  • Reporting shifts from test harnesses to Cargo
  • We run test harnesses in parallel

Design axioms

  • Low complexity for third-party test harnesses, so it's feasible to implement them
  • Low compile-time overhead for third-party test harnesses, so users are willing to take the compile-time hit to use them
  • Format can meet expected future needs
    • Expected is determined by looking at what other test harnesses can do (e.g. fixtures, parameterized tests)
  • Format can evolve with unexpected needs
  • Cargo performs all reporting for tests and benches

Ownership and team asks


TaskOwner(s) or team(s)Notes
Discussion and moral supportTeam testing-devex
Prototype harnessEd Page
Prototype Cargo reporting supportEd Page
Write stabilization reportEd Page


Frequently asked questions

Implement Open API Namespace Support

Metadata
Point of contactEd Page
Teamscargo, compiler, crates-io
StatusProposed for mentorship

Summary

Navigate the cross-team design work to get RFC 3243 implemented.

Motivation

RFC 3243 proposed opening up namespaces in Rust to extension, managed by the package name, with crates-io putting access controls on who can publish to a crate's API namespace. This spans multiple teams and needs a lot of coordination to balance the needs of each team, as shown on the rustc tracking issue.

The status quo

Cargo support is partially implemented. There is no compiler support yet. There is a crates-io prototype for a previous iteration of RFC 3243, but that code base has likely diverged a lot since then.

The next 6 months

Implement at least Cargo and compiler support for this to be experimented with and allow crates-io work.

The "shiny future" we are working towards

Design axioms

Ownership and team asks

TaskOwner(s) or team(s)Notes
Discussion and moral supportTeam cargo, compiler
Compiler implementationHelp wanted
Work through lingering cargo issuesGoal point of contact, Ed Page


Frequently asked questions

Implement restrictions, prepare for stabilization

Metadata
Point of contactJacob Pratt
Teamslang
StatusProposed

Summary

RFC 3323 will be implemented and feature-complete, with all syntax questions resolved. The features will be prepared for stabilization.

Motivation

The RFC for restrictions was accepted over two years ago, but the pull request implementing it has been stalled for a long time for a variety of reasons. Implementing the feature will permit testing, feedback, and stabilization.

The status quo

Sealed traits are a common pattern in Rust, but are not currently supported by the language itself. Instead, they are implemented using a combination of visibility modifiers and nested modules. Fields with restricted mutability are currently only possible with getters and setters, setting aside (ab)using Deref implementations.
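The status-quo sealed trait pattern looks like the following sketch. The trait and type names (`Stream`, `ByteStream`) are illustrative, not from any particular crate:

```rust
// Status-quo sealed trait pattern: a private module limits who can name
// (and therefore implement) `Sealed`, so downstream crates cannot
// implement `Stream` even though `Stream` itself is public.
mod private {
    pub trait Sealed {}
}

pub trait Stream: private::Sealed {
    fn next_byte(&mut self) -> Option<u8>;
}

pub struct ByteStream {
    data: Vec<u8>,
    pos: usize,
}

impl private::Sealed for ByteStream {}

impl Stream for ByteStream {
    fn next_byte(&mut self) -> Option<u8> {
        let b = self.data.get(self.pos).copied();
        self.pos += 1;
        b
    }
}

fn main() {
    let mut s = ByteStream { data: vec![1, 2], pos: 0 };
    assert_eq!(s.next_byte(), Some(1));
    assert_eq!(s.next_byte(), Some(2));
    assert_eq!(s.next_byte(), None);
    println!("sealed trait pattern works");
}
```

The restrictions feature would replace this module/visibility workaround with a first-class language construct.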

More details are available in the RFC.

The next 6 months

The accepted restrictions RFC represents the end goal of this project goal. All unresolved questions should be discussed and resolved, with the two features (impl_restrictions and mut_restrictions) being ready for stabilization. Future possibilities from the RFC will likely be considered at a high level, but they are not the focus of this project goal.

Ownership and team asks

Owner: Jacob Pratt

Task | Owner(s) or team(s) | Notes
Discussion and moral support | Team lang |
Implementation | Jacob Pratt | old PR is plausibly workable
Standard reviews | Team compiler |
Prioritized nominations | Team lang | for unresolved questions, including syntax
Author stabilization report | Jacob Pratt |
Stabilization decision | Team lang |
Inside Rust blog post inviting feedback | Jacob Pratt | feedback on syntax if no team consensus

Definitions

Definitions for terms used above:

  • Discussion and moral support is the lowest level offering, basically committing the team to nothing but good vibes and general support for this endeavor.
  • Author RFC and Implementation means actually writing the code, document, whatever.
  • Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
  • RFC decisions means reviewing an RFC and deciding whether to accept.
  • Org decisions means reaching a decision on an organizational or policy matter.
  • Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
  • Stabilizations means reviewing a stabilization and report and deciding whether to stabilize.
  • Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
  • Prioritized nominations refers to prioritized lang-team response to nominated issues, with the expectation that there will be some response from the next weekly triage meeting.
  • Dedicated review means identifying an individual (or group of individuals) who will review the changes, as they're expected to require significant context.
  • Other kinds of decisions:
    • Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
    • Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
    • Library API Change Proposal (ACP) describes a change to the standard library.

Frequently asked questions

Isn't the syntax already decided?

While the RFC was accepted without this being an unresolved question (aside from a simpler syntax for common cases), I believe that an attribute-based syntax such as #[restrict(impl(crate))] may be preferable to the syntax in the RFC, though this is not certain. An attribute-based syntax is backwards-compatible with all existing macros and avoids nontrivial additions to the width of code.

Improve state machine codegen

Metadata
Point of contact: Folkert de Vries
Teams: T-compiler
Status: Proposed

Summary

We want to improve rustc codegen, based on this initiative by the Trifecta Tech Foundation. The work focuses on improving state machine code generation, and on finding (and hopefully fixing) cases where clang produces better code than rustc for roughly equivalent input.

Motivation

Matching C performance is crucial for Rust adoption in performance-sensitive domains. Rust is doing well overall, but not yet well enough.

In compression, video decoding and other high-performance areas, nobody will use Rust if it is even a couple percent slower: latency, power (i.e. battery) consumption and other factors are just more important than whatever advantages Rust can bring. In particular, we've observed that C code translated to Rust, whether manually or mechanically, often performs a couple percent worse than the original C.

Given that clang and rustc both use LLVM for code generation, there is no fundamental reason that Rust should be slower.

The status quo

Our target audience is users of Rust in performance-sensitive domains, where rustc codegen hinders adoption. Concretely, we have the most experience with, and knowledge of, the bottlenecks in the following projects:

In the compression libraries, we spotted a specific pattern (in Rust terms, a loop containing a match) where Rust is not able to generate good code today. We wrote RFC 3720 to tackle this problem.
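The problematic shape can be sketched as below: a loop whose body dispatches on a state enum. The state names are illustrative stand-ins, not code from zlib-rs:

```rust
// Minimal "loop containing a match" state machine. Today, rustc/LLVM often
// compiles the back-edge as "jump to the top of the loop, then re-dispatch
// on `state`" rather than jumping directly to the next state's block.
#[derive(Debug, PartialEq)]
enum State {
    ReadHeader,
    ReadBody(usize),
    Done,
}

fn run(input: &[u8]) -> usize {
    let mut state = State::ReadHeader;
    let mut consumed = 0;
    loop {
        match state {
            State::ReadHeader => {
                // First byte encodes the (illustrative) body length.
                let len = input.get(0).copied().unwrap_or(0) as usize;
                consumed += 1;
                state = State::ReadBody(len);
            }
            State::ReadBody(len) => {
                consumed += len.min(input.len().saturating_sub(1));
                state = State::Done;
            }
            State::Done => return consumed,
        }
    }
}

fn main() {
    assert_eq!(run(&[3, 10, 20, 30]), 4); // 1 header byte + 3 body bytes
    println!("consumed {} bytes", run(&[3, 10, 20, 30]));
}
```

RFC 3720 explores ways to let each match arm jump directly to the next state's code, as a hand-written C state machine with gotos would.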

In the case of rav1d, performance is several percent worse than its C equivalent dav1d. The rav1d project used c2rust to translate the dav1d C source to Rust. Hence the two code bases are essentially equivalent, and we'd expect nearly identical performance.

The rav1d developers were unable to track down the reason that rav1d performs worse than dav1d: their impression (that we have confirmed with various rustc developers) is that rustc+llvm is just slightly worse at generating code than clang+llvm, because llvm overfits to what clang gives it.

The next 6 months

Improve state machine codegen

The problem, and a range of possible solutions, is described in RFC 3720.

  • recognize the problematic pattern in zlib-rs in HIR, based on a fragile heuristic
  • ensure it is eventually turned into a goto to the actual target in MIR
  • evaluate how effective that is for other projects (e.g. rustc itself)
  • depending on how RFC 3720 evolves, implement the specific proposal (syntax, lints, error messages)

Finding performance bottlenecks

We want to build a tool that uses creduce and c2rust to find small examples where clang+LLVM produces meaningfully better code than rustc+LLVM.

The output will be either issues with small Rust snippets that have suboptimal codegen (compared to clang) or PRs fixing these problems.

The "shiny future" we are working towards

The shiny future is to improve Rust codegen to encourage wider adoption of Rust in performance-sensitive domains.

Ownership and team asks

Owner: Folkert de Vries


Task | Owner(s) or team(s) | Notes
Discussion and moral support | Team compiler |
Lang-team experiment | Team lang |
Refine RFC 3720 | Folkert de Vries |
Implementation | Folkert de Vries, bjorn3 |
Standard reviews | Team compiler |

Definitions

Definitions for terms used above:

  • Discussion and moral support is the lowest level offering, basically committing the team to nothing but good vibes and general support for this endeavor.
  • Author RFC and Implementation means actually writing the code, document, whatever.
  • Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
  • RFC decisions means reviewing an RFC and deciding whether to accept.
  • Org decisions means reaching a decision on an organizational or policy matter.
  • Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
  • Stabilizations means reviewing a stabilization and report and deciding whether to stabilize.
  • Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
  • Prioritized nominations refers to prioritized lang-team response to nominated issues, with the expectation that there will be some response from the next weekly triage meeting.
  • Dedicated review means identifying an individual (or group of individuals) who will review the changes, as they're expected to require significant context.
  • Other kinds of decisions:
    • Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
    • Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
    • Library API Change Proposal (ACP) describes a change to the standard library.

Frequently asked questions

None yet

Instrument the Rust standard library with safety contracts

Metadata
Point of contact: Celina G. Val
Teams: libs
Status: Proposed

Summary

Finish the implementation of the contract attributes proposed in the compiler MCP-759, and port safety contracts from the verify-rust-std fork to the Rust standard library.

Motivation

Safety contracts serve as formal specifications that define the preconditions, postconditions, and invariants that must be maintained for functions and data structures to operate correctly and safely. Currently, the Rust standard library already contains safety pre- and postconditions specified in the documentation of unsafe functions.

Contract attributes will enable developers to define safety requirements and behavioral specifications through programmatic contracts, which can be automatically converted into runtime checks when needed. These contracts can also express conditions that are verifiable through static analysis tools, and they provide a foundation for formal verification of the standard library implementation and other Rust code.

The status quo

Safety conditions are already well documented, and the Rust standard library is also instrumented using check_library_ub and check_language_ub in many different places for conditions that are checkable at runtime.

The compiler team has also accepted Felix Klock's proposal MCP-759 to add experimental contracts attributes, and the initial implementation is currently under review.

Finally, we have annotated and verified around 200 functions in the verify-rust-std fork with safety contracts using contract attributes similar to the ones proposed in MCP-759.

The next 6 months

First, we will keep working with the compiler team to finish the implementation of contract attributes. We'll add support for the #[contracts::requires] and #[contracts::ensures] attributes as described in MCP-759, as well as type invariant specification.

This will allow users to convert contracts into runtime checks, and it will provide a compiler interface for external tools, such as verification tools, to retrieve the annotated contracts.

Once that has been merged into the compiler, we will work with the library team to annotate functions of the standard library with their safety contracts.
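To make the runtime-check mode concrete, here is a hand-written sketch of what a requires/ensures pair could expand to when runtime checks are enabled. The expansion shown is an assumption for illustration, written with plain assertions; it is not MCP-759's actual codegen, and `checked_div` is a hypothetical function:

```rust
// What a contract like
//   #[contracts::requires(divisor != 0)]
//   #[contracts::ensures(|ret| *ret <= dividend)]
// could expand to with runtime checks enabled. This expansion is a
// hand-written assumption, not the actual MCP-759 implementation.
fn checked_div(dividend: u32, divisor: u32) -> u32 {
    // precondition (requires)
    assert!(divisor != 0, "precondition violated: divisor != 0");
    let ret = dividend / divisor;
    // postcondition (ensures)
    assert!(ret <= dividend, "postcondition violated: ret <= dividend");
    ret
}

fn main() {
    assert_eq!(checked_div(10, 3), 3);
    println!("contracts hold");
}
```

The same annotations, left unexpanded, are what verification tools would retrieve through the compiler interface and check statically.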

The "shiny future" we are working towards

All unsafe functions in Rust should have their safety conditions specified using contracts, and verified that those conditions are enough to guarantee absence of undefined behavior.

Rust users should be able to check that their code does not violate the safety contracts of unsafe functions, which would rule out the possibility that their applications could have a safety bug.

Design axioms

  • No runtime penalty: Instrumentation must not affect the standard library runtime behavior, including performance, unless users opt-in for contract runtime checks.
  • Formal Verification: Enable the verification of the standard library implementation.
  • Contract as code: Keeping the contract language and specification as close as possible to Rust syntax and semantics will lower the barrier for users to understand and be able to write their own contracts.

Ownership and team asks

Owner: Celina G. Val and Michael Tautschnig

Task | Owner(s) or team(s) | Notes
Discussion and moral support | Team libs |

Experimental Contract attributes

Task | Owner(s) or team(s) | Notes
Author MCP | Complete | Done already by Felix Klock
Implementation | Celina G. Val | In progress.
Standard reviews | Team compiler |
Design meeting | Team compiler |

Standard Library Contracts

Task | Owner(s) or team(s) | Notes
Standard Library Contracts | Celina G. Val, Michael Tautschnig |
Writing new contracts | Help wanted |
Standard reviews | Team libs |

Definitions

Definitions for terms used above:

  • Discussion and moral support is the lowest level offering, basically committing the team to nothing but good vibes and general support for this endeavor.
  • Author MCP and Implementation means actually writing the code, document, whatever.
  • Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
  • Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.

Frequently asked questions

Integration of the FLS into the Rust Project

Metadata
Point of contact: Joel Marcey
Teams: spec
Status: Proposed

Summary

Ferrous Systems will be transferring the Ferrocene Language Specification (FLS) to the Rust Project, under the ownership of the Specification Team, or t-spec. In the first half of 2025, the Specification team will integrate the FLS, under an appropriate name, into both its design and development processes, and the project as a whole.

Motivation

The Specification Team has been working for the past year on preparing a specification of Rust. Over this time, the Team has made, and begun executing, several distinct plans to achieve this: creating a new document; modifying the reference; and most recently, agreeing with Ferrous Systems to take ownership of the FLS to support its specification delivery efforts. The current plan is to pursue the latter two in parallel, and to do that effectively the Ferrocene Language Specification needs to be adopted and integrated into the project's processes and tooling.

The status quo

RFC 3355 describes the goals of the specification as "[Serving] the needs of Rust users, such as authors of unsafe Rust code, those working on safety critical Rust software, language designers, maintainers of Rust tooling, and so on," and "Incorporating it in their process for language evolution. For example, the language team could require a new language feature to be included in the specification as a requirement for stabilization."

Presently, the working draft Specification of Rust consists of a modified version of the reference, achieved by adding paragraph identifiers (almost finished) and slowly modifying the content to more normatively describe the language. This may help achieve one of the presented goals for the specification, namely incorporation into the language evolution process. However, Ferrous Systems has, over the past two years, developed the Ferrocene Language Specification, which has seen adoption in the safety-critical space, and a sharp change in the specification would create substantial financial burdens for those early adopters.

Based on more recent discussions and agreements with Ferrous Systems, the Specification Team will be incorporating the Ferrocene Language Specification as-is into its processes. This will leave us with two documents to maintain, with decisions to make on how they will fit into the Specification delivery process overall.

The next 6 months

In order to properly integrate the Ferrocene Language Specification, presumably under a different name, the specification team will need to adopt processes surrounding modification, editing, review, and release of the document.

The "shiny future" we are working towards

The goal is designed to move forward the Rust Specification, in a way that is satisfying to both internal and external consumers, and that makes progress on the overall goals set out in RFC 3355. It is also designed to put us in a position for a 2025h2 goal of producing a first useful version of the specification that satisfies those goals, as well as any ancillary work that needs to be done along side the specification itself.

Design axioms

The following design axioms apply:

  • Making Decisions Effectively, but Efficiently: When the goal asks the Team to make a decision, the Team should be prepared in advance with the necessary background, and come to consensus based on as much information as possible, while acting with efficiency and alacrity and not spending more time than is necessary on a decision. In particular, the team should not delay discussing a decision more than is necessary.
    • Elaborating on the last part, decisions the team is well aware of needing to make should not be deferred once all of the requisite information is available, unless a higher-priority decision needs to supplant them.
  • Iterative changes are better: When it comes to making modifications, particularly to the FLS, slow and gradual ones should be preferred to sharp, major ones.

Ownership and team asks

Owner: As the hired specification editor, Joel Marcey will own the overall goal. Connor Horman will also aid in bringing the goal to completion in their role as a Contractor.

Some subgoals list an expected due/completion date. If one is omitted, completion by the end of 2025h1 is implied.

Task | Owner(s) or team(s) | Notes
Miscellaneous | Team spec | Take ownership of the FLS (prior to, or shortly into, January 2025).

Integrate FLS into T-spec processes

Task | Owner(s) or team(s) | Notes
Review Existing Editorial Standards in the FLS | | End of January 2025
Review Tooling used by the FLS | Connor Horman | End of January 2025
Author Proposal for specifics of FLS integration | Connor Horman | Mid-to-late February 2025
RFC decision | Team spec | End of March 2025
Adjust Tooling, as needed | Connor Horman | April 2025
Begin implementing the integration Proposal | Connor Horman |

Integrate FLS into release process

Task | Owner(s) or team(s) | Notes
Discussion and moral support | Team release | February 2025
Link tooling used with FLS to the release process | | April 2025
Review of FLS prior to release | | May 2025
Get FLS into a Rust Release | | Rust 1.89
Standard reviews | Team release |

Frequently asked questions

Making compiletest more maintainable: reworking directive handling

Metadata
Point of contact: Jieyou Xu
Teams: bootstrap, compiler
Status: Proposed

Summary

Rework [compiletest]'s directive handling to make it more maintainable, have better UX for compiler contributors, and fix some long-standing issues.

Motivation

rustc relies on the test infrastructure implemented by the test harness [compiletest] (supported by bootstrap) to run the test suites under tests/ (e.g. ui tests, mir-opt tests, run-make tests, etc.). However, [compiletest] is currently very undertested and undermaintained, which is not ideal because we rely on the test suites to check rustc's behavior. The current implementation in [compiletest] is also such that it's very hard and unpleasant to make changes (e.g. adding new directives) to provide up-to-date test infrastructure support for the needs of compiler (and rustdoc) contributors. The UX is not great either because of poor error handling and error reporting.

The status quo

The current status quo is that [compiletest] imposes significant friction for compiler (and rustdoc) contributors who want to run tests and diagnose test failures. [compiletest] error messages are opaque, terse and hard to read. We had to include a separate allow-list of known directives to detect unknown directives. We still sometimes let malformed directives through and silently do nothing. Argument splitting is naive and inconsistent. The implementation is very convoluted. Also there's still insufficient documentation.

See the tracking issue for the various directive-handling-related bugs.

The next 6 months

The key changes I want to achieve:

  1. Directive handling is testable (at all) and, beyond that, has strong test coverage.
  2. Directives have stricter syntax to reduce ambiguity and enable invalid directive detection, or at least make invalid directive detection easier.
  3. Directives are well-documented. Move directive documentation close to the directives themselves and make it possible to generate it alongside tool docs for compiletest, so it's less likely to become outdated and so documentation coverage can be enforced.
    • Also, make sure that we have robust self-documentation so it's not only one or two contributors who understand how things work inside compiletest...
  4. Generally improve directive handling robustness. Examples: fixing argument splitting in compile-flags, fixing paths related to aux-build, etc.
  5. Test writers and reviewers receive better diagnostics, for things like why a directive is not accepted in a given test suite or why something in compiletest failed.
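The stricter, testable parsing aimed at in points 1, 2, and 4 can be sketched as follows. The known-directive list, the error enum, and the exact splitting rules are simplified assumptions, not compiletest's real implementation; `//@` is the directive prefix used in test files:

```rust
// Sketch of strict directive parsing with explicit, testable errors.
// The KNOWN list and error shape are illustrative assumptions.
#[derive(Debug, PartialEq)]
enum DirectiveError {
    Unknown(String),
    MissingValue(String),
}

fn parse_directive(line: &str) -> Result<(String, Vec<String>), DirectiveError> {
    const KNOWN: &[&str] = &["compile-flags", "aux-build", "run-pass"];
    let rest = line.trim_start_matches("//@").trim();
    let (name, value) = match rest.split_once(':') {
        Some((n, v)) => (n.trim(), Some(v.trim())),
        None => (rest, None),
    };
    if !KNOWN.contains(&name) {
        // An unknown directive is a hard error, not silently ignored.
        return Err(DirectiveError::Unknown(name.to_string()));
    }
    match value {
        // Split arguments on whitespace consistently, rather than ad hoc.
        Some(v) if !v.is_empty() => {
            Ok((name.to_string(), v.split_whitespace().map(String::from).collect()))
        }
        Some(_) => Err(DirectiveError::MissingValue(name.to_string())),
        None => Ok((name.to_string(), Vec::new())),
    }
}

fn main() {
    let ok = parse_directive("//@ compile-flags: -O -Cdebuginfo=2").unwrap();
    assert_eq!(ok.0, "compile-flags");
    assert_eq!(ok.1, vec!["-O", "-Cdebuginfo=2"]);
    assert_eq!(
        parse_directive("//@ compile-flgs: -O"),
        Err(DirectiveError::Unknown("compile-flgs".to_string()))
    );
    println!("directive parsing ok");
}
```

Because parsing is a pure function from a line to a `Result`, each directive's accepted syntax and its error cases can be covered by ordinary unit tests.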

The "shiny future" we are working towards

My long-term goal for [compiletest] is to make it significantly easier to maintain. Concretely, this means significantly better test coverage, easier extension, and better documentation. Hopefully, by being more maintainable, we can attract more active maintainers from both the bootstrap and compiler teams and make the code base significantly more pleasant to work on.

For directive handling specifically, it should mean that:

  • It's relatively straightforward and low friction to implement new directives, including test coverage and documentation. It should be easy to do the right thing.
  • [compiletest] should produce error messages that are easy to read and understand, possibly even making suggestions.
  • Directives should be documented (and enforced to be documented) via rustdoc which are made available on nightly-rustc docs so we can back-link from dev-guide and not have to maintain two sets of docs that are mutually inconsistent.

Ownership and team asks

Owner: [Jieyou Xu]

Note that [compiletest] is (in theory) currently co-maintained by both t-bootstrap and t-compiler, but AFAIK is (in practice) currently not really actively maintained by anyone else. The following team asks are probably mostly compiler for feedback on their use cases (as a test infra consumer) and bootstrap for implementation review.

Task | Owner(s) or team(s) | Notes
Discussion and moral support | Team bootstrap, compiler, rustdoc | including consultations for desired test behaviors and testing infra consumers
Experimental prototype implementation¹ | Jieyou Xu | to see how approaches look and gain experience/feedback
[compiletest] changes w/ experience from prototype | Jieyou Xu |
Standard reviews | Team bootstrap, compiler | Probably mostly bootstrap or whoever is more interested in reviewing [compiletest] changes
Inside Rust blog post for project outcome | Jieyou Xu |
¹ I want to start with an out-of-tree experimental prototype to see how the pieces fit together, making it easier to rapidly iterate and receive feedback without having to modify the "live" [compiletest], which does not have sufficient test coverage.

Definitions

Definitions for terms used above:

  • Discussion and moral support is the lowest level offering, basically committing the team to nothing but good vibes and general support for this endeavor.
  • Author RFC and Implementation means actually writing the code, document, whatever.
  • Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
  • RFC decisions means reviewing an RFC and deciding whether to accept.
  • Org decisions means reaching a decision on an organizational or policy matter.
  • Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
  • Stabilizations means reviewing a stabilization and report and deciding whether to stabilize.
  • Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
  • Prioritized nominations refers to prioritized lang-team response to nominated issues, with the expectation that there will be some response from the next weekly triage meeting.
  • Dedicated review means identifying an individual (or group of individuals) who will review the changes, as they're expected to require significant context.
  • Other kinds of decisions:
    • Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
    • Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
    • Library API Change Proposal (ACP) describes a change to the standard library.

Frequently asked questions

TODO: pending during project discussions

[Jieyou Xu]: https://github.com/jieyouxu
[compiletest]: https://github.com/rust-lang/rust/tree/master/src/tools/compiletest

Metrics Initiative

Metadata
Point of contact: Jane Lusby
Teams: Compiler
Status: Proposed

Summary

Build out support for metrics within the Rust compiler, starting with a proof-of-concept dashboard for viewing unstable feature usage statistics over time.

Motivation

We're envisioning three use cases for the Metrics Initiative:

  1. Supporting feature development, e.g. answering specific questions such as when the old and new trait solvers diverge, showing unstable feature usage trends, or helping identify and resolve bugs.
  2. Guiding improvements to User Experience, e.g. knowing which compiler errors are causing the most confusion or are hit the most frequently, focusing on improving those first, and verifying that the improvements help.
  3. Improving perf feedback loops and insight, e.g. helping identify pathological edge cases, similar to work Nicholas Nethercote has done manually in the past

We're focusing initially on the first use case since we see that as the most likely to have a significant impact.

The status quo

Currently the Rust compiler has the capability to store to disk a backtrace and additional runtime information whenever an ICE occurs. This is only enabled on nightly due to concerns around where this file is stored, and how the output includes the fully qualified path of the file, which normally includes the username for the user that executed rustc.

Additionally, our users can use Cargo's --timings flag and rustc's -Z self-profile to generate reports on where compile times are going, but these are explicit opt-in actions that produce output meant for direct human consumption, not for tool analysis.

For the perf dashboard, internal compiler aggregates can be collected, but they lack the granularity needed for complex analysis. These are currently only used to detect changes in behavior between two rustc builds.

Altogether, these tools give us the ability to gather information about the inner workings of the compiler on a case-by-case basis, but any attempt to piece together trends within this information is often a manual process, if it happens at all. This often leaves teams to guess at how people are using the language, or to rely on proxies for that information.

The next 6 months

  • Initial prototypes and proof of concept impls
    • initial metrics dumping in compiler e.g. unstable feature usage info
    • backend to store metrics
    • enable metrics dumping on existing project infra for open source crates (e.g. docs.rs or crater) and send metrics to backend
    • proof of concept dashboard for viewing metrics

The "shiny future" we are working towards

We'd like to get to the point where lang and libs can pull up a simple dashboard to see exactly what features exist, what their status is, and what their usage over time looks like. Beyond that, we want to get to the point where other contributors and teams can leverage the metrics to answer their own questions while we continue to build up the supporting infrastructure. The metrics should make it possible to track how often certain ICEs are encountered, whether certain code paths are hit, or any other question about real-world usage of the compiler that our contributors and maintainers may have.

Design axioms

  • Trust: Do not violate the trust of our users
    • NO TELEMETRY, NO NETWORK CONNECTIONS
    • Emit metrics locally
    • User information should never leave their machine in an automated manner; sharing their metrics should always be opt-in, clear, and manual.
    • All of this information would only be stored on disk, with some minimal retention policy to avoid wasteful use of users’ hard drives
  • Feedback: improving feedback loops to assist with iterative improvement within the project
    • answer questions from real production environments in a privacy-preserving way
    • improve legibility of rare or intermittent issues
    • earlier warnings for ICEs and other major issues on nightly, improving the likelihood that we'd catch them before they hit stable.
      • https://blog.rust-lang.org/2021/05/10/Rust-1.52.1.html
  • Performance impact
    • leave no trace (minimize performance impact, particularly for default-enabled metrics)
  • Extensible:
    • it should be easy to add new metrics as needed
    • Only add metrics as a way to answer a specific question in mind, with an explicitly documented rationale
    • machine readable, it should be easy to leverage metrics for analysis with other tools
  • User experience:
    • improving user experience of reporting issues to the project
    • improving the user experience of using the compiler, measuring the impact of changes to user experience

Ownership and team asks

Task | Owner(s) or team(s) | Notes
Discussion and moral support | compiler, infra |
Implementation | Jane Lusby |
backend for storing metrics | Esteban Kuber |
integration with docs.rs or crates.io to gather metrics from open source Rust projects | Jane Lusby |
proof of concept dashboard visualizing unstable feature usage data | Help Wanted |
Standard reviews | compiler |

Frequently asked questions

Model coherence in a-mir-formality

Metadata
Point of contact: Niko Matsakis
Teams: Types
Status: Proposed

Summary

We will model coherence (including negative impls) in a-mir-formality and compare its behavior against rustc. This will require extending a-mir-formality with the ability to run Rust tests.

Motivation

Over the next six months we will test a-mir-formality's model of coherence against the new trait solver. To do this, we will extend it with the ability to run (small, simple) Rust files as tests and then build a tool to compare its behavior against rustc. This works towards our ultimate goal of making a-mir-formality an "executable spec and playground" for the Rust type system. There are a number of things that need to happen for that goal to be truly realized in practice, but the biggest of them is being able to compare behavior against rustc.

The status quo

a-mir-formality has a sketch of a model of the Rust type system but tests must be written in a "Rust-like" dialect. This dialect is great for precisely controlling the input but makes it impossible to compare mir-formality's behavior to rustc in any systematic way.

The next 6 months

Our goal for the next 6 months is to use a-mir-formality to document and explore Rust's coherence system. Towards this end we will do two major initiatives:

  • Preparing an explainer that documents a-mir-formality's rules and reading it with the types team;
    • this will also involve changing and improving those rules
  • Extending a-mir-formality with the ability to run (small, simple) unmodified Rust tests.

We will use the ability to run tests to compare the behavior of a-mir-formality against rustc, looking for discrepancies between the model and rustc's actual behavior.

The "shiny future" we are working towards

We are working towards a future where

  • a-mir-formality is regularly tested against a subset of the compiler's test suite;
  • new features that impact the type system are modeled in a-mir-formality prior to stabilization (and perhaps prior to RFC);
  • a-mir-formality is widely maintained by all members of the types team.

Design axioms

The primary "axiom" in choosing this goal is that there's nothing like the magic of running code -- in other words, the best way to make the shiny future come true is going to be making it easy to write tests and play with a-mir-formality. Right now the barrier to entry is still too high.

Ownership and team asks

Task | Owner(s) or team(s) | Notes
Discussion and moral support | Team types

Modeling and documenting coherence rules

Task | Owner(s) or team(s) | Notes
Author explainer for coherence model | Niko Matsakis

Running Rust tests in a-mir-formality

Task | Owner(s) or team(s) | Notes
Mentorship | Niko Matsakis
Implementation | Help wanted

Stretch goal: modeling Rust borrow checker

As a stretch goal, we can extend a-mir-formality to model the bodies of functions and try to model the Rust borrow checker.

Task | Owner(s) or team(s) | Notes
Mentorship | Niko Matsakis
Implementation | Help wanted

Definitions

Definitions for terms used above:

  • Discussion and moral support is the lowest level offering, basically committing the team to nothing but good vibes and general support for this endeavor.
  • Author RFC and Implementation means actually writing the code, document, whatever.
  • Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
  • RFC decisions means reviewing an RFC and deciding whether to accept.
  • Org decisions means reaching a decision on an organizational or policy matter.
  • Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
  • Stabilizations means reviewing a stabilization and report and deciding whether to stabilize.
  • Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
  • Prioritized nominations refers to prioritized lang-team response to nominated issues, with the expectation that there will be some response from the next weekly triage meeting.
  • Dedicated review means identifying an individual (or group of individuals) who will review the changes, as they're expected to require significant context.
  • Other kinds of decisions:
    • Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
    • Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
    • Library API Change Proposal (ACP) describes a change to the standard library.

Frequently asked questions

None yet.

Next-generation trait solver

Metadata
Point of contact: lcnr
Teams: types
Status: Proposed
Tracking issue: rust-lang/rust-project-goals#113

Summary

Continue work towards the stabilization of -Znext-solver=globally, collecting and resolving remaining blockers. Extend its use in lints and rustdoc.

Motivation

The next-generation trait solver is intended to fully replace the existing type system components responsible for proving trait bounds, normalizing associated types, and much more. This should fix many long-standing (soundness) bugs, enable future type system improvements, and improve compile times. Its tracking issue is #107374 (https://github.com/rust-lang/rust/issues/107374).

The status quo

There are multiple type system unsoundnesses blocked on the next-generation trait solver (see the project board). Desirable features such as coinductive trait semantics and perfect derive, where-bounds on binders, and better handling of higher-ranked bounds and types are also stalled due to shortcomings of the existing implementation.

Fixing these issues in the existing implementation is prohibitively difficult as the required changes are interconnected and require major changes to the underlying structure of the trait solver. The Types Team therefore decided to rewrite the trait solver in-tree, and has been working on it since EOY 2022.

The next six months

  • resolve remaining issues affecting our compile-time benchmarks
    • fix the failing tests by 'properly' resolving the underlying issues
      • wg-grammar
      • projection-caching
      • nalgebra-0.33.0
    • improve performance
      • avoid exponential performance hits in all benchmarks
      • get most benchmarks to be neutral or improvements
  • go through the most popular crates on crates.io and fix any encountered issues
  • move additional lints and rustdoc to use the new solver by default
  • publicly ask for testing of -Znext-solver=globally once that's useful

The "shiny future" we are working towards

  • we are able to remove the existing trait solver implementation and significantly clean up the type system in general, e.g. removing most normalization in the caller by handling unnormalized types in the trait system
  • all remaining type system unsoundnesses are fixed
  • many future type system improvements are unblocked and get implemented
  • the type system is more performant, resulting in better compile times

Design axioms

In order of importance, the next-generation trait solver should be:

  • sound: the new trait solver is sound and its design enables us to fix all known type system unsoundnesses
  • backwards-compatible: the breakage caused by the switch to the new solver should be minimal
  • maintainable: the implementation is maintainable, extensible, and approachable to new contributors
  • performant: the implementation is efficient, improving compile-times

Ownership and team asks

Owner: lcnr

Add'l implementation work: Michael Goulet

Task | Owner(s) or team(s) | Notes
Discussion and moral support | Team types
Implementation | lcnr, Michael Goulet
Standard reviews | Team types
FCP decision(s) | Team types | for necessary refactorings

Support needed from the project

  • Types team
    • review design decisions
    • provide technical feedback and suggestions

Outputs and milestones

See next few steps :3

Outputs

Milestones

Frequently asked questions

None yet.

Nightly support for ergonomic SIMD multiversioning

Metadata
Point of contact: Luca Versari
Teams: lang
Status: Proposed

Summary

Figure out the best way for Rust to support generating code for multiple SIMD targets in a safe and ergonomic way.

Motivation

Even within the same architecture, CPUs vary significantly in which SIMD ISA extensions they implement[1]. Most libraries that are shipped to users in binary form thus tend to contain code for different SIMD targets and use runtime dispatch. Having compiler support for this pattern, i.e. having the compiler generate multiple versions of code written only once, would significantly help drive Rust's adoption in the world of codecs, where some of the most subtle memory vulnerabilities are found, as well as in other domains where squeezing out the last bits of performance is fundamental.

[1]: For example, x86 CPUs currently have about 12 (!) different possible configurations with respect to AVX-512 support alone.

The status quo

Currently, generating efficient code for a specific SIMD ISA requires annotating the function with appropriate attributes. This is incompatible with generating multiple versions through, e.g., generics.

This limitation can be worked around in different ways, all of which with some significant downsides:

  • Intermediate functions can be annotated as #[inline(always)] and inlined into a top-level caller, with downsides for code size.
  • Calls between "multiversioned" functions do target selection again, which inhibits inlining and has performance implications.
  • Programmers explicitly define no-inline boundaries and call functions across such boundaries in a different way; this requires significant boilerplate and has bad ergonomics.
  • Macros can create multiple copies of the relevant functions and figure out how to call between those; this is bad for compilation times and not particularly rust-y.

There are currently multiple proposals for ways to resolve the above issues. In brief:

  • allow ADTs to carry feature information and pass it on to functions that take them as argument
  • have functions automatically inherit the target features of their callers
  • let features depend on const generic arguments to functions

The trade-offs between the different approaches are complex, and there is no consensus on the best path forward. More details on the proposals can be found in this document.

The next 6 months

  • A design meeting is scheduled to discuss the best approach forward on this topic.
  • A lang team experiment is approved, enabling exploration in the compiler of the proposed approach.
  • An RFC is posted, based on the results of the exploration, and reviewed.
  • The implementation is updated to reflect changes from the RFC, and becomes broadly available in the nightly compiler.

The "shiny future" we are working towards

Once the proposed design is stabilized, Rust will offer one of the most compelling stories for achieving very high performance on multiple targets, with minimal friction for developers.

This significantly increases the adoption of Rust in performance-critical, safety-sensitive low level libraries.

Design axioms

  • The common case should be simple and ergonomic.
  • Additional flexibility to unlock the maximum possible performance should be possible and sufficiently ergonomic.
  • The vast majority of SIMD usage should be doable in safe Rust.

Ownership and team asks

Owner: Luca Versari

Task | Owner(s) or team(s) | Notes
Design meeting | Team lang
Lang-team experiment | Team lang
Experimental implementation | Luca Versari
Author RFC | Luca Versari
RFC decision | Team lang

Frequently asked questions

None yet.

Null and enum-discriminant runtime checks in debug builds

Metadata
Point of contact: Bastian Kersting
Teams: lang, opsem
Status: Proposed

Summary

Add runtime checks to rustc that check for null pointers on pointer access and invalid enum discriminants. Similar to integer overflow and pointer alignment checks, this will only be enabled in debug builds.

Motivation

While safe Rust prevents access to null references, unsafe Rust allows you to access null pointers and create null references. It hands responsibility over to the programmer to ensure the validity of the underlying memory. Especially when interacting with values that cross the language boundary (FFI, e.g. passing a C++-created pointer to Rust), reasoning about such values is not always straightforward.

At the same time, undefined behavior (UB) is triggered quickly when interacting with invalid pointers: the mere existence of a null reference is UB; it doesn't even have to be dereferenced.

The same goes for enums. An enum must have a valid discriminant, and all fields of the variant indicated by that discriminant must be valid at their respective type (source). Again, FFI could potentially pass an invalid enum value to Rust and thus cause undefined behavior.

In general, for unsafe code, the responsibility for ensuring the various invariants required by the Rust compiler lies with the programmer. They have to make sure the value is not accidentally null or misaligned and does not violate Rust's pointer aliasing rules or any other invariant. The access happens inside an unsafe block.

The status quo

While Miri exists and does a great job at catching various types of UB in unsafe Rust code, it has the downside of only working on pure Rust code. Extern functions cannot be called, and a mixed-language binary cannot be executed in Miri.

Kani, which verifies unsafe Rust via model checking, has similar limitations.

The next 6 months

Within the next half year, the plan is to start with null and enum-discriminant checks that verify the code is upholding these invariants. Since these checks obviously pose a runtime overhead, we only insert them (optionally?) in debug builds. This is similar to the integer overflow and alignment checks, which trigger a panic when observing a violation and terminate the program.

The "shiny future" we are working towards

Similar to how UBSan exists in Clang, we would like to see an option to detect undefined behavior at runtime. As mentioned above, this is critical for cross-language interoperability and can help to catch UB before it reaches production.

The extension of these checks can be done step-by-step, keeping in mind the runtime overhead. Eventually we would like to check (sanitize) most items listed as UB in Rust.

Particularly as next steps we would like to check for UB when:

  • Calling a function with the wrong call ABI or unwinding from a function with the wrong unwind ABI.
  • Performing a place projection that violates the requirements of in-bounds pointer arithmetic.
  • Eventually check the Rust pointer aliasing model (stacked borrows check).

Ownership and team asks

Owner: Bastian Kersting

Task | Owner(s) or team(s) | Notes
Discussion and moral support | Team lang, opsem
Implementation | Bastian Kersting, @vabr-g
Standard reviews | Team compiler, opsem
Design meeting | Team lang, opsem
Lang-team experiment | Team lang

Frequently asked questions

None yet.

Optimizing Clippy & linting

(a.k.a The Clippy Performance Project)

Metadata
Point of contact: Alejandra González
Teams: clippy
Status: Proposed
Tracking issue: rust-lang/rust-project-goals#114

Summary

This is the formalization and documentation of the Clippy Performance Project, a project first talked about on Zulip, July 2023. As the project consists of several points and is ever-changing, this document also has a dynamic structure and the team can add points.

In short, this is an effort to optimize Clippy and Rust's linting infrastructure, with the aim of making Clippy faster (both on CI/CD pipelines and on devs' machines).

Motivation

Clippy can take up to 2.5 times as long as a normal cargo check, and it doesn't need to! Taking so long is expensive both in development time and in real money.

The status quo

Based on some informal feedback polls, it's clear that Clippy is used in lots of different contexts, both in developers' IDEs and outside them.

The usage in IDEs is not as smooth as one might desire or expect when compared to prior art like Prettier or Ruff, or to other tools in the Rust ecosystem such as rustfmt and rust-analyzer.

The other big use case is as a check before committing or on CI. Optimizing Clippy's performance would reduce the cost of these checks.

On GitHub Actions, this excessive time can equal the cost of running cargo check on a Linux x64 32-core machine instead of a Linux x64 2-core machine: a 3.3x cost increase.

The next 6 months

In order to achieve better performance, we want to:

  • Have benchmarking software ready to run on the server.
  • Optimize the collection of Minimum Supported Rust Versions (MSRVs)
  • Migrate applicable lints to use incremental compilation

Apart from these 3 clear goals, any open issue, open PR, or merged PR with the label performance-project is a great benefit.

The "shiny future" we are working towards

The possible outcome would be a system that can be run on-save without being a hassle to the developer, and that has the minimum possible overhead over cargo check (which would itself also benefit from a subset of these optimizations).

A developer shouldn't have to get a high-end machine to run a compiler swiftly; and a server should not spend more valuable seconds on linting than strictly necessary.

Ownership and team asks

Owner: Alejandra González

Task | Owner(s) or team(s) | Notes
Implementation | Alejandra González, Alex Macleod
Standard reviews | Team clippy

Feedback poll: https://tech.lgbt/Alejandra González/112747808297589676
Prettier: https://github.com/prettier/prettier
Ruff: https://github.com/astral-sh/ruff

Promoting Parallel Front End

Metadata
Point of contact: Sparrow Li
Teams: compiler
Status: Accepted
Tracking issue: rust-lang/rust-project-goals#121

Summary

Continue the parallel front-end stabilization and performance-improvement work, continuing from the 2024h2 goal.

Motivation

There are still some occasional deadlock issues, and in environments with high thread counts (>16) performance may be reduced due to data contention.

The status quo

Many current issues reflect ICEs or deadlocks that occur when using the parallel front end. We need to resolve these issues to enhance its robustness. We also need theoretical algorithms to detect potential deadlocks in the query system.

The current parallel front end still has room for further improvement in compilation performance, such as parallelization of HIR lowering and macro expansion, and reduction of data contention under more threads (>= 16).

We can use the parallel front end in bootstrap to alleviate the slow build of the whole project.

Cargo does not provide an option to enable the parallel front end, so it can only be enabled by passing rustc options manually.

The next 6 months

  • Solve reproducible deadlock issues via tests.
  • Enable the parallel front end in bootstrap.
  • Continue to improve parallel compilation performance, raising the average speedup from 20% to 25% under 8 cores and 8 threads.
  • Communicate with the Cargo team on a solution and plan for supporting the parallel front end.

The "shiny future" we are working towards

We need a detection algorithm to theoretically prove that the current query system and query execution process cannot introduce deadlocks.

It is difficult to fundamentally eliminate deadlocks with the existing rayon-based implementation, so we may need a better scheduling design that eliminates deadlocks without affecting performance.

The current compilation process, with GlobalContext as the core of data storage, is not very friendly to the parallel front end. We may try to reduce the storage granularity (for example, to modules) to reduce data contention under more threads and improve performance.

Design axioms

The parallel front end should be:

  • safe: ensure the safe and correct execution of the compilation process
  • consistent: the compilation results should be consistent with those of a single-threaded build
  • maintainable: the implementation should be easy to maintain and extend, and should not confuse developers who are unfamiliar with it

Ownership and team asks

Owner: Sparrow Li and Parallel Rustc WG own this goal

Task | Owner(s) or team(s) | Notes
Implementation | Sparrow Li
Author tests | Sparrow Li
Discussion and moral support | Team compiler

Frequently asked questions

Prototype a new set of Cargo "plumbing" commands

Metadata
Point of contact: Ed Page
Teams: cargo
Status: Proposed for mentorship

Summary

Create a third-party cargo subcommand that has "plumbing" (programmatic) subcommands for different phases of Cargo operations to experiment with what Cargo should integrate.

Motivation

Cargo is a "porcelain" (UX-focused) command and is highly opinionated, which works well for common cases. However, as Cargo scales into larger applications, users need to adapt it to their specific processes and needs.

The status quo

While most Cargo commands can be used programmatically, they still only operate at the porcelain level. Currently, Cargo's plumbing commands are

  • cargo read-manifest:
    • works off of a Cargo.toml file on disk
    • uses a custom json schema
    • deprecated
  • cargo locate-project:
    • works off of a Cargo.toml file on disk
    • text or json output, undocumented json schema
    • uses a pre-1.0 term for package
  • cargo metadata:
    • works off of Cargo.toml, Cargo.lock files on disk
    • uses a custom json schema
    • can include dependency resolution but excludes feature resolution
    • some users want this faster
    • some users want this to report more information
    • See also open issues
  • cargo pkgid:
    • works off of Cargo.toml, Cargo.lock files on disk
    • text output
  • cargo verify-project:
    • works off of a Cargo.toml file on disk
    • uses a custom json schema
    • uses a pre-1.0 term for package
    • deprecated

There have been experiments with plumbing commands for builds:

  • --build-plan attempts to report what commands will be run so external build tools can manage them.
    • The actual commands to be run are dynamic, based on the output of build scripts from build-graph dependencies
    • Difficulty in supporting build pipelining
  • --unit-graph reports the graph the build operates off of which corresponds to calls to the compiler and build scripts
    • Also provides a way to get the results of feature resolution

The next 6 months

Create a third-party subcommand to experiment with plumbing commands.

A build in Cargo can roughly be split into

  1. Locate project
  2. Read the manifests for a workspace
  3. Read lockfile
  4. Lock dependencies
  5. Write lockfile
  6. Resolve features
  7. Plan a build, including reading manifests for transitive dependencies
  8. Execute a build
  9. Stage final artifacts

These could serve as starting points for experimenting with plumbing commands. Staging of final artifacts may not be worth a dedicated command. This goal is exclusively focused on build, while other operations may be of interest to users; we can evaluate those commands in the future, as they tend to build off of these same core primitives.

At minimum, later commands in the process would accept output from earlier commands, allowing the caller to either replace commands (e.g. custom dependency resolver) or customize the output (e.g. remove dev-dependencies from manifests).

Encapsulating stabilized file formats can serve as a starting point for output schemas as we already output those and have to deal with stability guarantees around these.

The step between planning a build and executing a build is likely to look like --unit-graph, and a plan will need to be put forward for how to work through the open issues. There will likely be similar issues for any other output that can't leverage existing formats.

Cargo's APIs may not be able to expose each of these stages and work may need to be done to adapt it to support these divisions.

The performance of piping output between these commands may be sub-par, coming from a combination of at least

  • Cargo's APIs may require doing more work than is needed for these stages
  • Cargo focuses on json for programmatic output, which may prove sub-par (see also zulip)
  • Cargo's serde structures may not be optimized
  • If customizing only a single step in this process, requiring serializing and deserializing through all of the other stages may be superfluous

Low-hanging or egregious bottlenecks may need to be addressed. Otherwise, performance work should wait on user feedback.

A schema evolution plan will need to be considered with the design of the schema. How Cargo deals with evolution of existing output could serve as potential starting points:

  • Cargo.toml (generated by cargo package) should still be readable by cargo versions within the specified package.rust-version
    • In the absence of a package.rust-version, Cargo.toml should only represent features the user explicitly used or optional features that were always allowed on stable cargo
  • Cargo.lock (generated by most commands) is strictly versioned: output from any version of Cargo should work in all other versions of Cargo for that given format version, and changing Cargo versions should not cause the output to change
    • Cargo bumps the default format version after it has been stabilized for a "sufficient period of time"
    • The default is capped by what is supported by the lowest package.rust-version in the workspace
  • cargo metadata --format-version: defaults to "latest" with a warning
    • We attempt to follow the same practice as Cargo.toml
  • --message-format: no versioning currently
    • We attempt to follow the same practice as Cargo.toml

The "shiny future" we are working towards

  • Collect user feedback on these commands and iterate on them for eventual inclusion into Cargo
  • Evaluate refactoring Cargo to better align with these plumbing commands to have better boundaries between subsystems
  • Evaluate splitting the cargo [lib] into crates for each of these plumbing commands as smaller, more approachable, more "blessed" Rust APIs for users to call into

Design axioms

  • The changes to Cargo should not impede the development of Cargo
  • The schemas and planned evolution should not impede the development of Cargo
  • The plumbing commands should be focused on solving expected or known needs, avoiding speculation.

Ownership and team asks

Task | Owner(s) or team(s) | Notes
Discussion and moral support | Team cargo
Implementation | Help wanted
Optimizing Cargo | Help wanted, Ed Page
Inside Rust blog post inviting feedback | Ed Page

Frequently asked questions

Publish first version of StableMIR on crates.io

Metadata
Point of contact: Celina G. Val
Teams: project-stable-mir
Status: Proposed

Summary

Publish StableMIR crate(s) to crates.io to allow tool developers to create applications on top of the Rust compiler, and to extract code information from a compiled Rust crate and its dependencies without using compiler-internal APIs.

Motivation

In the past couple of years we have introduced a more stable API, named StableMIR, to the Rust compiler to enable tool developers to analyze and extract information from compiled Rust crates without directly depending on compiler internals. By publishing StableMIR crate(s) to crates.io, we can provide a reliable interface that enables developers to build analysis tools, development environments, and other applications that work with Rust code while being insulated from internal compiler changes.

Publishing these crates through crates.io will make them easily accessible to the broader Rust community and establish a foundation for building a robust ecosystem of development tools. This will benefit the entire Rust ecosystem by enabling developers to create sophisticated tooling such as static analyzers, linters, and development environments that can work reliably across different Rust compiler versions. Besides stability, users will be able to rely on semantic versioning to track and adapt to changes, reducing the existing maintenance burden for these developers.

The status quo

In the past couple of years we have introduced a more stable API, named StableMIR, to the Rust compiler. This API provides tool developers more stability and predictability, reducing the maintenance cost, as well as providing a smaller surface API to reduce the ramp-up time for new developers.

However, the StableMIR consumption model is still the same as that of any other internal compiler crate: it doesn't have an explicit version, and it must be imported using an extern crate statement.
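
To make the contrast concrete, here is a sketch of what the change would mean for a tool's build setup. The crate name and version number shown below are hypothetical; today's consumption model requires a nightly toolchain and compiler-internal imports:

```toml
# Status quo: no manifest entry exists. A tool builds on nightly and
# imports the compiler-internal crate directly in source code:
#
#     #![feature(rustc_private)]
#     extern crate stable_mir;
#
# After publication: an ordinary, semver-tracked dependency
# (the crate name and version here are hypothetical):
[dependencies]
stable-mir = "0.1"
```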

The next 6 months

The first task is to restructure the relationship between the stable-mir and rustc_smir crates, eliminating existing dependencies on the stable-mir crate.

This will be followed by forking the stable-mir crate into its own repository, where we'll implement CI jobs designed to detect any breaking changes that might occur due to compiler updates.

Once the structural changes are complete, we'll shift our attention to documentation and publication. This includes creating comprehensive developer documentation that covers maintenance procedures for both crates, ensuring future maintainers have clear guidelines for updates and compatibility management.

The final step will be publishing the newly refactored and documented version of stable-mir to crates.io, making it readily available for tool developers in the Rust ecosystem.

The "shiny future" we are working towards

By establishing a stable and well-documented interface, we would like to empower developers to build a rich tooling ecosystem for Rust that can be maintained in parallel with the Rust compiler's development.

This parallel development model ensures that tools can evolve alongside Rust itself, fostering innovation and reducing bottlenecks.

Design axioms

  • Enable tool developers to implement sophisticated analysis with low maintenance cost.
  • Do not compromise the development and innovation speed of the rust compiler.
  • Crates should follow semantic versioning.

Ownership and team asks

Task | Owner(s) or team(s) | Notes
Discussion and moral support | Team compiler |
Implementation | Celina G. Val |
Standard reviews | Team project-stable-mir |
Fork configuration | Help needed |
Documentation | Help needed |
Publish crate | Celina G. Val |

Definitions

Definitions for terms used above:

  • Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.

Frequently asked questions

Research: How to achieve safety when linking separately compiled code

Metadata
Point of contact: Mara Bos
Teams: lang, compiler
Status: Proposed

Summary

Research what "safety" and "unsafety" means when dealing with separately compiled code, like when loading dynamically linked libraries. Specifically, figure out how it'd be possible to provide any kind of safety when linking external code (such as loading a dynamic library or linking a separately compiled static library).

Motivation

Rust has a very clear definition of "safe" and "unsafe" and (usually) makes it easy to stay in the "safe" world. unsafe blocks usually only have to encapsulate very small blocks of which one can (and should) prove soundness manually.

When using #[no_mangle] and/or extern { … } to connect separately compiled code, however, any concept of safety pretty much disappears.

While it might be reasonable to make some assumptions about (standardized) symbols like strlen, the unsafe assumption that a symbol with the same name will refer to something of the expected signature is not something one can prove at compile time; it is rather a (hopeful, perhaps reasonable) expectation about the contents of dynamic libraries available at runtime.

The end result is that for use cases like plugins, we have no option other than to unsafely hope for the best, accepting that we cannot perfectly guarantee that undefined behavior is impossible when linking or loading a library, a plugin, or some other external code.

The status quo

Today, combining separately compiled code (from different Rust projects or different languages) is done through a combination of extern "…" fn, #[repr(…)], #[no_mangle], and extern {…}.

Specifically:

  1. extern "…" fn (which has a bit of a confusing name) is used to specify the calling convention or ABI of a function.

    The default one is the "Rust" ABI, which (purposely) has no stability guarantees. The "C" ABI is often used for its stability guarantees, but places restrictions on the possible signatures.

  2. #[repr(…)] is used to control memory layout.

    The default one is the Rust layout, which (purposely) has no stability guarantees. The C layout is often used for its stability guarantees, but places restrictions on the types.

  3. #[no_mangle] and extern {…} are used to control the symbols used for linking.

    #[no_mangle] is used for exporting an item under a known symbol, and extern { … } is used for importing an item with a known symbol.
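
The two sides of this linkage can be shown in a single program. The sketch below (with an invented symbol name) compiles today; note that nothing checks that the extern declaration matches the definition by anything other than the symbol name, which is exactly the gap described above:

```rust
// Exporting: the function is visible to the linker under exactly the
// symbol `plugin_entry` (no Rust name mangling).
#[no_mangle]
pub extern "C" fn plugin_entry(x: i32) -> i32 {
    x + 1
}

mod importer {
    // Importing: we *declare* a symbol of this name and signature.
    // The compiler cannot verify that the declaration matches the real
    // definition; the linker matches by name alone. If the signatures
    // disagreed, calling it would be undefined behavior.
    extern "C" {
        pub fn plugin_entry(x: i32) -> i32;
    }
}

fn main() {
    // The call is unsafe: we are trusting, not proving, that the symbol
    // behind the name has the declared signature.
    let y = unsafe { importer::plugin_entry(41) };
    assert_eq!(y, 42);
    println!("{y}");
}
```

Here the declaration happens to resolve to our own exported symbol, so everything lines up; with a dynamically loaded library there is no such guarantee.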

There have often been requests for a "stable Rust ABI", which usually refers to a calling convention and memory layout that is as unrestrictive as extern "Rust" fn and #[repr(Rust)], but as stable as extern "C" fn and #[repr(C)].

It seems unlikely that extern "Rust" fn and #[repr(Rust)] would ever come with stability guarantees, as allowing for changes when stability is not necessary has its benefits. It seems most likely that a "stable Rust ABI" will arrive in the form of a new ABI, by adding some kind of extern "Rust-stable-v1" (and repr) or similar (such as extern "crabi" fn and #[repr(crabi)] proposed here), or by slowly extending extern "C" fn and #[repr(C)] to support more types (like tuples and slices, etc.).

Such developments would lift restrictions on which types one can use in FFI, but just a stable calling convention and memory layout will do almost nothing for safety, as linking/loading a symbol (possibly at runtime) with a different signature (or ABI) than expected will still immediately lead to undefined behavior.

Research question and scope

This research project focuses entirely on point 3 above: symbols and linking.

The main research question is:

What is necessary for an alternative for #[no_mangle] and extern { … } to be safe, with a reasonable and usable definition of "safe"?

We believe this question can be answered independently of the specifics of a stable calling convention (point 1) and memory layout (point 2).

RFC3435 "#[export]" for dynamically linked crates proposes one possible way to provide safety in dynamic linking. The goal of the research is to explore the entire solution space and understand the requirements and limitations that would apply to any possible solution/alternative.

The next 6 months

  • Assemble a small research team (e.g. an MSc student, a professor, and a researcher/mentor).
  • Acquire funding.
  • Run this as an academic research project.
  • Publish intermediate results as a blog post.
  • (After ~9 months) Publish a thesis and possibly a paper that answers the research question.

The "shiny future" we are working towards

The future we're working towards is one where (dynamically) linking separately compiled code (e.g. plugins, libraries, etc.) will feel like a first class Rust feature that is both safe and ergonomic.

Depending on the outcomes of the research, this can provide input and design requirements for future (stable) ABIs, and potentially pave the way for safe cross-language linking.

Design axioms

  • Any design is either fully safe, or makes it possible to encapsulate the unsafety in a way that allows one to prove soundness (to a reasonable extent).
  • Any design allows for combining code compiled with different versions of the Rust compiler.
  • Any design is usable for statically linking separately (pre) compiled static libraries, dynamically linking/loading libraries, and dynamically loading plugins.
  • Designs require as few assumptions as possible about the calling convention and memory layout. Ideally, the only requirement is that they are stable, which means the design can be used with the existing extern "C" fn and #[repr(C)].

Ownership and team asks

Owner: Mara Bos and/or Jonathan Dönszelmann

Task | Owner(s) or team(s) | Notes
Discussion and moral support | Team lang |
Coordination with university | Jonathan Dönszelmann | Delft University of Technology
Acquire funding | Hexcat (= Mara Bos + Jonathan Dönszelmann) |
Research | Research team (MSc student, professor, etc.) |
Mentoring and interfacing with Rust project | Mara Bos, Jonathan Dönszelmann |
Blog post (author, review) | MSc student, Jonathan Dönszelmann, Mara Bos |
Experimental implementation | MSc student |
Lang-team experiment | Team lang | Niko Matsakis as liaison
Standard reviews | Team compiler |
Thesis / Paper | Research team (MSc student, professor, etc.) |

Frequently asked questions

Is there a university and professor interested in this?

Yes! We've discussed this with a professor at the Delft University of Technology, who is excited and already looking for interested students.

Run the 2025H1 project goal program

Metadata
Point of contact: Niko Matsakis
Teams: Leadership Council
Status: Proposed

Summary

  • Create a goals team for running the project goals program
  • Run the second round of the Rust project goal program experiment

Motivation

Over 2024H2 we ran the first round of an experimental new Rust Project Goal program, to reasonable success. Based on feedback received, we will make some minor adjustments to the plan and try a second round. We will also create a team so that the program is run in a more sustainable way. Assuming the results of this second round continue to be positive, in 2025h2 we would look to author an RFC describing the structure of the project goal program and making it a recurring part of project life.

The status quo

The Rust Project Goal program aims to resolve a number of challenges that the project faces for which having an established roadmap, along with a clarified ownership for particular tasks, would be useful:

  • Focusing effort and avoiding burnout:
    • One common contributor to burnout is a sense of lack of agency. People have things they would like to get done, but they feel stymied by debate with no clear resolution; feel it is unclear who is empowered to "make the call"; and feel unclear whether their work is a priority.
    • Having a defined set of goals, each with clear ownership, will address that uncertainty.
  • Helping direct incoming contribution:
    • Many would-be contributors are interested in helping, but don't know what help is wanted/needed. Many others may wish to know how to join in on a particular project.
    • Identifying the goals that are being worked on, along with owners for them, will help both groups get clarity.
  • Helping the Foundation and Project to communicate
    • One challenge for the Rust Foundation has been the lack of clarity around project goals. Programs like fellowships, project grants, etc. have struggled to identify what kind of work would be useful in advancing project direction.
    • Declaring goals, and especially goals that are desired but lack owners to drive them, can be very helpful here.
  • Helping people to get paid for working on Rust
    • A challenge for people who are looking to work on Rust as part of their job -- whether that be full-time work, part-time work, or contracting -- is that the employer would like to have some confidence that the work will make progress. Too often, people find that they open RFCs or PRs which do not receive review, or which are misaligned with project priorities. A secondary problem is that there can be a perceived conflict-of-interest because people's job performance will be judged on their ability to finish a task, such as stabilizing a language feature, which can lead them to pressure project teams to make progress.
    • Having the project agree before-hand that it is a priority to make progress in an area and in particular to aim for achieving particular goals by particular dates will align the incentives and make it easier for people to make commitments to would-be employers.

For more details, see

The next 6 months

  • Create a team to run the goal program in a more sustainable way
  • Publish monthly status updates on the goals selected for 2025h1

The "shiny future" we are working towards

We envision the Rust Project Goal program as a permanent and ongoing part of Rust development. People looking to learn more about what Rust is doing will be able to visit the Rust Project Goal website and get an overview; individual tracking issues will give them a detailed rundown of what's been happening.

Rust Project Goals also serve as a "front door" to Rust, giving would-be contributors (particularly more prolific contributors, contractors, or companies) a clear way to bring ideas to Rust and get them approved and tracked.

Running the Rust Project Goals program will be a relatively scalable task that can be executed by a single individual.

Design axioms

  • Goals are a contract. Goals are meant to be a contract between the owner and project teams. The owner commits to doing the work. The project commits to supporting that work.
  • Goals aren't everything, but they are our priorities. Goals are not meant to cover all the work the project will do. But goals do get prioritized over other work to ensure the project meets its commitments.
  • Goals cover a problem, not a solution. As much as possible, the goal should describe the problem to be solved, not the precise solution. This also implies that accepting a goal means the project is committing that the problem is a priority: we are not committing to accept any particular solution.
  • Nothing good happens without an owner. Rust endeavors to run an open, participatory process, but ultimately achieving any concrete goal requires someone (or a small set of people) to take ownership of that goal. Owners are entrusted to listen, take broad input, and steer a well-reasoned course in the tradeoffs they make towards implementing the goal. But this power is not unlimited: owners make proposals, but teams are ultimately the ones that decide whether to accept them.
  • To everything, there is a season. While there will be room for accepting new goals that come up during the year, we primarily want to pick goals during a fixed time period and use the rest of the year to execute.

Ownership and team asks

Owner: Niko Matsakis

  • Niko Matsakis can commit 20% time (an average of 1 day per week) to pursue this task, which he estimates to be sufficient.
Task | Owner(s) or team(s) | Notes
Begin soliciting goals in Nov 2024 | Niko Matsakis |
Approve goal slate for 2025h1 | leads of each team |
Top-level Rust blog post for 2025h1 goals | Niko Matsakis |
Propose team membership | Niko Matsakis |
Org decision | Team leadership-council | approve creation of new team
January goal update | goals team |
February goal update | goals team |
Author RFC | goals team |
March goal update | goals team |
Begin soliciting goals for 2025h2 | goals team |
April goal update | goals team |
May goal update | goals team |
June goal update | goals team |


Frequently asked questions

None.

Rust Specification Testing

Metadata
Point of contact: Connor Horman
Teams: T-spec
Status: Proposed

Summary

The Rust test suite covers huge portions of the Rust Compiler (rustc). To ensure that the content of the Rust specification is correct, and ongoing compliance is validated, Rust tests will be added and linked directly in the specification itself.

Motivation

The Rust Specification has been authored over the past year by the Specification Team (t-spec). The team currently has a generally well-defined path forward on the content for the specification. However, to ensure this text is accurate, there need to be appropriate tests.

The status quo

The Rust compiler currently has tests for many aspects of the language: tests of language guarantees that will be documented in the specification, and tests of implementation-specific behaviour that is desirable to check for other reasons (including diagnostics and some optimizations). These tests are largely contained in the ui test suite, where the two kinds are disorganized and intermingled. Some tests exercise both language-guaranteed behaviour and implementation-specific behaviour.

The next 6 months

New and existing tests will be integrated with the specification through tagging individual tests with paragraph identifiers from the reference or the FLS. In cooperation with the compiler team and the bootstrap team, the test structure will be reorganized to make it more clear which tests are exercising guaranteed aspects of the language and which tests may be exercising chosen details of the Rust Compiler (i.e., rustc) implementation.
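
As a sketch of what a specification-linked test might look like: the `//@ reference:` directive and paragraph identifier below are illustrative, not necessarily the tagging scheme this goal will adopt. The body exercises a language-guaranteed behaviour rather than an implementation detail of rustc:

```rust
// Hypothetical annotation linking this test to a specification paragraph
// (identifier scheme is invented for illustration):
//
//   //@ reference: expr.arith.wrapping
//
// The assertions below check guaranteed semantics: wrapping arithmetic
// is defined to wrap modulo 2^N on every conforming implementation.
fn main() {
    assert_eq!(i32::MAX.wrapping_add(1), i32::MIN);
    assert_eq!(u8::MAX.wrapping_add(1), 0);
    println!("spec-guaranteed behaviour verified");
}
```

A test like this would be tagged as exercising guaranteed behaviour, while a test asserting a particular diagnostic message would be tagged as implementation-specific.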

The "shiny future" we are working towards

The integration of testing into the specification should:

  • Aid review of the Reference and Specification, by providing Rust code that demonstrates and validates the text of those documents,
  • Likewise assist readers who may wish to view the implications of a given paragraph in a programmatic manner,
  • Aid the development of the Rust language, and assist improvements to the processes being considered by the Language Team,
  • Aid the development of the Rust compiler and its test suite as a whole, by improving organization of the test suite, including differentiating between tests of language-level guaranteed behaviour and tests of implementation-specific behaviour, and
  • Aid in the use of the Rust Specification in the context of safety-critical development, by providing traceability for the content of the Specification.

Ownership and team asks

Owner: Connor Horman

Task | Owner(s) or team(s) | Notes
Author RFC | Connor Horman |
RFC decision | Teams spec, compiler, bootstrap, lang |
Move/Extract Tests | Connor Horman | As needed
Annotate Moved Tests | |
Author new tests | Connor Horman |

Frequently asked questions

Rust Vision Document

Metadata
Point of contact: Niko Matsakis
Teams: Leadership Council
Status: Proposed

Summary

Present a first draft of a "Rust Vision Doc" at the Rust All Hands in May.

The Rust Vision Doc will summarize the state of Rust adoption -- where is Rust adding value? what works well? what doesn't? -- based on conversations with individual Rust users from different communities, major Rust projects, and companies large and small that are adopting Rust. It will use that raw data to make recommendations on what problems Rust should be attacking and what constraints we should be trying to meet. The document will not include specific feature designs, which ought to be legislated through RFCs.

Motivation

The goal is to author a longer term "vision doc" that identifies key opportunities for Rust over the next 3-5 years. The document will help us focus our energies and attack problems that move the needle for Rust.

Rust's planning processes have a 6 month time window

Rust's official planning processes are currently scoped at the 6 month horizon (project goals). Of course a number of longer term initiatives are in play, some quite long indeed, such as the long drive towards a better async experience or towards parallel compilation. This planning primarily lives in the heads of the maintainers doing the work. There is no coordinated effort to collect the experiences of Rust users in a larger way.

It's time for us to think beyond "adoption"

Rust's goal since the beginning has been to empower everyone to build reliable and efficient software. We wanted Rust to be a language that would enable more people to build more things more quickly than was possible before. And not just any things, but things that worked well, saved resources, and lasted a long time. This is why Rust prizes performance, reliability, productivity, and long-term maintenance.

Once the basic design of Rust had come into focus, it was clear that our primary goal was adoption. But now that Rust has established a foothold in the industry, adoption on its own is not clearly the right goal for us. Rust, like most any general purpose language, can be used for all kinds of things. What are the kinds of applications where Rust is already a great fit, and how could it get even better? And what are the kinds of applications where Rust could be a great fit, if we overcame some obstacles?

To know where to go, you have to know where you are

The biggest effort towards central planning was the authoring of the Async Vision Doc, which took place in 2021. The Async Vision Doc effort began by collecting status quo stories describing the experiences of using async in a number of scenarios, based on a cast of four characters1: Alan, Grace, Niklaus, and Barbara. These stories were "crowd sourced" over several months, during which time we held video chats and interviews.

Writing the "status quo" stories helped us to compensate for the curse of knowledge: the folks working on Async Rust tended to be experts in Async Rust, familiar with the little tips and tricks that can get you out of a jam. The stories helped us to see the impact from little paper cuts that we had long since overlooked, while also identifying deeper challenges and blockers.

1: Any resemblance between these names and famous programming language pioneers is purely coincidental.

Gathering stories from both individuals and groups

Gathering stories from individuals can be done with the same techniques we used with the Async Vision Doc, like online meetings and soliciting PRs. We may also be able to coordinate with Rust conferences.

For the broader Rust vision doc, we would also like to proactively seek input from groups that we think would have useful context:

  • Rust trainers and consultants;
  • groups driving adoption at companies;
  • groups like the Rust Foundation.

Focus on opportunities and requirements instead of a specific "shiny future"

After the Status Quo story gathering, the Async Vision Doc attempted to author a shiny future. The intent was to align the community around a single vision, but (in the opinion of the author, myself) it was not especially successful. There are several reasons for this. For one, the document was never RFC'd, which meant it did not truly represent a consensus. Second, it attempted to paint a more precise picture than was truly possible. The design of new features in complex domains like async is subject to a "fog of war" effect2: the immediate next steps can be relatively clear, and perhaps the end point is even somewhat understood, but the path between them will have to be figured out as you go. Trying to author a shiny future is inherently challenging.

2: Hat tip to Tyler Mandry for this name.

For the Rust Vision Doc, we plan to take a different approach. Rather than authoring a shiny future, we will identify specific opportunities -- places where we believe Rust could have a huge impact on the state of software development. For each of those, we'll make recommendations about the kinds of problems that need to be solved for Rust to be truly successful in those domains. We will back up those recommendations with references to status quo stories and other data.

The next 6 months

Our goal for the next 6 months is to present a first draft of the vision doc at the Rust All Hands, planned for May 2025. We will use this opportunity to get feedback on the doc structure and recommendations and to begin work on the actual RFC, expected to be accepted in 2025H2.

Here is the overall plan for 2025H1:

Task | Nov Dec Jan Feb Mar Apr May Jun
Form a team | ██████
Gather status quo stories | ██████░░░
Coalesce stories and personae | ░░░██████
Develop recommendations and goals | ░░░███
Review RFC Draft 1 at Rust All Hands | ██████
Publish a blog post with summarized feedback | ███

The plan actually begins now, in the goal construction phase. One of the tasks to be done is building up a small support team of researchers who will help with doing the interviews and authoring status quo stories and other parts of the document. As goal point of contact, Niko Matsakis will select initial members. With the Async Vision Doc, our experience was that most Rust users are eager to share their experiences, but that authoring and upleveling that into a status quo story is challenging. It's better to centralize that authorship into a small group of motivated people.

The plan to finalize the document is as follows:

  • We will be gathering and summarizing data for the first 3 months.
  • In early April we will begin authoring the first draft.
  • We will present the first draft for review at the Rust All Hands and the associated Rust Week conference.
  • We will publish a blog post with collected feedback.

Approval of the RFC indicates general alignment with the framing and priorities it describes. It will not commit any Rust team to any particular action.

The "shiny future" we are working towards

Assuming this vision doc is successful, we believe it should be refreshed on a regular basis. This would be a good complement to the Rust Project Goal system. Project Goals describe the next few steps. The Vision Doc helps to outline the destination.

We also expect that the Vision Doc template may be useful in other, more narrow contexts, such as a revised version of the Async Vision Doc, or a vision doc for Rust in UI, machine learning, etc.

Design axioms

  • Shared understanding of the status quo is key. The experience of the async vision doc was that documenting the status quo had huge value.
  • Describe the problem and requirements, not the solution. Attempting to design 3-5 years of features in 6 months is clearly impossible. We will focus on identifying areas where Rust can have a big impact and describing the kinds of things that are holding it back.

Ownership and team asks

Task | Owner(s) or team(s) | Notes
Select support team members | Niko Matsakis |
Miscellaneous | Team leadership-council | Create supporting subteam + Zulip stream
Gathering of status quo stories | vision team |
Prepare draft of RFC to be presented at Rust All Hands | vision team |


Frequently asked questions

Why are you creating a support team? Should it be a working group?

Oh geez I don't know what to call anything anymore. I think this is a time-limited team created for the purpose of authoring this RFC and then disbanded. We can call that a working group, project group, whatever.

I do think that if this doc is successful there might be a role for a longer-term maintenance team, perhaps one that also helps to run the project goals effort. That's a topic for another day.

SVE and SME on AArch64

Metadata
Point of contactDavid Wood
Teamslang, types, compiler
StatusProposed

Arm's Rust team is David Wood, Adam Gemmell, Jacob Bramley, Jamie Cunliffe and James, as well as Kajetan Puchalski and Harry Moulton as graduates on rotation. This goal will be primarily worked on by David Wood and Jamie Cunliffe.

Summary

Over the next six months, we will aim to merge nightly support for SVE and establish a path towards stabilisation:

  • propose language changes which will enable scalable vector types to be represented in Rust's type system
  • land an experimental nightly implementation of SVE
  • identify remaining blockers for SVE stabilisation and plan their resolution
  • gain a better understanding of SME's implications for Rust and identify first steps towards design and implementation

Motivation

AArch64 is an important architecture for Rust, with two tier 1 targets and over thirty targets in lower tiers. It is widely used by some of Rust's largest stakeholders and as a systems language, it is important that Rust is able to leverage all of the hardware capabilities provided by the architecture, including new SIMD extensions: SVE and SME.

The status quo

SIMD types and instructions are a crucial element of high-performance Rust applications and allow for operating on multiple values in a single instruction. Many processors have SIMD registers of a known fixed length and provide intrinsics which operate on these registers. Arm's Neon extension is well-supported by Rust and provides 128-bit registers and a wide range of intrinsics.

Instead of releasing more extensions with ever-increasing register bit widths, recent versions of AArch64 include the Scalable Vector Extension (SVE), with vector registers whose width depends on the CPU implementation, and bit-width-agnostic intrinsics for operating on these registers. With SVE, code need not be rewritten for new architecture extensions with larger registers, new types and new intrinsics; instead, the same code will work on newer processors with different vector register lengths and performance characteristics.

SVE has interesting and challenging implications for Rust, introducing value types with sizes that can only be known at runtime, requiring significant work on the language and compiler. Arm has since introduced the Scalable Matrix Extension (SME), building on SVE to add new capabilities for efficiently processing matrices, with even more interesting implications for Rust.

SVE hardware is now generally available, and key Rust stakeholders want to be able to use these architecture features from Rust. In a recent discussion on SVE, Amanieu, co-lead of the library team, said:

I've talked with several people in Google, Huawei and Microsoft, all of whom have expressed a rather urgent desire for the ability to use SVE intrinsics in Rust code, especially now that SVE hardware is generally available.

While SVE is specifically an AArch64 extension, the infrastructure for scalable vectors in Rust should also enable Rust to support RISC-V's "V" Vector Extension, and this goal will endeavour to extend Rust in an architecture-agnostic way. SVE is supported in C through Arm's C Language Extensions (ACLE) but requires a change to the C standard (documented in pages 122-126 of the 2024Q3 ACLE), so Rust has an opportunity to be the first systems programming language with native support for these hardware capabilities.

SVE is currently entirely unsupported by Rust. There is a long-standing RFC for the feature which proposes special-casing SVE types in the type system, and an experimental implementation based on this RFC. While these efforts have been very valuable in understanding the challenges involved in implementing SVE in Rust, and provide an experimental forever-unstable implementation, they cannot be stabilised as-is.

This goal's owners have a nearly-complete RFC proposing language changes which will allow scalable vectors to fit into Rust's type system. This pre-RFC has been informally discussed with members of the language and compiler teams and will be submitted alongside this project goal.

The next 6 months

The primary objective of this initial goal is to land a nightly experiment with SVE and establish a path towards stabilisation:

  • Landing a nightly experiment is nearing completion, having been in progress for some time. Final review comments are being addressed and both RFC and implementation will be updated shortly.
  • A comprehensive RFC proposing extensions to the type system will be opened alongside this goal. It will primarily focus on extending the Sized trait so that SVE types, which are value types whose size is fixed at runtime but unknown at compilation time, can implement Copy despite not implementing Sized.
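To illustrate the kind of type the RFC must accommodate, here is a pseudocode sketch. The syntax is invented for illustration (it is loosely modelled on the repr(scalable) attribute from the existing experimental implementation) and is not valid on any current toolchain:

```
// Pseudocode only; not valid Rust today.
#[repr(scalable(4))]        // 4 × f32 elements per 128 bits of vector width
pub struct svfloat32_t {
    _marker: [f32; 0],
}

// Such a type must be Copy (it is a plain value living in vector
// registers) without being Sized, since its byte size is fixed for a
// given CPU but unknown at compile time.
```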

The "shiny future" we are working towards

Adding support for the Scalable Matrix Extension (SME) in Rust is the next logical step following SVE support. There are still many unknowns regarding what this will involve, and part of this goal, or of the next one, will be understanding these unknowns better.

Design axioms

  • Avoid overfitting. It's important that whatever extensions to Rust's type system are proposed are not narrowly tailored to support for SVE/SME, can be used to support similar extensions from other architectures, and unblocks or enables other desired Rust features wherever possible and practical.
  • Low-level control. Rust should be able to leverage the full capabilities and performance of the underlying hardware features and should strive to avoid inherent limitations in its support.
  • Rusty-ness. Extensions to Rust to support these hardware capabilities should align with Rust's design axioms and feel like natural extensions of the type system.

Ownership and team asks

Here is a detailed list of the work to be done and who is expected to do it. This table includes the work to be done by owners and the work to be done by Rust teams (subject to approval by the team in an RFC/FCP).

TaskOwner(s) or team(s)Notes
Discussion and moral supportTeam lang, types, compiler

Land nightly experiment for SVE types

TaskOwner(s) or team(s)Notes
Land nightly experiment for SVE typesJamie Cunliffe
Author RFCUpdate rfcs#3268, will still rely on exceptions in the type system
RFC decisionTeam lang, types
ImplementationUpdate rust#118917
Standard reviewsTeam compiler

Upstream SVE types and intrinsics

TaskOwner(s) or team(s)Notes
Upstream SVE types and intrinsicsJamie CunliffeUsing repr(scalable) from previous work, upstream the nightly intrinsics and types

Extending type system to support scalable vectors

TaskOwner(s) or team(s)Notes
Extending type system to support scalable vectorsDavid Wood
Author RFC
RFC decisionTeam lang, types
Implementation
Standard reviewsTeam compiler

Investigate SME support

TaskOwner(s) or team(s)Notes
Investigate SME supportJamie Cunliffe, David Wood
Discussion and moral supportTeam lang, types, compiler
Draft next goalDavid Wood


Frequently asked questions

None yet.

Scalable Polonius support on nightly

Metadata
Point of contactRémy Rakic
Teamstypes
StatusProposed

Summary

Keep working on implementing a native rustc version of the next-generation Polonius borrow-checking algorithm, which should scale better than the previous datalog implementation, continuing from the 2024h2 goal.

Motivation

Polonius is an improved borrow-checking algorithm that resolves common limitations of the current borrow checker and is needed to support future patterns such as "lending iterators" (see #92985). Its model also prepares us for further improvements in the future.
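The "lending iterator" pattern can be written with today's generic associated types; straightforward consumption compiles, but more involved uses (for example, returning the last lent item out of a loop) are rejected by the current borrow checker, which is what Polonius aims to fix. A minimal sketch of the pattern (names invented for illustration):

```rust
// A lending iterator hands out items that borrow from the iterator
// itself, so only one item can be alive at a time.
trait LendingIterator {
    type Item<'a>
    where
        Self: 'a;
    fn next(&mut self) -> Option<Self::Item<'_>>;
}

// Lends ever-growing prefixes of its buffer.
struct Prefixes {
    buf: Vec<u8>,
    pos: usize,
}

impl LendingIterator for Prefixes {
    type Item<'a> = &'a [u8] where Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>> {
        if self.pos >= self.buf.len() {
            return None;
        }
        self.pos += 1;
        Some(&self.buf[..self.pos])
    }
}

fn main() {
    let mut it = Prefixes { buf: vec![1, 2, 3], pos: 0 };
    let mut lens = Vec::new();
    // Simple consumption like this compiles under NLL today; holding a
    // lent item across further `next` calls does not.
    while let Some(p) = it.next() {
        lens.push(p.len());
    }
    assert_eq!(lens, vec![1, 2, 3]);
}
```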

Some support exists on nightly, but this older prototype has no path to stabilization due to scalability issues. We need an improved architecture and implementation to fix these issues.

The next six months

  • Complete the ongoing work to land polonius on nightly

The "shiny future" we are working towards

Stable support for Polonius.

Ownership and team asks

Owner: lqd

Other support provided by Amanda Stjerna as part of her PhD.

TaskOwner(s) or team(s)Notes
Design reviewNiko Matsakis
ImplementationRémy Rakic, Amanda Stjerna
Standard reviewsTeam typesMatthew Jasper

Support needed from the project

We expect most support to be needed from the types team, for design, reviews, interactions with the trait solver, and so on. We expect Niko Matsakis, leading the polonius working group and design, to provide guidance and design time, and Michael Goulet and Matthew Jasper to help with reviews.

Outputs and milestones

Outputs

Nightly implementation of polonius that passes NLL problem case #3 and accepts lending iterators (#92985).
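For reference, NLL problem case #3 is the classic "conditional return of a borrow" pattern. The key `22` below is an arbitrary example value:

```rust
use std::collections::HashMap;

// The natural single-lookup version is rejected by today's borrow
// checker even though it is sound; Polonius is expected to accept it:
//
// fn get_or_insert(map: &mut HashMap<u32, String>) -> &String {
//     match map.get(&22) {
//         Some(v) => v,
//         None => {
//             map.insert(22, String::from("hi"));
//             &map[&22]
//         }
//     }
// }

// The workaround required on stable today: check, then look up again.
fn get_or_insert(map: &mut HashMap<u32, String>) -> &String {
    if !map.contains_key(&22) {
        map.insert(22, String::from("hi"));
    }
    &map[&22]
}

fn main() {
    let mut m = HashMap::new();
    assert_eq!(get_or_insert(&mut m), "hi");
}
```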

Performance should be reasonable enough that we can run the full test suite, do crater runs, and test it on CI, without significant slowdowns. We do not expect to be production-ready yet by then, and therefore the implementation would still be gated under a nightly -Z feature flag.

As our model is a superset of NLLs, we expect little to no diagnostics regressions, but improvements would probably still be needed for the new errors.

Milestones

Note: some of these are currently being worked on as part of the 2024h2 goal, and could be completed before the 2025h1 period.

MilestoneContributorNotes
Factoring out higher-ranked concerns from the main pathAmanda Stjerna
↳ [x] rewrite invalid universe constraints with outlives 'static constraintsPR 123720
↳ [ ] completely remove placeholdersin progress PR 130227
Location-sensitive prototype on nightlyRémy Rakicin progress
↳ [x] create structures for location-dependent outlives constraints
↳ [x] build new constraint graph from typeck constraints and liveness constraints
↳ [x] update NLLs for required changes to local & region liveness, loan liveness & loan scopes, (possibly unreachable) kills, bidirectional traversal & active loans
↳ [ ] limit regressions about diagnostics when using the new constraints on diagnostics tailored to the old constraints
↳ [ ] land on nightly under a -Z flag
[x] Debugging / dump tool for analysis of location-sensitive analysisRémy Rakic
[ ] Tests and validation of location-sensitive PoloniusRémy Rakic
↳ [ ] make the full test suite passin progress
↳ [ ] do a crater run for assertions and backwards-compatibility
↳ [ ] expand test suite with tests about the new capabilities
[ ] Location-sensitive pass on nightly, tested on CIRémy Rakic

Secure quorum-based cryptographic verification and mirroring for crates.io

Metadata
Point of contact@walterhpearce
Teamscrates.io, cargo, infra
StatusProposed

Summary

Within 6 months, we will provide preliminary infrastructure to cryptographically verify the crates.io repository and experimental mirrors of it. This will include a chain-of-trust to the Rust Project via a quorum-based mechanism, along with methods to verify individual Rust crates and their index entries, as well as the index and the artifacts as a whole.

Motivation

Rustaceans need to be able to download crates and know that they're getting the crate files that were published to crates.io without modification. Rustaceans everywhere should be able to use local mirrors of crates.io, such as geographically distributed mirrors or mirrors within infrastructure (e.g. CI) that they're using.

The status quo

Currently cargo and crates.io provide no cryptographic security for the index or crates. The only verification which occurs is via HTTPS validation of the URLs and tamperable hashes within the index. This provides assurance that cargo is talking to crates.io, but does not allow for the possibility of secure mirroring nor protects against compromise or tampering with the index (either at rest or in transit).

There are places where Rust is difficult to use right now. Using Cargo with crates.io works well for Rustaceans with unfirewalled access to high speed Internet, but not all are so lucky. Some are behind restrictive firewalls which they are required to use. Some don't have reliable access to the Internet. In cases like these, we want to support mirrors of crates.io in a secure way that provides cryptographic guarantees that they are getting the same packages as are provided by the Rust Project, without any risk of tampering.

Another reason for wanting to be able to better support mirrors is to address cost pressures on Rust. Approximately half of Rust release and crate traffic is from CI providers. Being able to securely distribute Rust crates from within CI infrastructure would be mutually beneficial, since it would both allow the Rust Foundation to reallocate budget to other uses and would make Rust CI actions faster and more reliable on those platforms.

Finally, supply chain security is a growing concern, particularly among corporate and government users of Rust. The Log4j vulnerability brought much greater attention to the problems that can occur when a single dependency nested arbitrarily deep in a dependency graph has a critical vulnerability. Many of these users are putting significant resources into better understanding their dependencies, which includes being able to attest that their dependencies verifiably came from specific sources like crates.io.

The next 6 months

We would like to have a working production signing pipeline for all crates published to crates.io, which can be verified back to the Rust Project. The leadership council will have selected a trusted root quorum for the project, and that quorum will have completed their first signing ceremony. Crates.io will have integrated automatic signing of published crates into their pipeline and the signatures will be included in the index. Finally, we'll provide some method for end users to verify these signatures (ideally in cargo, but at a minimum as a cargo subcommand for proof-of-concept). We'll use that infrastructure to demonstrate how a mirror could function.
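The quorum rule itself is simple to state: a signed artifact is accepted only if at least k distinct trusted root keys have produced valid signatures over it. A stdlib-only sketch of that check (all names invented for illustration; the real design is specified in RFC 3724):

```rust
use std::collections::HashSet;

// `signer_ids` are the key IDs whose signatures have already been
// cryptographically validated; duplicates and untrusted keys must not
// count toward the threshold.
fn quorum_satisfied(
    signer_ids: &[&str],
    trusted_roots: &HashSet<&str>,
    threshold: usize,
) -> bool {
    let distinct: HashSet<&str> = signer_ids
        .iter()
        .copied()
        .filter(|id| trusted_roots.contains(id))
        .collect();
    distinct.len() >= threshold
}

fn main() {
    let trusted: HashSet<&str> =
        ["alice", "bob", "carol", "dave", "erin"].into_iter().collect();
    // 3-of-5 quorum: duplicate signatures from one key count once, and
    // signatures from keys outside the root set count not at all.
    assert!(quorum_satisfied(&["alice", "bob", "carol"], &trusted, 3));
    assert!(!quorum_satisfied(&["alice", "alice", "bob"], &trusted, 3));
    assert!(!quorum_satisfied(&["alice", "bob", "mallory"], &trusted, 3));
}
```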

The "shiny future" we are working towards

In the future, we intend to provide production mirroring capabilities, and some mechanism for automatic mirror discovery. Cargo should be able to automatically discover and use mirrors provided within CI infrastructure, within companies, or within geographic regions, and cryptographically verify that those mirrors are providing unmodified crates and indexes from crates.io.

We'll extend this cryptographic verification infrastructure to rustup-distributed Rust releases and nightly versions, and support mirroring of those as well.

We'll provide cryptographic verification of our GitHub source repositories, and some demonstration of how to verify mirrors of those repositories.

We hope to have follow-up RFCs which will enable authors to generate their own quorums for author and organization level signing and validation of the crates they own.

We'll add support for similar cryptographic security for third-party crate repositories.

Ownership and team asks

TaskOwner(s) or team(s)Notes
Inside Rust blog post about staging deployment@walterhpearce
Top-level Rust blog post on production deployment@walterhpearce

Quorum-based cryptographic infrastructure (RFC 3724)

TaskOwner(s) or team(s)Notes
Further revisions to RFC@walterhpearce
RFC decisionTeam cargo crates-io infra
Implementation and staging deployment@walterhpearce, crates-io, infra
MiscellaneousTeam leadership-councilSelect root quorum
Deploy to productionTeam crates-io infra

Draft RFC for mirroring crates.io via alternate repositories

TaskOwner(s) or team(s)Notes
Discussion and moral supportTeam cargo
Author RFC@walterhpearce Josh Triplett
Proof of concept technical experiments@walterhpearce


Stabilize public/private dependencies

Metadata
Point of contactEd Page
Teamscargo, compiler
StatusProposed for mentorship

Summary

Find an MVP for stabilization and move it forward.

Motivation

This will allow users to tell rustc and Cargo which dependencies are private:

  • Help users catch ways they unexpectedly expose their implementation details
  • Help tooling better identify what constitutes a crate's API

The status quo

RFC #1977 (https://github.com/rust-lang/rfcs/pull/1977) has been superseded by RFC #3516 (https://github.com/rust-lang/rfcs/pull/3516) to reduce complexity on the Cargo side and help get this over the line. However, there is still a lot of complexity on the compiler side to get this right (rust#3516, rust#119428), keeping this feature in limbo.
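For context, the nightly interface from RFC #3516 looks roughly like this in a Cargo.toml (unstable; the syntax may change before stabilization, and the crate name here is just an example):

```toml
# Cargo.toml (nightly only)
cargo-features = ["public-dependency"]

[package]
name = "example"
version = "0.1.0"
edition = "2021"

[dependencies]
# Types from a `public = true` dependency may appear in this crate's
# own public API; other dependencies default to private, and leaking
# them is what the compiler lint is meant to catch.
serde = { version = "1", public = true }
```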

The next 6 months

Work with compiler to identify a minimal subset of functionality for what the lint can do and close out the remaining stabilization tasks.

The "shiny future" we are working towards

Design axioms

  • False negatives are likely better than false positives

Ownership and team asks

Here is a detailed list of the work to be done and who is expected to do it. This table includes the work to be done by contributors and the work asked of Rust teams (subject to approval by the team in an RFC/FCP); rows marked Help wanted do not yet have an owner.

TaskOwner(s) or team(s)Notes
Discussion and moral supportTeam cargo, compiler
Work through #3516, #119428Help wanted
Finish any remaining tasksHelp wanted
MentoringEd Page
Stabilization reportHelp wanted


Frequently asked questions

Unsafe Fields

Metadata
Point of contactJack Wrenn
Teamslang
StatusProposed

Summary

Design and implement a mechanism for denoting when fields carry library safety invariants.

Motivation

The absence of a mechanism for denoting the presence of library safety invariants increases both the risk of working with unsafe code and the difficulty of evaluating its soundness.

The status quo

Presently, Rust lacks mechanisms for denoting when fields carry library safety invariants, and for enforcing extra care around their use. Consequently, to evaluate the soundness of unsafe code (i.e., code which relies on safety invariants being upheld), it is not enough to check the contents of unsafe blocks — one must check all places (including safe contexts) in which safety invariants might be violated (see The Scope of Unsafe).

For example, consider this idealized Vec:

use std::mem::MaybeUninit;

pub struct Vec<T> {
    data: Box<[MaybeUninit<T>]>,
    len: usize,
}

Although len is bound by a safety invariant, it is trivial to violate its invariant in entirely safe code:

impl<T> Vec<T> {
    pub fn evil(&mut self) {
        self.len += 2;
    }
}

Rust cannot enforce that modifications of len require unsafe, because the language does not provide the programmer a way of communicating to the compiler that len carries safety invariants.
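As a point of comparison, the closest approximation available today is to keep the invariant-carrying field private and route mutation through an unsafe method, so code outside the module must write unsafe to change len. This is a sketch with invented names, not part of the proposal; unsafe fields would let the language enforce this directly, without relying on a module boundary:

```rust
pub mod guarded {
    use std::mem::MaybeUninit;

    pub struct MyVec<T> {
        data: Box<[MaybeUninit<T>]>,
        len: usize,
    }

    impl<T> MyVec<T> {
        pub fn with_capacity(cap: usize) -> Self {
            let data = (0..cap)
                .map(|_| MaybeUninit::uninit())
                .collect::<Vec<_>>()
                .into_boxed_slice();
            MyVec { data, len: 0 }
        }

        pub fn len(&self) -> usize {
            self.len
        }

        /// # Safety
        /// `data[i]` must be initialized for all `i < new_len`, and
        /// `new_len` must not exceed the allocated capacity.
        pub unsafe fn set_len(&mut self, new_len: usize) {
            self.len = new_len;
        }
    }
}

fn main() {
    let mut v = guarded::MyVec::<u8>::with_capacity(4);
    // `v.len = 2;` would not compile here: `len` is private.
    unsafe { v.set_len(0) };
    assert_eq!(v.len(), 0);
}
```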

The "shiny future" we are working towards

Rust programmers will use the unsafe keyword to denote fields that carry library safety invariants; e.g.:

struct Vec<T> {
    // SAFETY: The elements `data[i]` for
    // `i < len` are in a valid state.
    unsafe data: Box<[MaybeUninit<T>]>,
    unsafe len: usize,
}

Rust will require that usages of unsafe fields which could violate their safety invariants must only occur within unsafe contexts.

The next 6 months

In the next six months, we will iterate on the design and implementation of unsafe fields. An RFC for unsafe fields will be accepted, and a candidate implementation will — at the very least — be ready to enter the stabilization process.

Design axioms

The design of unsafe fields is guided by three axioms:

  1. Unsafe Fields Denote Safety Invariants. A field should be marked unsafe if it carries arbitrary library safety invariants with respect to its enclosing type.
  2. Unsafe Usage is Always Unsafe. Uses of unsafe fields which could violate their invariants must occur in the scope of an unsafe block.
  3. Safe Usage is Usually Safe. Uses of unsafe fields which cannot violate their invariants should not require an unsafe block.

Ownership and team asks

Owner: Jack Wrenn

TaskOwner(s) or team(s)Notes
Discussion and moral supportTeam langZulip
Author RFCJacob PrattRFC3458, Living Design Doc
ImplementationLuca Versari
Standard reviewsTeam compiler
Design meetingTeam lang
RFC decisionTeam lang
Author stabilization reportJack Wrenn


Frequently asked questions

TBD

Use annotate-snippets for rustc diagnostic output

Metadata
Point of contactScott Schafer
Teamscompiler
StatusProposed

Summary

Switch to annotate-snippets for rendering rustc's output, with no loss of functionality or visual regressions.

Motivation

Cargo has been adding its own linting system, using annotate-snippets to try to match Rust's output. This has led to duplicate code between the two projects, increasing the overall maintenance load. Having one renderer that produces Rust-like diagnostics will ensure a consistent style between Rust and Cargo, as well as any other tools with similar requirements like miri, and should lower the overall maintenance burden by rallying behind a single unified solution.

The status quo

Currently rustc has its own Emitter that encodes the theming properties of compiler diagnostics. It has to handle all of the intricacies of terminal support (optional color, terminal width querying and adapting of output), layout (span and label rendering logic), and the presentation of different levels of information. Any tool that wants to approximate rustc's output for its own purposes needs to use a third-party tool that diverges from rustc's output, like annotate-snippets or miette. Any improvements or bugfixes contributed to those libraries are not propagated back to rustc. Because the emitter is part of the rustc codebase, the barrier to entry for new contributors is kept artificially higher than it otherwise would be.

annotate-snippets is already part of the rustc codebase, but it is disabled by default, doesn't have extensive testing and there's no way of enabling this output through cargo, which limits how many users can actually make use of it.

The next 6 months

  • annotate-snippets rendered output reaches full parity (modulo reasonable non-significant divergences) with rustc's output
  • A call for testing is made to the community to gather feedback on annotate-snippets

The "shiny future" we are working towards

The outputs of rustc and cargo are fully using annotate-snippets, with no regressions to the rendered output. annotate-snippets grows its feature set, like support for more advanced rendering formats or displaying diagnostics with more than ASCII-art, independently of the compiler development cycle.

Design axioms

  • Match rustc's output: The output of annotate-snippets should match rustc, modulo reasonable non-significant divergences
  • Works for Cargo (and other tools): annotate-snippets is meant to be used by any project that would like "Rust-style" output, so it should be designed to work with any project, not just rustc.

Ownership and team asks

Owner: Esteban Kuber, Scott Schafer

Reach output parity of rustc/annotate-snippets

TaskOwner(s) or team(s)Notes
add suggestionsScott Schafer
Port a subset of rustc's UI testsScott Schafer
address divergencesScott Schafer

Initial use of annotate-snippets

TaskOwner(s) or team(s)Notes
update annotate-snippets to latest version
teach cargo to pass annotate-snippets flagEsteban Kuber
add ui test mode comparing new output
switch default nightly rustc output

Production use of annotate-snippets

TaskOwner(s) or team(s)Notes
switch default rustc output
release notes
switch ui tests to only check new output
Dedicated reviewerTeam compilerEsteban Kuber will be the reviewer

Standard reviews

TaskOwner(s) or team(s)Notes
Standard reviewsTeam compiler

Top-level Rust blog post inviting feedback

TaskOwner(s) or team(s)Notes
Top-level Rust blog post inviting feedback

build-std

Metadata
Point of contactDavid Wood
Teamscargo
StatusProposed

Arm's Rust team is David Wood, Adam Gemmell, Jacob Bramley, Jamie Cunliffe and James. This goal will be primarily worked on by Adam Gemmell, but David Wood can always be contacted for updates.

Summary

Write an RFC for a minimum viable product (MVP) of build-std which has the potential to be stabilised once implemented (as opposed to the currently implemented MVP which is only suitable for experimentation and testing), and then implement it.

Motivation

build-std is a well-known unstable feature in Cargo which enables Cargo to rebuild the standard library. This is useful for a variety of reasons:

  • Building the standard library for targets which do not ship with a pre-compiled standard library.
  • Optimising the standard library for known hardware, such as with non-baseline target features or options which optimise for code size. This is a common use case for embedded developers.
  • Re-building the standard library with different configuration options (e.g. changing the optimisation level, using flags which change the ABI, or which add additional exploit mitigations).
  • Re-building the standard library with different cfgs (e.g. disabling backtrace in std), to the extent that such configurations are supported by the standard library.
  • Unblocking stabilisation of various compiler flags which change the ABI, add additional exploit mitigations (such as -Zsanitizers=cfi or -Zbranch-protection), or otherwise only make sense when the entire program, including std, is compiled with the flag; such flags cannot be used properly without the ability to rebuild std.

These features are more useful for some subsets of the Rust community, such as embedded developers where optimising for size can be more important and where the targets often don't ship with a pre-compiled std.
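For reference, the current perma-unstable MVP can be driven on nightly either via -Zbuild-std on the command line or from .cargo/config.toml. A minimal sketch of the latter (these keys are unstable and shown for illustration only; the stable interface is exactly what the MVP RFC would propose):

```toml
# .cargo/config.toml -- nightly only; requires the rust-src rustup component.
# These keys belong to the unstable -Zbuild-std surface and may change.
[unstable]
build-std = ["core", "alloc"]        # rebuild just core and alloc from source

[build]
target = "thumbv7em-none-eabihf"     # a target with no pre-built std
```

With this config in place, a plain `cargo +nightly build` rebuilds the listed crates from source for the configured target.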

The fifty-thousand-foot view of the work involved in this feature is:

  • Having the standard library sources readily available that match the compiler.
  • Being able to build those sources without using a nightly toolchain, which has many possible solutions.
  • Having a blessed way to build at least core without Cargo, which some users like Rust for Linux would like.
    • This would be optional but may be a side-effect of whatever mechanism for build-std the MVP RFC eventually proposes.
  • Being able to tell the compiler to use the resulting prebuilt standard library sources instead of the built-in standard library, in a standard way.
  • Integrating all of the above into Cargo.
  • Making sure all of this works for targets that don't have a pre-built std.

Rust for Linux and some other projects have a requirement to build core themselves without Cargo (ideally using the same stable compiler they use for the rest of their project), which is a shared requirement with build-std, as whatever mechanism these projects end up using could be re-used by the implementation of build-std and vice-versa.

The status quo

build-std is currently an unstable feature in Cargo which hasn't seen much development or progress since its initial development in 2019/2020. There are a variety of issues in the wg-cargo-std-aware repository which vary from concrete bugs in the current experimental implementation to vague "investigate and think about this" issues, which make the feature difficult to make progress on.

Some of the work required for this exists in the current perma-unstable -Zbuild-std implementation, which may be re-used if appropriate.

Prior to the submission of this goal, this goal has been discussed with the cargo team and leads of the compiler and library teams, ensuring that this goal's owners have liaisons from stakeholder teams and the support of the primary teams involved in the design and implementation.

The next 6 months

There are two primary objectives of this goal in its first six months:

  • Firstly, we will write an MVP RFC that will limit the scope of the feature and make it easier to make progress on build-std.

    It is intended that this RFC will summarize all of the previous discussion, use cases and feedback on build-std. In this documenting of the current state of build-std, this RFC will be well-positioned to propose which use cases should and should not be resolved by build-std (for the final feature, not just this MVP). For example, this RFC will decide whether patching or modifying std is a supported use case for build-std.

    For those use cases solved by build-std, the RFC will select a subset for the new MVP of build-std. It is intended that this MVP be sufficiently useful and complete that it could be stabilised. The design of the MVP will be forward-compatible with all of the other use cases that build-std is intended to solve.

    It is hoped that this RFC will demonstrate a thorough understanding of the design space of build-std and give the responsible upstream teams confidence in our ownership of this feature, enabling those teams to make a fully informed decision on any proposals made.

  • Next, after and conditional on acceptance of this RFC, we will proceed with its implementation.

The "shiny future" we are working towards

After the approval and implementation of the MVP RFC, there will naturally be follow-up use cases which can be designed and implemented to complete the build-std feature.

Design axioms

  • Enabling build-std without changing any compilation options or configuration should produce an equivalent library to that distributed by the project.
  • Avoid precluding future extensions to build-std.
  • build-std should allow std/alloc/core to be treated more like other dependencies than currently.
    • This represents a general move away from treating std/alloc/core as a special case.

Ownership and team asks

| Task | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Discussion and moral support | Team cargo | |
| Author RFC | Adam Gemmell | |
| Implementation | Adam Gemmell | |
| Standard reviews | Team cargo | |

Definitions

Definitions for terms used above:

  • Discussion and moral support is the lowest level offering, basically committing the team to nothing but good vibes and general support for this endeavor.
  • Author RFC and Implementation means actually writing the code, document, whatever.
  • Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
  • RFC decisions means reviewing an RFC and deciding whether to accept.
  • Org decisions means reaching a decision on an organizational or policy matter.
  • Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
  • Stabilizations means reviewing a stabilization and report and deciding whether to stabilize.
  • Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
  • Prioritized nominations refers to prioritized lang-team response to nominated issues, with the expectation that there will be some response from the next weekly triage meeting.
  • Dedicated review means identifying an individual (or group of individuals) who will review the changes, as they're expected to require significant context.
  • Other kinds of decisions:
    • Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
    • Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
    • Library API Change Proposal (ACP) describes a change to the standard library.

Frequently asked questions

None yet.

rustc-perf improvements

| Metadata | |
| --- | --- |
| Point of contact | David Wood |
| Teams | infra, compiler |
| Status | Proposed |

Arm's Rust team is David Wood, Adam Gemmell, Jacob Bramley, Jamie Cunliffe and James. This goal will be primarily worked on by James, but David Wood can always be contacted for updates.

Summary

Add support to rustc-perf for distributed benchmarking across multiple platforms and configurations.

Motivation

Improving the performance of the Rust compiler is a long-standing objective of the Rust project and compiler team, which has led to the development of the project's performance tracking infrastructure. While the performance tracking infrastructure has seen many improvements in recent years, it cannot scale to support multiple benchmarking machines simultaneously.

There are increasing demands on the performance infrastructure that require more scalable benchmarking: benchmarking the parallel compiler with different thread counts, different codegen backends, or different architectures.

The status quo

rustc-perf does not currently support scheduling and accepting benchmarks from multiple machines, requiring a non-trivial rearchitecting to do so. None of our policies around performance triage and handling regressions currently consider what to do in case of conflicting benchmarking results.

The next 6 months

rustc-perf's maintainers have written a rough draft of the work required to support multiple collectors, which will form the basis of the work completed during this goal. After aligning on an implementation plan with the upstream maintainers of rustc-perf and ensuring that the implementation can proceed while placing as little burden on the infra team as possible, the work will largely consist of:

  1. Establishing a parallel testing infrastructure to avoid any disruption to the live rustc-perf service
  2. Planning and implementing the refactorings to the rustc-perf infrastructure needed to enable distributed benchmarking
  3. Writing tests for the new distributed rustc-perf infrastructure, so that future development can avoid breakage
  4. Changing the database schema to support receiving results from multiple collectors (both distinguishing between results from each configuration and accepting multiple simultaneous writes)
  5. Updating the queries and statistics used in summarising collected performance data and identifying outliers
  6. Updating perf.rust-lang.org to display performance data from multiple collectors and make appropriate comparisons (within a configuration, not between configurations)
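The schema and query changes above boil down to keying every result by its collector and only comparing within a configuration, never across configurations. A hypothetical sketch of that invariant (the names and row shape are invented and do not reflect rustc-perf's actual schema):

```python
from collections import defaultdict

# Hypothetical result rows: (collector, benchmark, artifact, wall_time_s).
# rustc-perf's real schema differs; this only illustrates comparing
# within a collector's configuration, never across collectors.
results = [
    ("x86_64-linux",  "regex-opt", "before", 2.00),
    ("x86_64-linux",  "regex-opt", "after",  2.10),
    ("aarch64-linux", "regex-opt", "before", 3.00),
    ("aarch64-linux", "regex-opt", "after",  2.85),
]

def relative_change(rows):
    """Percent change per (collector, benchmark), computed per collector."""
    by_key = defaultdict(dict)
    for collector, bench, artifact, secs in rows:
        by_key[(collector, bench)][artifact] = secs
    return {
        key: (v["after"] - v["before"]) / v["before"] * 100.0
        for key, v in by_key.items()
    }

# The same benchmark can regress on one platform while improving on
# another, which is exactly the triage-policy question this goal raises.
print(relative_change(results))
```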

As this work nears completion, this goal's owners will collaborate with the compiler team and its performance working group to extend and update the compiler team's triage and regression handling policies. It is important that there are clear guidelines and procedures for circumstances where a benchmark improves on one platform and regresses on another, or how to weigh benchmark results from unstable features or configurations (e.g. -Zthreads=2) vs the primary benchmarking platforms and configurations.

The "shiny future" we are working towards

Following the completion of this goal, it is anticipated that new platforms and configurations will be added to rustc-perf, but this is unlikely to warrant further goals.

Ownership and team asks

| Task | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Discussion and moral support | Team infra | |
| Improve rustc-perf implementation work | James | |
| Standard reviews | Team infra | |
| Deploy to production | Team infra | rustc-perf improvements, testing infrastructure |
| Draft performance regression policy | David Wood | |
| Policy decision | Team compiler | Update performance regression policy |
| Inside Rust blog post announcing improvements | David Wood | |

Definitions

Definitions for terms used above:

  • Discussion and moral support is the lowest level offering, basically committing the team to nothing but good vibes and general support for this endeavor.
  • Author RFC and Implementation means actually writing the code, document, whatever.
  • Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
  • RFC decisions means reviewing an RFC and deciding whether to accept.
  • Org decisions means reaching a decision on an organizational or policy matter.
  • Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
  • Stabilizations means reviewing a stabilization and report and deciding whether to stabilize.
  • Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
  • Prioritized nominations refers to prioritized lang-team response to nominated issues, with the expectation that there will be some response from the next weekly triage meeting.
  • Dedicated review means identifying an individual (or group of individuals) who will review the changes, as they're expected to require significant context.
  • Other kinds of decisions:
    • Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
    • Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
    • Library API Change Proposal (ACP) describes a change to the standard library.

Frequently asked questions

None yet.

Not accepted

This section contains goals that were proposed but ultimately not accepted, either for want of resources or consensus. In many cases, narrower versions of these goals were proposed instead.

| Goal | Point of contact | Progress |
| --- | --- | --- |

Goal motivations

The motivation section for a goal should sell the reader on why this is an important problem to address.

The first paragraph should begin with a concise summary of the high-level goal and why it's important. This is a kind of summary that helps the reader to get oriented.

The status quo section then goes into deeper detail on the problem and how things work today. It should answer the following questions

  • Who is the target audience? Is it a particular group of Rust users, such as those working in a specific domain? Contributors?
  • What do these users do now when they have this problem, and what are the shortcomings of that?

The next few steps can explain how you plan to change that and why these are the right next steps to take.

Finally, the shiny future section can put the goal into a longer term context. It's ok if the goal doesn't have a shiny future beyond the next few steps.

Invited goals

Invited goals are accepted goals whose main implementation tasks have no assigned owner. Teams accept invited goals to represent work that they feel is important but where nobody is available to do the work. These goals are a great place to get started in contributing to Rust. If you see an invited goal you think you could help with, reach out to the point of contact listed on the goal: they are volunteering to mentor.

Goal point of contacts

Every goal has a single point of contact. This is the person responsible for authoring updates and generally tracking the status of the goal. They will get regular pings to provide updates.

We require a single point of contact to avoid confusion. But of course it is possible for that person to delegate the actual authoring of updates to others.

For simple goals, the point of contact is usually the person who owns the implementation.

Owners

To be fully accepted, each goal needs a set of owners: the people who will do the work. This is often a single person or a small group. Each goal has an associated list of tasks to be completed over the next six months; to be accepted, most of the tasks need to have designated owners.

To be fully accepted, a goal must have a designated owner. This is ideally a single, concrete person, though it can be a small group.

Goals without owners can only be accepted in provisional form.

Owners keep things moving

Owners are the ones ultimately responsible for the goal being completed. They stay on top of the current status and make sure that things keep moving. When there is disagreement about the best path forward, owners are expected to make sure they understand the tradeoffs involved and then to use their good judgment to resolve it in the best way.

When concerns are raised with their design, owners are expected to embody not only the letter but also the spirit of the Rust Code of Conduct. They treasure dissent as an opportunity to improve their design. But they also know that every good design requires compromises and tradeoffs and likely cannot meet every need.

Owners own the proposal, teams own the decision

Even though owners are the ones who author the proposal, Rust teams are the ones to make the final decision. Teams can ultimately overrule an owner: they can ask the owner to come back with a modified proposal that weighs the tradeoffs differently. This is right and appropriate, because teams are the ones we recognize as having the best broad understanding of the domain they maintain. But teams should use their power judiciously, because the owner is typically the one who understands the tradeoffs for this particular goal most deeply.

Owners report regularly on progress

One of the key responsibilities of the owner is regular status reporting. Each active project goal is given a tracking issue. Owners are expected to post updates on that tracking issue when they are pinged by the bot. The project will be posting regular blog posts that are generated in a semi-automated fashion from these updates: if the post doesn't have new information, then we will simply report that the owner has not provided an update. We will also reach out to owners who are not providing updates to see whether the goal is in fact stalled and should be removed from the active list.

Ownership is a position of trust

Giving someone ownership of a goal is an act of faith — it means that we consider them to be an individual of high judgment who understands Rust and its values and will act accordingly. This implies that we are unlikely to take a goal if the owner is not known to the project. They don't necessarily have to have worked on Rust, but they have to have enough of a reputation that we can evaluate whether they're going to do a good job.

The project goal template includes a number of elements designed to increase trust:

  • The "shiny future" and design axioms give a "preview" of how the owner is thinking about the problem and the way that tradeoffs will be resolved.
  • The milestones section indicates the rough order in which they will approach the problem.

Design axioms

Each project goal includes a design axioms section. Design axioms capture the guidelines you will use to drive your design. Since goals generally come early in the process, the final design is not known -- axioms are a way to clarify the constraints you will be keeping in mind as you work on your design. Axioms will also help you operate more efficiently, since you can refer back to them to help resolve tradeoffs more quickly.

Examples

Axioms about axioms

  • Axioms capture constraints. Axioms capture the things you are trying to achieve. The goal ultimately is that your design satisfies all of them as much as possible.
  • Axioms express tradeoffs. Axioms are ordered, and -- in case of conflict -- the axioms that come earlier in the list take precedence. Since axioms capture constraints, this doesn't mean you just ignore the axioms that take lower precedence, but it usually means you meet them in a "less good" way. For example, maybe consider a lint instead of a hard error?
  • Axioms should be specific to your goal. Rust has general design axioms that apply to the project as a whole; the axioms you list here should capture the constraints particular to your goal.
  • Axioms are short and memorable. The structure of an axiom should begin with a short, memorable bolded phrase -- something you can recite in meetings. Then a few sentences that explain in more detail or elaborate.

Axioms about the project goal program

  • Goals are a contract. Goals are meant to be a contract between the owner and project teams. The owner commits to doing the work. The project commits to supporting that work.
  • Goals aren't everything, but they are our priorities. Goals are not meant to cover all the work the project will do. But goals do get prioritized over other work to ensure the project meets its commitments.
  • Goals cover a problem, not a solution. As much as possible, the goal should describe the problem to be solved, not the precise solution. This also implies that accepting a goal means the project is committing that the problem is a priority: we are not committing to accept any particular solution.
  • Owners are first-among-equals. Rust endeavors to run an open, participatory process, but ultimately achieving any concrete goal requires someone (or a small set of people) to take ownership of that goal. Owners are entrusted to listen, take broad input, and steer a well-reasoned course in the tradeoffs they make towards implementing the goal. But this power is not unlimited: owners make proposals, but teams are ultimately the ones that decide whether to accept them.
  • To everything, there is a season. While there will be room for accepting new goals that come up during the year, we primarily want to pick goals during a fixed time period and use the rest of the year to execute.

Axioms about Rust itself

Still a work in progress! See the Rust design axioms repository.

Frequently asked questions

Where can I read more about axioms?

Axioms are very similar to approaches used in a number of places...

RFC

The RFC proposing the goal program has been opened. See RFC #3614.

Propose a new goal

Status: Accepting for 2025H1

What steps do I take to submit a goal?

Goal proposals are submitted as pull requests:

  • Fork the GitHub repository and clone it locally
  • Copy the src/TEMPLATE.md to a file like src/2025h1/your-goal-name.md. Don't forget to run git add.
  • Fill out the your-goal-name.md file with details, using the template and other goals as an example.
    • The goal text does not have to be complete. It can be missing details.
  • Open a PR.

Who should propose a goal?

Opening a goal is an indication that you (or your company, etc.) are willing to put up the resources needed to make it happen, at least if you get the indicated support from the teams. These resources are typically development time and effort, but they could be funding (in that case, we'd want to identify someone to take up the goal). If you pass that bar, then by all means, yes, open a goal.

Note though that controversial goals are likely to not be accepted. If you have an idea that you think people won't like, then you should find ways to lower the ask of the teams. For example, maybe the goal should be to perform experiments to help make the case for the idea, rather than jumping straight to implementation.

Can I still do X, even if I don't submit a goal for it?

Yes. Goals are not mandatory for work to proceed. They are a tracking mechanism to help stay on course.

TEMPLATE (replace with title of your goal)

Instructions: Copy this template to a fresh file with a name based on your plan. Give it a title that describes what you plan to get done in the next 6 months (e.g., "stabilize X" or "nightly support for X" or "gather data about X"). Feel free to replace any text with anything, but there are placeholders designed to help you get started.

The point of contact is the person responsible for providing updates.

The status should be either Proposed (if you have owners) or Proposed, Invited (if you do not yet have owners).

| Metadata | |
| --- | --- |
| Point of contact | must be a single Github username like Deleted user |
| Teams | Names of teams being asked to commit to the goal |
| Status | Proposed |

Summary

Short description of what you will do over the next 6 months.

Motivation

Begin with a few sentences summarizing the problem you are attacking and why it is important.

The status quo

Elaborate in more detail about the problem you are trying to solve. This section is making the case for why this particular problem is worth prioritizing with project bandwidth. A strong status quo section will (a) identify the target audience and (b) give specifics about the problems they are facing today. Sometimes it may be useful to start sketching out how you think those problems will be addressed by your change, as well, though it's not necessary.

The next 6 months

Sketch out the specific things you are trying to achieve in this goal period. This should be short and high-level -- we don't want to see the design!

The "shiny future" we are working towards

If this goal is part of a larger plan that will extend beyond this goal period, sketch out the goal you are working towards. It may be worth adding some text about why these particular goals were chosen as the next logical step to focus on.

This text is NORMATIVE, in the sense that teams should review this and make sure they are aligned. If not, then the shiny future should be moved to frequently asked questions with a title like "what might we do next".

However, for most proposals, alignment on exact syntax should not be required to start a goal, only alignment on the problem and the general sketch of the solution. This may vary for goals that are specifically about syntax, such as ergonomic improvements.

Design axioms

This section is optional, but including design axioms can help you signal how you intend to balance constraints and tradeoffs (e.g., "prefer ease of use over performance" or vice versa). Teams should review the axioms and make sure they agree. Read more about design axioms.

Ownership and team asks

This section lists out the work to be done and the asks from Rust teams. Every row in the table should either correspond to something done by a contributor or something asked of a team.

For items done by a contributor, list the contributor, or ![Help wanted][] if you don't yet know who will do it. The owner is ideally identified as a github username like [Deleted user][].

For items asked of teams, list Team and the name of the team, e.g. ![Team][] [compiler] or ![Team][] [compiler], [lang] (note the trailing [] in ![Team][], that is needed for markdown to parse correctly). For team asks, the "task" must be one of the tasks defined in rust-project-goals.toml or cargo rpg check will error.

| Task | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Discussion and moral support | Team cargo | |
| Do the work | owner | |

Stabilize feature X

If you have a complex goal, you can include subsections for different parts of it, each with their own table. Most goals do not need this and can make do with a single table. The table in this section also lists the full range of "asks" for a typical language feature; feel free to copy some subset of them into your main table if you are primarily proposing a single feature (note that most features don't need all the entries below).

| Task | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Lang-team experiment | Team lang | allows coding pre-RFC; only for trusted contributors |
| Author RFC | Goal point of contact, typically | |
| Implementation | Goal point of contact, typically | |
| Standard reviews | Team compiler | |
| Design meeting | Team lang | |
| RFC decision | Team lang | |
| Secondary RFC review | Team lang | most features don't need this |
| Author stabilization report | Goal point of contact, typically | |
| Stabilization decision | Team lang | it's rare to author an RFC, implement, AND stabilize in 6 months |

Definitions

Definitions for terms used above:

  • Discussion and moral support is the lowest level offering, basically committing the team to nothing but good vibes and general support for this endeavor.
  • Author RFC and Implementation means actually writing the code, document, whatever.
  • Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
  • RFC decisions means reviewing an RFC and deciding whether to accept.
  • Org decisions means reaching a decision on an organizational or policy matter.
  • Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
  • Stabilizations means reviewing a stabilization and report and deciding whether to stabilize.
  • Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
  • Prioritized nominations refers to prioritized lang-team response to nominated issues, with the expectation that there will be some response from the next weekly triage meeting.
  • Dedicated review means identifying an individual (or group of individuals) who will review the changes, as they're expected to require significant context.
  • Other kinds of decisions:
    • Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
    • Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
    • Library API Change Proposal (ACP) describes a change to the standard library.

Frequently asked questions

What do I do with this space?

This is a good place to elaborate on your reasoning above -- for example, why did you put the design axioms in the order that you did? It's also a good place to put the answers to any questions that come up during discussion. The expectation is that this FAQ section will grow as the goal is discussed and eventually should contain a complete summary of the points raised along the way.

Report status

Every accepted project goal has an associated tracking issue. These are created automatically by the project-goals admin tool. Your job as a project goal point of contact is to provide regular status updates in the form of a comment indicating how things are going. These will be collected into regular blog posts on the Rust blog as well as being promoted in other channels.

Updating the progress bar

When we display the status of goals, we include a progress bar based on your documented plan. We recommend you keep this up to date. You can mix and match any of the following ways to list steps.

Checkboxes

The first option is to add checkboxes into the top comment on the tracking issue. Simply add boxes like * [ ] for a pending item or * [x] for a completed item. The tool will count the number of checkboxes and use that to reflect progress. Your tracking issue will be pre-populated with checkboxes based on the goal doc, but feel free to edit them.

Best practice is to start with a high level list of tasks:

* [ ] Author code
* [ ] Author RFC
* [ ] Accept RFC
* [ ] Test
* [ ] Stabilize

Each time you provide a status update, check off the items that are done and add new, more detailed to-do items that represent your next steps.
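As a rough sketch of how this counting could work (the real project-goals admin tool is written in Rust and may differ in detail):

```python
import re

# Simplified sketch of progress counting from tracking-issue checkboxes.
# Matches both "* [ ]"/"* [x]" and "- [ ]"/"- [x]" styles (an assumption).
BOX = re.compile(r"^\s*[*-] \[( |x)\]", re.MULTILINE)

def checkbox_progress(issue_body: str) -> tuple[int, int]:
    """Return (checked, total) checkboxes found in an issue body."""
    marks = BOX.findall(issue_body)
    return sum(1 for m in marks if m == "x"), len(marks)

body = """\
* [x] Author code
* [x] Author RFC
* [ ] Accept RFC
* [ ] Test
* [ ] Stabilize
"""
print(checkbox_progress(body))  # (2, 5)
```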

Search queries

For larger project goals, it can be more convenient to track progress via github issues. You can do that by removing all the checkboxes from your issue and instead adding a "Tracked issues" line into the metadata table on your tracking issue. It should look like this:

| Metadata      | |
| --------      | --- |
| Point of contact | ... |
| Team(s)       | ... |
| Goal document | ... |
| Tracked issues | [rust-lang/rust label:A-edition-2024 label:C-tracking-issue -label:t-libs](...) |

The first 3 lines should already exist. The last line is the one you have to add. The "value" column should have a markdown link, the contents of which begin with a repo name and then search parameters in Github's format. The tool will conduct the search and count the number of open vs closed issues. The (...) part of the link should be to github so that users can click to do the search on their own.
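A hypothetical sketch of pulling the repository name and search terms out of such a metadata row (illustrative parsing only, not the tool's actual code):

```python
import re

# Hypothetical sketch: extract the repo and GitHub search terms from a
# "Tracked issues" metadata row of the form shown above.
ROW = re.compile(r"\|\s*Tracked issues\s*\|\s*\[([^\s\]]+)\s+([^\]]+)\]")

def parse_tracked_issues(row: str):
    """Return (repo, search_query) or None if the row doesn't match."""
    m = ROW.search(row)
    if m is None:
        return None
    repo, query = m.groups()
    return repo, query.strip()

row = "| Tracked issues | [rust-lang/rust label:A-edition-2024 label:C-tracking-issue -label:t-libs](...) |"
print(parse_tracked_issues(row))
```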

You can find an example on the Rust 2024 Edition tracking issue.

Use "See also" to refer to other tracking issues

If you already have a tracking issue elsewhere, just add a "See also" line into your metadata. The value should be a comma-or-space-separated list of URLs or org/repo#issue github references:

| Metadata      | |
| --------      | --- |
| Point of contact | ... |
| Team(s)       | ... |
| Goal document | ... |
| See also | rust-lang/rust#123 |

We will recursively open up the "see also" issue and extract checkboxes (or search queries / see-also tags) from there.
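A hypothetical sketch of splitting such a "See also" value into individual references (illustrative only; the actual tool may parse this differently):

```python
import re

# Sketch: split a "See also" value into URLs and org/repo#issue references,
# separated by commas and/or whitespace, dropping anything unrecognized.
REF = re.compile(r"^[\w.-]+/[\w.-]+#\d+$")

def parse_see_also(value: str) -> list[str]:
    parts = [p for p in re.split(r"[,\s]+", value.strip()) if p]
    return [p for p in parts if REF.match(p) or p.startswith("https://")]

print(parse_see_also("rust-lang/rust#123, https://github.com/rust-lang/cargo/issues/1"))
```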

Binary issues

If we don't find any of the above, we will consider your issue either 0% done if it is not yet closed or 100% done if it is.

Status update comments

Status updates are posted as comments on the Github tracking issue. You will receive regular pings on Zulip to author status updates periodically. It's a good idea to take the opportunity to update your progress checkboxes as well.

There is no strict format for these updates but we recommend including the following information:

  • What happened since the last update? Were any key decisions made or milestones achieved?
  • What is the next step to get done?
  • Are you blocked on anyone or anything?
  • Is there any opportunity for others to pitch in and help out?

Closing the issue

Closing the tracking issue is a signal that you are no longer working on it. This can be because you've achieved your goal or because you have decided to focus on other things. Also, tracking issues will automatically be closed at the end of the project goal period.

When you close an issue, the state of your checkboxes makes a difference. If they are 100% finished, the goal will be listed as completed. If there are unchecked items, the assumption is that the goal is only partly done, and it will be listed as unfinished. So make sure to check the boxes if the goal is done!

Goals team

The Rust goals program is administered by the Goals team. This document serves as the team charter.

Mission

Our mission is to focus the Rust programming language efforts by running and administering an inclusive and dynamic goals program. We work with the project teams to identify the highest priority items and to make sure that the teams are budgeting adequate time and resources to ensure those items are successful. For new contributors who have an idea they'd like to pursue, we work to provide a welcoming "front door" to Rust, connecting their proposal to the maintainers whose support will be needed to make it reality. For existing maintainers, we help them to document the work they are doing and to find new contributors.

Role and duties of team members

Team members perform some subset of the following roles:

  • Attending short sync meetings.
  • When preparing a new goal slate:
    • Advertising the goal program to teams and soliciting participation
    • Reviewing incoming goal proposals for quality and accuracy
    • Seeking feedback on behalf of outsiders' goals
    • Authoring the RFC and hounding team leads to check their boxes
    • Deciding which goals to propose as flagship goals.
  • During the year:
    • Authoring round-up blog posts highlighting progress
    • Updating and maintaining the web-site
    • Checking in with the goal points of contact that are not reporting progress to see if they need help

Role of the lead

The team lead is the owner of the program, meaning that they are ultimately responsible for ensuring the goals program moves forward smoothly. They perform any and all of the member functions as needed, delegating where possible. In the event of conflicts (e.g., which goals to propose as flagship goals in the RFC), the team lead makes the final decision.

Running the program

So... somebody suckered you into serving as the owner of the goals program. Congratulations! This page will walk you through the process.

Call for proposals

Each goal milestone corresponds to six months, designated in the format YYYYHN, e.g., 2024H2 or 2025H1. To launch a new goal season, you should get started a month or two before the new season starts:

  • For an H1 season, start around mid-October or November of the year before.
  • For an H2 season, start around mid-April or May of the same year.

This is the checklist of steps to start accepting goal proposals:

  • Prepare a Call For Proposals blog post on the Inside Rust blog based on this sample.
    • We use Inside Rust and not the Main blog because the target audience is would-be Rust contributors and maintainers.
  • Update the main README page to indicate that the next round of goals is being accepted.
  • Create a new directory src/YYYYhN, e.g., src/2025h1, with the following files. Note that the sample files below include <!-- XXX --> directives that are detected by the mdbook plugin and replaced with appropriate content automatically.
  • Modify SUMMARY.md to include your new milestone with some text like what is shown below.
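
As a sketch, the directory setup step might look like this on the command line (2025h1 is used as an example milestone; the file names match the SUMMARY.md sample below):

```shell
# Create the directory for the new milestone.
mkdir -p src/2025h1

# Seed the files that SUMMARY.md will link to; their contents come from
# the sample templates elsewhere in this book.
touch src/2025h1/README.md src/2025h1/goals.md src/2025h1/not_accepted.md
```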

Sample SUMMARY.md comments from 2025H1:

# ⏳ 2025H1 goal process

- [Overview](./2025h1/README.md)
- [Proposed goals](./2025h1/goals.md)
- [Goals not accepted](./2025h1/not_accepted.md)

Receiving PRs

to be written

Preparing the RFC

to be written

Opening the RFC

to be written

Once the RFC is accepted

to be written

Running the goal season

Finalizing the goal season

Call for PRs: YYYYHN goals

NOTE: This is a sample blog post you can use as a starting point. To begin a new goal season (e.g., 2222H1), do the following:

  • Copy this file to the blog.rust-lang.org repository as a new post.
  • Search and replace YYYYHN with 2222H1 and delete this section.
  • Look for other "TBD" sections; you'll want to replace those eventually.

As of today, we are officially accepting proposals for Rust Project Goals targeting YYYYHN (the (TBD) half of YYYY). If you'd like to participate in the process, or just to follow along, please check out the YYYYHN goal page. It includes listings of the goals currently under consideration, more details about the goals program, and instructions for how to submit a goal.

What is the project goals program and how does it work?

Every six months, the Rust project commits to a set of goals for the upcoming half-year. The process involves:

  • the owner of the goal program (currently me) posts a call for proposals (this post);
  • would-be goal points of contact open PRs against the rust-project-goals repository;
  • the goal-program owner gathers feedback on these goals and chooses some of them to be included in the RFC proposing the final slate of goals.

To get an idea what the final slate of goals looks like, check out the RFC from the previous round of goals, RFC (TBD). The RFC describes a set of goals, designates a few of them as flagship goals, and summarizes the work expected from each team. The RFC is approved by (at least) the leads of each team, effectively committing their team to provide the support that is described.

Should I submit a goal?

Opening a goal is an indication that you (or your company, etc.) are willing to put up the resources needed to make it happen, at least if you get the indicated support from the teams. These resources are typically development time and effort, but they could be funding (in that case, we'd want to identify someone to take up the goal). If you pass that bar, then by all means, yes, open a goal.

Note though that controversial goals are likely to not be accepted. If you have an idea that you think people won't like, then you should find ways to lower the ask of the teams. For example, maybe the goal should be to perform experiments to help make the case for the idea, rather than jumping straight to implementation.

Can I still do X, even if I don't submit a goal for it?

Yes. Goals are not mandatory for work to proceed. They are a tracking mechanism to help stay on course.

Conclusion

The Rust Project Goals program is driving progress, increasing transparency, and energizing the community. As we enter the second round, we invite you to contribute your ideas and help shape Rust's future. Whether you're proposing a goal or following along, your engagement is vital to Rust's continued growth and success. Join us in making Rust even better in 2025!

Sample: Text for the main README

NOTE: This is a sample section you can use as a starting point.

  • Copy and paste the markdown below into the main README.
  • Replace YYYYHN with 2222h1 or whatever.

(Note that the links on this page are relative to the main README, not its current location.)

Next goal period (YYYYHN)

The next goal period will be YYYYHN, running from MM 1 to MM 30. We are currently in the process of assembling goals. Click here to see the current list. If you'd like to propose a goal, instructions can be found here.

Sample RFC

NOTE: This is a sample RFC you can use as a starting point. To begin a new goal season (e.g., 2222H1), do the following:

  • Copy this file to src/2222H1/README.md.
  • Search and replace YYYYHN with 2222H1 and delete this section.
  • Look for other "TBD" sections; you'll want to replace those eventually.
  • Customize anything else that seems relevant.

Summary

Status: Accepting goal proposals. We are in the process of assembling the goal slate.

This is a draft for the eventual RFC proposing the YYYYHN goals.

Motivation

The YYYYHN goal slate consists of 0 project goals, of which we have selected (TBD) as flagship goals. Flagship goals represent the goals expected to have the broadest overall impact.

How the goal process works

Project goals are proposed bottom-up by a point of contact, somebody who is willing to commit resources (time, money, leadership) to seeing the work get done. The point of contact identifies the problem they want to address and sketches the solution of how they want to do so. They also identify the support they will need from the Rust teams (typically things like review bandwidth or feedback on RFCs). Teams then read the goals and provide feedback. If the goal is approved, teams are committing to support the point of contact in their work.

Project goals can vary in scope from an internal refactoring that affects only one team to a larger cross-cutting initiative. No matter its scope, accepting a goal should never be interpreted as a promise that the team will make any future decision (e.g., accepting an RFC that has yet to be written). Rather, it is a promise that the team is aligned on the contents of the goal thus far (including the design axioms and other notes) and will prioritize giving feedback and support as needed.

Of the proposed goals, a small subset are selected by the roadmap owner as flagship goals. Flagship goals are chosen for their high impact (many Rust users will be impacted) and their shovel-ready nature (the org is well-aligned around a concrete plan). Flagship goals are the ones that will feature most prominently in our public messaging and which should be prioritized by Rust teams where needed.

Rust’s mission

Our goals are selected to further Rust's mission of empowering everyone to build reliable and efficient software. Rust targets programs that prioritize

  • reliability and robustness;
  • performance, memory usage, and resource consumption; and
  • long-term maintenance and extensibility.

We consider "any two out of the three" as the right heuristic for projects where Rust is a strong contender or possibly the best option.

Axioms for selecting goals

We believe that...

  • Rust must deliver on its promise of peak performance and high reliability. Rust’s maximum advantage is in applications that require peak performance or low-level systems capabilities. We must continue to innovate and support those areas above all.
  • Rust's goals require high productivity and ergonomics. Being attentive to ergonomics broadens Rust's impact by making it more appealing for projects that value reliability and maintenance but which don't have strict performance requirements.
  • Slow and steady wins the race. For this first round of goals, we want a small set that can be completed without undue stress. As the Rust open source org continues to grow, the set of goals can grow in size.

Guide-level explanation

Flagship goals

The flagship goals proposed for this roadmap are as follows:

(TBD)

Why these particular flagship goals?

(TBD--typically one paragraph per goal)

Project goals

The full slate of project goals is as follows. These goals all have identified points of contact who will drive the work forward as well as a viable work plan. The goals include asks from the listed Rust teams, which are cataloged in the reference-level explanation section below.

Invited goals. Some of the goals below are "invited goals", meaning that for the goal to happen we need someone to step up and serve as a point of contact. To find the invited goals, look for the Help wanted badge in the table below. Invited goals have reserved capacity for teams and a mentor, so if you are someone looking to help Rust progress, they are a great way to get involved.

| Goal | Point of contact | Progress |
| --- | --- | --- |

Reference-level explanation

The following table highlights the asks from each affected team. The "owner" in the column is the person expected to do the design/implementation work that the team will be approving.

Definitions

Definitions for terms used above:

  • Author RFC and Implementation means actually writing the code, document, whatever.
  • Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
  • RFC decisions means reviewing an RFC and deciding whether to accept.
  • Org decisions means reaching a decision on an organizational or policy matter.
  • Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
  • Stabilizations means reviewing a stabilization report and deciding whether to stabilize.
  • Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
  • Other kinds of decisions:
    • Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
    • Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
    • Library API Change Proposal (ACP) describes a change to the standard library.

Goals

NOTE: This is a sample starting point for the goals.md page in a milestone directory.

  • Search and replace YYYYHN with 2222H1 and delete this section.

This page lists the 0 project goals proposed for YYYYHN.

Inclusion in this list does not mean a goal has been accepted. The owner of the goal process makes the final decision on which goals to include and prepares an RFC asking the teams for approval.

Flagship goals

Flagship goals represent the goals expected to have the broadest overall impact.

| Goal | Point of contact | Progress |
| --- | --- | --- |

Other goals

These are the other proposed goals.

Invited goals. Some of the goals below are "invited goals", meaning that for the goal to happen we need someone to step up and serve as a point of contact. To find the invited goals, look for the Help wanted badge in the table below. Invited goals have reserved capacity for teams and a mentor, so if you are someone looking to help Rust progress, they are a great way to get involved.

| Goal | Point of contact | Progress |
| --- | --- | --- |

Overall setup

The rust-project-goals repository is set up as follows:

  • an mdbook for generating the main content
  • a Rust binary in src that serves as
    • a runnable utility for doing various admin functions on the CLI (e.g., generating a draft RFC)
    • an mdbook preprocessor for generating content like the list of goals
    • a utility invoked in CI that can query github and produce a JSON with the goal status
  • pages on the Rust website that fetch JSON data from the rust-project-goals repo to generate content
    • the JSON data is generated by the Rust binary
  • tracking issues for each active project goal:
    • tagged with C-tracking-issue
    • and added to the appropriate milestone
  • triagebot modifications to link Zulip and the repo tracking issues
    • the command @triagebot ping-goals N D will ping all active goal points of contact to ask them to add updates
      • N is a threshold number of days; if people have posted an update within the last N days, we won't bother them. Usually I do this as the current date + 7, so that people who posted during the current month or the last week of the previous month don't get any pings.
      • D is a word like Sep-22 that indicates the day
    • the bot monitors for comments on github and forwards them to Zulip
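
For example, if today were Sep 22, the "current date + 7" convention described above would give:

```
@triagebot ping-goals 29 Sep-22
```

This pings every active goal point of contact who has not posted an update in the last 29 days.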

Mdbook Plugin

The mdbook is controlled by the mdbook-goals plugin in this repo. This plugin makes various edits to the source:

  • Linking usernames like @foo to their GitHub page and replacing them with their display name.
  • Linking GH references like rust-lang/rust#123.
  • Collating goals, creating tables, etc.

The plugin can also be used from the command line.

Expected book structure

The plugin is designed for the book to have a directory per phase of the goal program, e.g., src/2024h2, src/2025h1, etc. Within this directory there should be:

  • A README.md file that will contain the draft slate RFC.
  • One file per goal. Each goal file must follow the TEMPLATE structure and in particular must have
  • One file per "phase" of the program, e.g., proposed.md etc. (These are not mandatory.)

Plugin replacement text

The plugin will replace the following placeholder texts. Each placeholder is enclosed within an HTML comment <!-- -->.

Goal count

The placeholder <!-- #GOALS --> will be replaced with the total number of goals under consideration (this count excludes goals with the status Not accepted).

Goal listing

The placeholder <!-- GOALS '$Status' --> will insert a goal table listing goals of the given status $Status, e.g., <!-- GOALS 'Flagship' -->. You can also list multiple status items, e.g., <!-- GOALS 'Accepted,Proposed' -->.
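
Together, the two placeholders might be used in a milestone page like this (the surrounding prose and headings are illustrative):

```
This page lists the <!-- #GOALS --> project goals proposed for 2025H1.

Flagship goals

<!-- GOALS 'Flagship' -->

Other goals

<!-- GOALS 'Accepted,Proposed' -->
```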

Commands

There is a CLI for manipulating and checking project goals. You can run it with cargo rpg and it has a number of useful commands, described in this section. Use cargo rpg --help to get a summary.

Creating tracking issues

Usage:

> cargo rpg issues

The issues command is used to create tracking issues at the start of a project goal session. When you first run it, it will simply tell you what actions it plans to take.

To actually commit and create the issues, supply the --commit flag:

> cargo rpg issues --commit

This will also edit the goal documents to include a link to each created tracking issue. You should commit those edits.

You can later re-run the command and it will not repeat actions it has already taken.
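
Putting the workflow together, a typical session might look like this (a sketch: the git steps assume you are working in a checkout of the rust-project-goals repository, and the milestone name is an example):

```shell
# Dry run: prints the actions the tool plans to take, without creating anything.
cargo rpg issues

# Create the tracking issues and edit the goal documents to link to them.
cargo rpg issues --commit

# Commit the edits that the tool made to the goal documents.
git add src/2025h1
git commit -m "link tracking issues for 2025h1 goals"
```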

Summarize updates for the monthly blog post

Usage:

> cargo rpg updates --help

The updates command generates the starting point for a monthly blog post. The output is based on the handlebars templates found in the templates directory. The command you probably want most often is something like this:

> cargo rpg updates YYYYhN --vscode

which will open the blogpost in a tab in VSCode. This makes it easy to copy-and-paste over to the main Rust blog.

Blog post starting point

The blog post starting point is based on the handlebars template in templates/updates.hbs.

Configuring the LLM

The updates command makes use of an LLM hosted on AWS Bedrock to summarize people's comments. You will need to run aws configure and log in with some default credentials. You can skip the LLM by providing the --quick command-line option, but then you have to generate your own text, which can be pretty tedious.
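
A sketch of the two paths, using only the commands and flags described above (the milestone name is an example):

```shell
# One-time setup: store AWS credentials so the tool can reach Bedrock.
aws configure

# Generate the draft with LLM-based summaries and open it in VSCode...
cargo rpg updates 2025h1 --vscode

# ...or skip the LLM entirely and write the summaries yourself.
cargo rpg updates 2025h1 --quick
```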