Project goals
This repo tracks the effort to set and track goals for the Rust project.
Current goal period (2024h2)
There are 26 total goals established for the current goal period, which runs until the end of 2024. Of these, 3 were declared as flagship goals:
- Continue simplifying Rust by releasing the Rust 2024 edition.
- Improve the experience of building network systems in Rust by bringing Async Rust experience closer to parity with sync Rust.
- Enable safe abstractions for low-level systems by resolving the biggest blockers to Linux building on stable Rust.
Next goal period (2025h1)
The next goal period will be 2025H1, running from Jan 1 to Jun 30. We are currently in the process of assembling goals. Click here to see the current list. If you'd like to propose a goal, instructions can be found here.
About the process
Want to learn more? Check out some of the following:
- RFC #3614, which describes the overall goals and plan
- The currently proposed goals for 2024H2
- Instructions for proposing a goal of your own
- What it means to be a goal owner
Rust project goals for 2024H2
RFC #3672 has been accepted, establishing 26 total Rust project goals for 2024H2.
Want to learn more?
- Read about the flagship goals:
- Continue simplifying Rust by releasing the Rust 2024 edition.
- Improve the experience of building network systems in Rust by bringing Async Rust experience closer to parity with sync Rust.
- Enable safe abstractions for low-level systems by resolving the biggest blockers to Linux building on stable Rust.
- See the full list of goals.
- Read RFC #3672, which describes the rationale and process used to select the goals.
Goals
This page lists the 26 project goals accepted for 2024h2.
Flagship goals
Flagship goals represent the goals expected to have the broadest overall impact.
Other goals
These are the other accepted goals.
Orphaned goals. Some goals here are marked with a badge in place of an owner. These goals are called "orphaned goals". Teams have reserved some capacity to pursue these goals, but until an appropriate owner is found they are only considered provisionally accepted. If you are interested in serving as the owner for one of these orphaned goals, reach out to the mentor listed in the goal to discuss.
Bring the Async Rust experience closer to parity with sync Rust
Metadata | |
---|---|
Short title | Async |
Owner(s) | Tyler Mandry, Niko Matsakis |
Teams | lang, libs, libs-api |
Status | Flagship |
Tracking issue | rust-lang/rust-project-goals#105 |
Summary
Over the next six months, we will deliver several critical async Rust building block features:
- resolve the "Send bound" problem, which blocks the widespread usage of async functions in traits;
- reorganize the async WG, so that we can be better aligned and move more swiftly from here out;
- stabilize async closures, allowing for a much wider variety of async related APIs (async closures are implemented on nightly).
Motivation
This goal represents the next step on a multi-year program aiming to raise the experience of authoring "async Rust" to the same level of quality as "sync Rust". Async Rust is a crucial growth area, with 52% of the respondents in the 2023 Rust survey indicating that they use Rust to build server-side or backend applications.
The status quo
Async Rust performs great, but can be hard to use
Async Rust is the most common Rust application area according to our 2023 Rust survey. Rust is a great fit for networked systems, especially in the extremes:
- Rust scales up. Async Rust reduces cost for large dataplanes because a single server can serve high load without significantly increasing tail latency.
- Rust scales down. Async Rust can be run without requiring a garbage collector or even an operating system, making it a great fit for embedded systems.
- Rust is reliable. Networked services run 24/7, so Rust's "if it compiles, it works" mantra means fewer unexpected failures and, in turn, fewer pages in the middle of the night.
Despite async Rust's popularity, using async I/O makes Rust significantly harder to use. As one Rust user memorably put it, "Async Rust is Rust on hard mode." Several years back the async working group collected a number of "status quo" stories as part of authoring an async vision doc. These stories reveal a number of characteristic challenges:
- Common language features do not support async, meaning that users cannot write Rust code in the way they are accustomed to (for example, async closures and async drop are not yet supported).
- Common async idioms have "sharp edges" that lead to unexpected failures, forcing users to manage cancellation safety, subtle deadlocks, and other failure modes for buffered streams. See also tmandry's blog post on Making async Rust reliable.
- Using async today requires users to select a runtime which provides many of the core primitives. Selecting a runtime as a user can be stressful, as the decision, once made, is hard to reverse. Moreover, in an attempt to avoid "picking favorites", the project has not endorsed a particular runtime, making it harder to write new user documentation. Libraries meanwhile cannot easily be made interoperable across runtimes and so are often written against the API of a particular runtime; even when libraries can be retargeted, it is difficult to do things like run their test suites to test compatibility. Mixing and matching libraries can cause surprising failures.
First focus: language parity, interop traits
Based on the above analysis, the Rust org has been focused on driving async/sync language parity, especially in those areas that block the development of a rich ecosystem. The biggest progress took place in Dec 2023, when async fn in traits and return position impl trait in trait were stabilized. Other work includes documenting async usability challenges in the original async vision doc, stabilizing helpers like `std::future::poll_fn`, and polishing and improving async error messages.
The need for an aligned, high judgment group of async experts
Progress on async-related issues within the Rust org has been slowed due to lack of coherence around a vision and clear steps. General purpose teams such as lang and libs-api have a hard time determining how to respond to, e.g., particular async stabilization requests, as they lack a means to judge whether any given decision is really the right step forward. Theoretically, the async working group could play this role, but it has not really been structured with this purpose in mind. For example, the criteria for membership are loose and the group would benefit from more representation from async ecosystem projects. This is an example of a larger piece of Rust "organizational debt", where the term "working group" has been used for many different purposes over the years.
The next six months
In the second half of 2024 we are planning on the following work items. The following three items are what we consider to be the highest priority, as they do the most to lay a foundation for future progress (and they themselves are listed in priority order):
- resolve the "Send bound" problem, which blocks the widespread usage of async functions in traits;
- reorganize the async WG, so that we can be better aligned and move more swiftly from here out;
- stabilize async closures, allowing for a much wider variety of async related APIs (async closures are implemented on nightly).
We have also identified three "stretch goals" that we believe could be completed:
- stabilize trait for async iteration
- support dyn for async traits via a proc macro
- complete async drop experiments (currently unfunded)
Resolve the "send bound" problem
Although async functions in traits were stabilized, there is currently no way to write a generic function that requires impls where the returned futures are `Send`. This blocks the use of async functions in traits in some core ecosystem crates, such as tower, which want to work across all kinds of async executors. This problem is called the "send bound" problem and there has been extensive discussion of the various ways to solve it. RFC #3654 has been opened proposing one solution and describing why that path is preferred. Our goal for the year is to adopt some solution on stable.

A solution to the send bound problem should include a migration path for users of the `trait_variant` crate, if possible. For RFC #3654 (RTN), this would require implementable trait aliases (see RFC #3437).
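To make the problem concrete, here is a minimal sketch (not taken from RFC #3654 itself); the `HealthCheck` trait and `spawn_check` function are hypothetical names used only for illustration:

```rust
trait HealthCheck {
    // Stable since Rust 1.75: async functions in traits.
    async fn check(&mut self) -> bool;
}

// This generic function wants to run the check on another thread (e.g., via
// `tokio::spawn`), which requires the future returned by `check` to be `Send`.
// On stable Rust today there is no way to write that bound, because the
// returned future has no nameable type.
fn spawn_check<H>(mut health: H)
where
    H: HealthCheck + Send + 'static,
    // With return type notation as proposed in RFC #3654, the missing bound
    // might be written roughly as:
    //     H::check(..): Send
{
    // tokio::spawn(async move { health.check().await });
    let _ = &mut health;
}
```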
Reorganize the Async WG
We plan to reorganize the async working group into a structure that will better serve the project's needs, especially when it comes to aligning around a clear async vision. In so doing, we will help "launch" the async working group out from the launchpad umbrella team and into a more permanent structure.
Despite its limitations, the async working group serves several important functions for async Rust that need to continue:
- It provides a forum for discussion around async-related topics, including the `#async-wg` Zulip stream as well as regular sync meetings. These forums don't necessarily get participation by the full set of voices that we would like, however.
- It owns async-related repositories, such as the sources for the async Rust book (in dire need of improvement), arewewebyet, and the futures-rs crate. Maintenance of these sites has varied though and often been done by a few individuals acting largely independently.
- It advises the more general teams (typically lang and libs-api) on async-related matters. The authoring of the (mildly dated) async vision doc took place under the auspices of the working group, for example. However, the group lacks decision making power and doesn't have a strong incentive to coalesce behind a shared vision, so it remains more a "set of individual voices" that does not provide the general purpose teams with clear guidance.
We plan to propose one or more permanent teams to meet these same set of needs. The expectation is that these will be subteams under the lang and libs top-level teams.
Stabilize async closures
Building ergonomic APIs in async is often blocked by the lack of async closures. Async combinator-like APIs today typically make use of an ordinary Rust closure that returns a future, such as the `filter` API from `StreamExt`:

```rust
fn filter<Fut, F>(self, f: F) -> Filter<Self, Fut, F>
where
    F: FnMut(&Self::Item) -> Fut,
    Fut: Future<Output = bool>,
    Self: Sized,
```
This approach however does not allow the closure to access variables captured by reference from its environment:
```rust
let mut accept_list = vec!["foo", "bar"];
stream
    .filter(|s| async { accept_list.contains(s) })
```
The reason is that data captured from the environment is stored in `self`. But the signature for sync closures does not permit the return value (`Self::Output`) to borrow from `self`:
```rust
trait FnMut<A>: FnOnce<A> {
    fn call_mut(&mut self, args: A) -> Self::Output;
}
```
To support natural async closures, a trait is needed where `call_mut` is an `async fn`, which would allow the returned future to borrow from `self` and hence modify the environment (e.g., `accept_list`, in our example above). Or, desugared, something that is equivalent to:
```rust
trait AsyncFnMut<A>: AsyncFnOnce<A> {
    fn call_mut<'s>(
        &'s mut self,
        args: A,
    ) -> impl Future<Output = Self::Output> + use<'s, A>;
    //                                        ^^^^^^^^^^ note that this captures `'s`
    //
    // (This precise capturing syntax is unstable and covered by
    // rust-lang/rust#123432.)
}
```
The goal for this year is to be able to

- support some "async equivalent" to `Fn`, `FnMut`, and `FnOnce` bounds
  - this should be usable in all the usual places
- support some way to author async closure expressions

These features should be sufficient to support methods like `filter` above.
The details (syntax, precise semantics) will be determined via experimentation and subject to RFC.
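As a rough illustration of the intended shape, here is a sketch of a combinator that takes an async closure. It assumes the async closure support currently implemented on nightly (an `AsyncFnMut`-style bound and `async |…|` expressions); the `retain_async` helper is hypothetical, and the exact names and syntax remain subject to the RFC:

```rust
// Hypothetical combinator taking an async closure.
async fn retain_async<T, F>(items: Vec<T>, mut keep: F) -> Vec<T>
where
    F: AsyncFnMut(&T) -> bool,
{
    let mut out = Vec::new();
    for item in items {
        if keep(&item).await {
            out.push(item);
        }
    }
    out
}

async fn example() {
    let accept_list = vec!["foo", "bar"];
    let mut hits = 0;
    let kept = retain_async(vec!["foo", "baz"], async |s: &&str| {
        // The future returned by each call borrows `accept_list` and `hits`
        // from the closure's environment -- exactly what the
        // `FnMut(..) -> impl Future` pattern above cannot express.
        if accept_list.contains(s) {
            hits += 1;
            true
        } else {
            false
        }
    })
    .await;
    assert_eq!((kept, hits), (vec!["foo"], 1));
}
```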
Stabilize trait for async iteration
There has been extensive discussion about the best form of the trait for async iteration (sometimes called `Stream`, sometimes `AsyncIter`, and now being called `AsyncGen`). We believe the design space has been sufficiently explored that it should be possible to author an RFC laying out the options and proposing a specific plan.
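For illustration, the two broad shapes under discussion look roughly like the following sketch; the trait names are placeholders, not decided designs:

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

// A low-level, poll-based trait, analogous to `Future::poll` (this is the
// shape of today's unstable `AsyncIterator` in the standard library).
trait PollBasedAsyncIterator {
    type Item;
    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>>;
}

// A higher-level trait built on `async fn`, analogous to `Iterator::next`.
trait AsyncFnNextIterator {
    type Item;
    async fn next(&mut self) -> Option<Self::Item>;
}
```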
Release a proc macro for dyn dispatch with `async fn` in traits
Currently we do not support using `dyn` with traits that use `async fn` or `-> impl Trait`. This can be solved without language extensions through the use of a proc macro. This should remove the need for the use of the `async_trait` proc macro in new enough compilers, giving all users the performance benefits of static dispatch without giving up the flexibility of dynamic dispatch.
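To illustrate the limitation, a small sketch (the `Fetch` trait is hypothetical); the comments describe the intended direction rather than a finished design:

```rust
// This trait uses `async fn`, so today it cannot be used as a trait object:
// `Box<dyn Fetch>` is rejected because the returned future's type differs
// per impl and has no fixed size.
trait Fetch {
    async fn fetch(&self, key: u32) -> Vec<u8>;
}

// fn boxed_fetcher() -> Box<dyn Fetch> { /* error[E0038] today */ }
//
// The `async_trait` crate works around this by rewriting *every* method to
// return a boxed future, even for static dispatch. The proc macro proposed
// here would instead only box at the `dyn` boundary, so generic callers keep
// static dispatch while `dyn` users still get dynamic dispatch.
```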
Complete async drop experiments
Authors of async code frequently need to call async functions as part of resource cleanup. Because Rust today only supports synchronous destructors, this cleanup must take place using alternative mechanisms, forcing a divergence between sync Rust (which uses destructors to arrange cleanup) and async Rust. MCP 727 proposed a series of experiments aimed at supporting async drop in the compiler. We would like to continue and complete those experiments. These experiments are aimed at defining how support for async drop will be implemented in the compiler and some possible ways that we could modify the type system to support it (in particular, one key question is how to prevent types whose drop is async from being dropped in sync code).
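A brief sketch of the gap (the `Connection` type and its `close` method are purely illustrative):

```rust
struct Connection;

impl Connection {
    // Today, async cleanup has to be an explicit method that callers must
    // remember to invoke before the value goes out of scope.
    async fn close(self) {
        // flush buffers, send a goodbye frame, etc.
    }
}

impl Drop for Connection {
    fn drop(&mut self) {
        // Destructors are synchronous: we cannot `.await` here, so if
        // `close` was never called, the best we can do is a blocking or
        // best-effort cleanup. Async drop aims to close this gap.
    }
}
```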
The "shiny future" we are working towards
Our eventual goal is to provide Rust users building on async with
- the same core language capabilities as sync Rust (async traits with dyn dispatch, async closures, async drop, etc);
- reliable and standardized abstractions for async control flow (streams of data, error recovery, concurrent execution), free of accidental complexity;
- an easy "getting started" experience that builds on a rich ecosystem;
- good performance by default, peak performance with tuning;
- the ability to easily adopt custom runtimes when needed for particular environments, language interop, or specific business needs.
Design axioms
- Uphold sync Rust's bar for reliability. Sync Rust famously delivers on the general feeling of "if it compiles, it works" -- async Rust should do the same.
- We lay the foundations for a thriving ecosystem. The role of the Rust org is to develop the rudiments that support an interoperable and thriving async crates.io ecosystem.
- When in doubt, zero-cost is our compass. Many of Rust's biggest users are choosing it because they know it can deliver the same performance as (or better than) C. If we adopt abstractions that add overhead, we are compromising that core strength. As we build out our designs, we ensure that they don't introduce an "abstraction tax" for using them.
- From embedded to GUI to the cloud. Async Rust covers a wide variety of use cases and we aim to make designs that can span those differing constraints with ease.
- Consistent, incremental progress. People are building async Rust systems today -- we need to ship incremental improvements while also steering towards the overall outcome we want.
Ownership and team asks
Here is a detailed list of the work to be done and who is expected to do it. This table includes the work to be done by owners and the work to be done by Rust teams (subject to approval by the team in an RFC/FCP). The overall owners of the async effort (and authors of this goal document) are Tyler Mandry and Niko Matsakis. We have identified owners for subitems below; these may change over time.
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Overall program management | Tyler Mandry, Niko Matsakis | |
"Send bound" problem | Niko Matsakis | |
↳ | ||
↳ | Niko Matsakis | |
↳ RFC decision | lang | |
↳ Stabilization decision | lang | |
Async WG reorganization | Niko Matsakis | |
↳ Author proposal | ||
↳ Org decision | libs, lang | |
Async closures | Michael Goulet | |
↳ | ||
↳ Author RFC | ||
↳ RFC decision | lang | |
↳ Design meeting | lang | 2 meetings expected |
↳ Author call for usage | Michael Goulet | |
↳ Stabilization decision | lang | |
Trait for async iteration | Eric Holk | |
↳ Author RFC | ||
↳ RFC decision | libs-api lang | |
↳ Design meeting | lang | 2 meetings expected |
↳ Implementation | ||
Dyn dispatch for AFIT | Santiago Pastorino | |
↳ Implementation | Santiago Pastorino | |
↳ Standard reviews | Tyler Mandry | |
Async drop experiments | Vadim Petrochenkov | |
↳ | ||
↳ | ||
↳ Implementation work | (*) | |
↳ Design meeting | lang | 2 meetings expected |
↳ Standard reviews | compiler |
(*) Implementation work on async drop experiments is currently unfunded. We are trying to figure out next steps.
Support needed from the project
Agreement from lang, libs, and libs-api to prioritize the items marked in the table above.
The expectation is that
- async closures will occupy 2 design meetings from lang during H2
- async iteration will occupy 2 design meetings from lang during H2 and likely 1-2 from libs API
- misc matters will occupy 1 design meeting from lang during H2
for a total of 4-5 meetings from lang and 1-2 from libs API.
Frequently asked questions
Can we really do all of this in 6 months?
This is an ambitious agenda, no doubt. We believe it is possible if the teams are behind us, but things always take longer than you think. We have made sure to document the "priority order" of items for this reason. We intend to focus our attention first and foremost on the high priority items.
Why focus on send bounds + async closures?
These are the two features that together block the authoring of traits for a number of common interop purposes. Send bounds are needed for generic traits like the `Service` trait. Async closures are needed for rich combinator APIs like iterators.
Why not build in dyn dispatch for async fn in traits?
Async fn in traits do not currently support native dynamic dispatch. We have explored a number of designs for making it work but have not completed all of the language design work needed, and are not currently prioritizing that effort. We do hope to support it via a proc macro this year and extend to full language support later, hopefully in 2025.
Why are we moving forward on a trait for async iteration?
There has been extensive discussion about the best design for the "Stream" or "async iter" trait and we judge that the design space is well understood. We would like to unblock generator syntax in 2025 which will require some form of trait.
The majority of the debate about the trait has been on the topic of whether to base the trait on a `poll_next` function, as we do today, or to try and make the trait use `async fn next`, making it more analogous with the `Iterator` trait (and potentially even making it be two versions of a single trait). We will definitely explore forwards compatibility questions as part of this discussion. Niko Matsakis for example still wants to explore maybe-async-like designs, especially for combinator APIs like `map`. However, we also refer to the design axiom that "when in doubt, zero-cost is our compass" -- we believe we should be able to stabilize a trait that does the low-level details right, and then design higher level APIs atop that.
Why do you say that we lack a vision, don't we have an async vision doc?
Yes, we do, and the existing document has been very helpful in understanding the space. However, that document was never RFC'd and we have found that it lacks a certain measure of "authority" as a result. We would like to drive stronger alignment on the path forward so that we can focus more on execution. But doing that is blocked on having a more effective async working group structure (hence the goal to reorganize the async WG).
What about "maybe async", effect systems, and keyword generics?
Keyword generics is an ambitious initiative to enable code that is "maybe async". It has generated significant controversy, with some people feeling it is necessary for Rust to scale and others judging it to be overly complex. One of the reasons to reorganize the async WG is to help us come to a consensus around this point (though this topic is broader than async).
Resolve the biggest blockers to Linux building on stable Rust
Metadata | |
---|---|
Short title | Rust-for-Linux |
Owner(s) | Niko Matsakis, Josh Triplett |
Teams | lang, libs-api, compiler |
Status | Flagship |
Tracking issue | rust-lang/rust-project-goals#116 |
Summary
Stabilize unstable features required by the Rust for Linux project, including:

- Stable support for RFL's customized ARC type
- Labeled goto in inline assembler and extended `offset_of!` support
- RFL on Rust CI
- Pointers to statics in constants
Motivation
The experimental support for Rust development in the Linux kernel is a watershed moment for Rust, demonstrating to the world that Rust is indeed capable of targeting all manner of low-level systems applications. And yet today that support rests on a number of unstable features, blocking the effort from ever going beyond experimental status. For 2024H2 we will work to close the largest gaps that block support.
The status quo
The Rust For Linux (RFL) project has been accepted into the Linux kernel in experimental status. The project's goal, as described in the Kernel RFC introducing it, is to add support for authoring kernel components (modules, subsystems) using Rust. Rust would join C, making them the only two languages permitted in the Linux kernel. This is a very exciting milestone for Rust, but it's also a big challenge.
Integrating Rust into the Linux kernel means that Rust must be able to interoperate with the kernel's low-level C primitives for things like locking, linked lists, allocation, and so forth. This interop requires Rust to expose low-level capabilities that don't currently have stable interfaces.
The dependency on unstable features is the biggest blocker to Rust exiting "experimental" status. Because unstable features have no kind of reliability guarantee, this in turn means that RFL can only be built with a specific, pinned version of the Rust compiler. This is a challenge for distributions which wish to be able to build a range of kernel sources with the same compiler, rather than having to select a particular toolchain for a particular kernel version.
Longer term, having Rust in the Linux kernel is an opportunity to expose more C developers to the benefits of using Rust. But that exposure can go both ways. If Rust is constantly causing pain related to toolchain instability, or if Rust isn't able to interact gracefully with the kernel's data structures, kernel developers may have a bad first impression that causes them to write off Rust altogether. We wish to avoid that outcome. And besides, the Linux kernel is exactly the sort of low-level systems application we want Rust to be great for!
For deeper background, please refer to these materials:
- The article on the latest Maintainer Summit: Committing to Rust for kernel code
- The LWN index on articles related to Rust in the kernel
- The latest status update at LPC.
- Linus talking about Rust.
- Rust in the Linux kernel, by Alice Ryhl
- Using Rust in the binder driver, by Alice Ryhl
The next six months
The RFL project has a tracking issue listing the unstable features that they rely upon. After discussion with the RFL team, we identified the following subgoals as the ones most urgent to address in 2024. Closing these issues gets us within striking distance of being able to build the RFL codebase on stable Rust.
- Stable support for RFL's customized ARC type
- Labeled goto in inline assembler and extended `offset_of!` support
- RFL on Rust CI (done now!)
- Pointers to statics in constants
Stable support for RFL's customized ARC type
One of Rust's great features is that it doesn't "bake in" the set of pointer types. The common types users use every day, such as `Box`, `Rc`, and `Arc`, are all (in principle) library defined. But in reality those types enjoy access to some unstable features that let them be used more widely and ergonomically. Since few users wish to define their own smart pointer types, this is rarely an issue and there has been relatively little pressure to stabilize those mechanisms.
The RFL project needs to integrate with the Kernel's existing reference counting types and intrusive linked lists. To achieve these goals they've created their own variant of `Arc` (hereafter denoted as `rfl::Arc`), but this type cannot be used as idiomatically as the `Arc` type found in `libstd` without two features:
- The ability to be used in methods (e.g., `self: rfl::Arc<Self>`), aka "arbitrary self types", specified in RFC #3519.
- The ability to be coerced to dyn types like `rfl::Arc<dyn Trait>` and then support invoking methods on `Trait` through dynamic dispatch.
  - This requires the use of two unstable traits, `CoerceUnsized` and `DynDispatch`, neither of which is close to stabilization.
  - However, RFC #3621 provides for a "shortcut" -- a stable interface using `derive` that expands to those traits, leaving room to evolve the underlying details.
Our goal for 2024 is to close those gaps, most likely by implementing and stabilizing RFC #3519 and RFC #3621.
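To illustrate what these two features buy, here is a sketch that uses the standard library's `Arc` as a stand-in for `rfl::Arc`. It works on stable Rust only because std's `Arc` already implements the coercion machinery internally; RFC #3519 and RFC #3621 are about letting user-defined smart pointers like RFL's do the same (the `Subsystem`/`Driver` names are illustrative):

```rust
use std::sync::Arc;

trait Subsystem {
    // "Arbitrary self types": the receiver is a smart pointer, not `&self`.
    fn register(self: Arc<Self>);
}

struct Driver;

impl Subsystem for Driver {
    fn register(self: Arc<Self>) {
        // The method receives an owned, reference-counted handle that it
        // could stash in a registry which keeps the object alive.
        let _handle = self;
    }
}

fn main() {
    // Calling a method whose receiver is `Arc<Self>`.
    Arc::new(Driver).register();

    // Coercion to a dyn type, then dynamic dispatch through it.
    let subsystem: Arc<dyn Subsystem> = Arc::new(Driver);
    subsystem.register();
}
```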
Labeled goto in inline assembler and extended `offset_of!` support
These are two smaller extensions required by the Rust-for-Linux kernel support. Both have been implemented but more experience and/or development may be needed before stabilization is accepted.
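For example, the basic form of `offset_of!` is stable today, while the extended forms the kernel needs (such as nested fields) are not; the `Device`/`Inner` types below are illustrative:

```rust
use std::mem::offset_of;

#[repr(C)]
#[allow(dead_code)]
struct Inner {
    counter: u64,
}

#[repr(C)]
#[allow(dead_code)]
struct Device {
    id: u32,
    inner: Inner,
}

fn main() {
    // Stable today: offset of a direct field.
    let _id_offset = offset_of!(Device, id);

    // Part of the "extended" support: nested field paths, still unstable at
    // the time of writing. Intrusive data structures use offsets like this to
    // recover a pointer to the containing struct from a pointer to a field.
    //
    //     offset_of!(Device, inner.counter)
}
```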
RFL on Rust CI
Update: Basic work was completed in PR #125209 by Jakub Beránek during the planning process! We are however still including a team ask of T-compiler to make sure we have agreement on the policy regarding breakage due to unstable features.
Rust sometimes integrates external projects of particular importance or interest into its CI. This gives us early notice when changes to the compiler or stdlib impact that project. Some of that breakage is accidental, and CI integration ensures we can fix it without the project ever being impacted. Otherwise the breakage is intentional, and this gives us an early way to notify the project so they can get ahead of it.
Because of the potential to slow velocity and incur extra work, the bar for being integrated into CI is high, but we believe that Rust For Linux meets that bar. Given that RFL would not be the first such project to be integrated into CI, part of pursuing this goal should be establishing clearer policies on when and how we integrate external projects into our CI, as we now have enough examples to generalize somewhat.
Pointers to statics in constants
The RFL project has a need to create vtables in read-only memory (unique address not required). The current implementation relies on the `const_mut_refs` and `const_refs_to_static` features (representative example). Discussion has identified some questions that need to be resolved but no major blockers.
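A minimal sketch of the pattern (illustrative types and names, not the kernel's actual code), relying on the unstable `const_refs_to_static` feature mentioned above:

```rust
#![feature(const_refs_to_static)]

// A piece of static kernel-side state that the vtable needs to point at.
static LOCK_CLASS: u32 = 0;

struct VTable {
    probe: fn() -> i32,
    lock_class: &'static u32,
}

fn probe_impl() -> i32 {
    0
}

// A constant vtable in read-only memory; the `lock_class` field is a pointer
// to a static, which is what currently requires the feature gate.
const DRIVER_VTABLE: VTable = VTable {
    probe: probe_impl,
    lock_class: &LOCK_CLASS,
};

fn main() {
    assert_eq!((DRIVER_VTABLE.probe)(), 0);
    assert_eq!(*DRIVER_VTABLE.lock_class, 0);
}
```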
The "shiny future" we are working towards
The ultimate goal is to enable smooth and ergonomic interop between Rust and the Linux kernel's idiomatic data structures.
In addition to the work listed above, there are a few other obvious items that the Rust For Linux project needs. If we can find owners for these this year, we could even get them done as a "stretch goal":
Stable sanitizer support
Support for building and using sanitizers, in particular KASAN.
Custom builds of core/alloc with specialized configuration options
The RFL project builds the stdlib with a number of configuration options to eliminate undesired aspects of libcore (listed in RFL#2). They need a standard way to build a custom version of core as well as agreement on the options that the kernel will continue using.
Code-generation features and compiler options
The RFL project requires various code-generation options. Some of these are related to custom features of the kernel, such as X18 support (rust-lang/compiler-team#748) but others are codegen options like sanitizers and the like. Some subset of the options listed on RFL#2 will need to be stabilized to support being built with all required configurations, but working out the precise set will require more effort.
Ergonomic improvements
Looking further afield, possible future work includes more ergonomic versions of the special patterns for safe pinned initialization or a solution to custom field projection for pinned types or other smart pointers.
Design axioms
- First, do no harm. If we want to make a good first impression on kernel developers, the minimum we can do is fit comfortably within their existing workflows so that people not using Rust don't have to do extra work to support it. So long as Linux relies on unstable features, users will have to ensure they have the correct version of Rust installed, which means imposing labor on all Kernel developers.
- Don't let perfect be the enemy of good. The primary goal is to offer stable support for the particular use cases that the Linux kernel requires. Wherever possible we aim to stabilize features completely, but if necessary, we can try to stabilize a subset of functionality that meets the kernel developers' needs while leaving other aspects unstable.
Ownership and team asks
Here is a detailed list of the work to be done and who is expected to do it. This table includes the work to be done by owners and the work to be done by Rust teams (subject to approval by the team in an RFC/FCP).
- The badge indicates a requirement where Team support is needed.
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Overall program management | Niko Matsakis, Josh Triplett | |
Arbitrary self types v2 | Adrian Taylor | |
↳ | RFC #3519 | |
↳ | ||
↳ Implementation | ||
↳ Standard reviews | compiler | |
↳ Stabilization decision | lang | |
Derive smart pointer | Alice Ryhl | |
↳ | RFC #3621 | |
↳ RFC decision | lang | |
↳ Implementation | Ding Xiang Fei | |
↳ Author stabilization report | Ding Xiang Fei | |
↳ Stabilization decision | lang | |
asm_goto | Gary Guo | |
↳ | ||
↳ Real-world usage in Linux kernel | Alice Ryhl | |
↳ Extend to cover full RFC | ||
↳ Author stabilization report | ||
↳ Stabilization decision | lang | |
Extended offset_of syntax | Ding Xiang Fei | |
↳ Stabilization report | ||
↳ Stabilization decision | libs-api | |
RFL on Rust CI | Jakub Beránek | |
↳ | #125209 | |
↳ Policy draft | ||
↳ Policy decision | compiler | |
Pointers to static in constants | Niko Matsakis | |
↳ Stabilization report | ||
↳ Stabilization decision | lang |
Support needed from the project
- Lang team:
- Prioritize RFC and any related design questions (e.g., the unresolved questions)
Outputs and milestones
Outputs
Final outputs that will be produced
Milestones
Milestones you will reach along the way
Frequently asked questions
None yet.
Rust 2024 Edition
Metadata | |
---|---|
Owner(s) | TC |
Teams | lang, types |
Status | Flagship |
Tracking issue | rust-lang/rust-project-goals#117 |
Summary
Feature complete status for Rust 2024, with final release to occur in early 2025.
Motivation
RFC #3501 confirmed the desire to ship a Rust edition in 2024, continuing the pattern of shipping a new Rust edition every 3 years. Our goal for 2024 H2 is to stabilize a new edition on nightly by the end of 2024.
The status quo
Editions are a powerful tool for Rust but organizing them continues to be a "fire drill" each time. We have a preliminary set of 2024 features assembled but work needs to be done to marshal and drive (some subset of...) them to completion.
The next six months
The major goal this year is to release the edition on nightly. Top priority items are as follows:
Item | Tracking | RFC | More to do? |
---|---|---|---|
Reserve gen keyword | https://github.com/rust-lang/rust/issues/123904 | https://github.com/rust-lang/rust/pull/116447 | No. |
Lifetime Capture Rules 2024 | https://github.com/rust-lang/rust/issues/117587 | https://github.com/rust-lang/rfcs/pull/3498 | Yes. |
Precise capturing (dependency) | https://github.com/rust-lang/rust/issues/123432 | https://github.com/rust-lang/rfcs/pull/3617 | Yes. |
Change fallback to ! | https://github.com/rust-lang/rust/issues/123748 | N/A | Yes. |
The full list of tracked items can be found using the `A-edition-2024` label.
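As a concrete taste of two of the items above (the capture rules and precise capturing), the sketch below uses the `use<..>` syntax from RFC 3617; at the time of writing the rules are still being finalized, so treat the details as illustrative:

```rust
// In Rust 2024, `impl Trait` in return position captures all in-scope
// lifetimes by default (RFC 3498). The `use<..>` capture list (RFC 3617)
// lets a function opt out explicitly.
fn ids(buf: &[u32]) -> impl Iterator<Item = u32> + use<> {
    // `use<>` declares that the returned opaque type captures no lifetimes,
    // so the iterator may outlive `buf`; to make that true we copy the data.
    buf.to_vec().into_iter()
}

fn main() {
    let iter = {
        let buf = vec![1, 2, 3];
        ids(&buf) // the returned iterator does not borrow `buf`
    };
    assert_eq!(iter.collect::<Vec<_>>(), vec![1, 2, 3]);
}
```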
The "shiny future" we are working towards
The Edition will be better integrated into our release train. Nightly users will be able to "preview" the next edition just like they would preview any other unstable feature. New features that require new syntax or edition-related changes will land throughout the edition period. Organizing the new edition will be routine rather than the "fire drill" it has been in the past.
Design axioms
The "Edition Axioms" were laid out in RFC #3085:
- Editions do not split the ecosystem. The most important rule for editions is that crates in one edition can interoperate seamlessly with crates compiled in other editions.
- Edition migration is easy and largely automated. Whenever we release a new edition, we also release tooling to automate the migration. The tooling is not necessarily perfect: it may not cover all corner cases, and manual changes may still be required.
- Users control when they adopt the new edition. We recognize that many users, particularly production users, will need to schedule time to manage an Edition upgrade as part of their overall development cycle.
- Rust should feel like “one language”. We generally prefer uniform behavior across all editions of Rust, so long as it can be achieved without compromising other design goals.
- Editions are meant to be adopted. We don’t force the edition on our users, but we do feel free to encourage adoption of the edition through other means.
Ownership and team asks
Owner: TC
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
RFC decision | leadership-council | (RFC #3501) |
Stabilization decision | lang types | |
Top-level Rust blog post |
Outputs and milestones
- Owner: TC
Outputs
- Edition release complete with
- announcement blog post
- edition migration guide
Milestones
Date | Version | Edition stage |
---|---|---|
2024-10-11 | Branch v1.83 | Go / no go on all items |
2024-10-17 | Release v1.82 | Rust 2024 nightly beta |
2025-01-03 | Branch v1.85 | Cut Rust 2024 to beta |
2025-02-20 | Release v1.85 | Release Rust 2024 |
Frequently asked questions
None yet.
"Stabilizable" prototype for expanded const generics
Metadata | |
---|---|
Owner(s) | Boxy |
Teams | types |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#100 |
Summary
Experiment with a new `min_generic_const_args` implementation to address challenges found with the existing approach
Motivation
`min_const_generics` was stabilized with the restriction that const generic arguments may not use generic parameters other than a bare const parameter; e.g. `Foo<N>` is legal but `Foo<{ T::ASSOC }>` is not. This restriction is lifted under `feature(generic_const_exprs)`; however, its design is fundamentally flawed and introduces significant complexity to the compiler. A ground-up rewrite of the feature with a significantly limited scope (e.g. `min_generic_const_args`) would give a viable path to stabilization and result in large cleanups to the compiler.
The status quo
A large number of Rust users run into the `min_const_generics` limitation that it is not legal to use generic parameters with const generics. It is generally a bad user experience to hit a wall where a feature is unfinished, and this limitation also prevents patterns that are highly desirable. We have always intended to lift this restriction since stabilizing `min_const_generics`, but we did not know how.
It is possible to use generic parameters with const generics by using `feature(generic_const_exprs)`. Unfortunately this feature has a number of fundamental issues that are hard to solve and as a result is very broken. Its brokenness results in two main issues:

- When users hit a wall with `min_const_generics`, they cannot reach for the `generic_const_exprs` feature because it is either broken or has no path to stabilization.
- In the compiler, to work around the fundamental issues with `generic_const_exprs`, we have a number of hacks which negatively affect the quality of the codebase and the general experience of contributing to the type system.
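For concreteness, a small sketch of the restriction (the `Packet`/`Buffer` names are illustrative):

```rust
// Allowed under `min_const_generics` (stable today): a bare const parameter.
struct ArrayPair<T, const N: usize> {
    left: [T; N],
    right: [T; N],
}

trait Packet {
    const SIZE: usize;
}

// Not allowed on stable: using an associated const of a generic parameter as
// a const argument. Making code like this work (without the problems of
// `generic_const_exprs`) is what `min_generic_const_args` is about.
//
// struct Buffer<P: Packet> {
//     bytes: [u8; P::SIZE],
// }
```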
The next six months
We have a design for the `min_generic_const_args` approach in mind, but we want to validate it through implementation, as const generics has a history of unforeseen issues showing up during implementation. Therefore we will pursue a prototype implementation in 2024.

As a stretch goal, we will attempt to review the design with the lang team in the form of a design meeting or RFC. Doing so will likely also involve authoring a design retrospective for `generic_const_exprs` in order to communicate why that design did not work out and why the constraints imposed by `min_generic_const_args` make sense.
The "shiny future" we are working towards
The larger goal here is to lift most of the restrictions that const generics currently have:

- Arbitrary types can be used in const generics instead of just integers, floats, bool and char.
  - implemented under `feature(adt_const_params)` and is relatively close to stabilization
- Generic parameters are allowed to be used in const generic arguments (e.g. `Foo<{ <T as Trait>::ASSOC_CONST }>`).
- Users can specify `_` as the argument to a const generic, allowing inferring the value just like with types.
  - implemented under `feature(generic_arg_infer)` and is relatively close to stabilization
- Associated const items can introduce generic parameters to bring feature parity with type aliases
  - implemented under `feature(generic_const_items)`, needs a bit of work to finish it. Becomes significantly more important after implementing `min_generic_const_args`
- Introduce associated const equality bounds, e.g. `T: Trait<ASSOC = N>`, to bring feature parity with associated types
  - implemented under `feature(associated_const_equality)`, blocked on allowing generic parameters in const generic arguments
Allowing generic parameters to be used in const generic arguments is the only part of const generics that requires significant amounts of work while also having significant benefit. Everything else is already relatively close to the point of stabilization. I chose to specify this goal to be for implementing `min_generic_const_args` over "stabilize the easy stuff" as I would like to know whether the implementation of `min_generic_const_args` will surface constraints on the other features that may not be possible to easily fix in a backwards compatible manner. Regardless, I expect these features will still progress while `min_generic_const_args` is being implemented.
Design axioms
- Do not block future extensions to const generics
- It should not feel worse to write type system logic with const generics compared to type generics
- Avoid post-monomorphization errors
- The "minimal" subset should not feel arbitrary
Ownership and team asks
Owner: Boxy, project-const-generics lead, T-types member
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Discussion and moral support | lang types | |
Implementation and mentoring | Boxy | |
Implementation | Noah Lev | |
Reviewer | Michael Goulet |
Outputs and milestones
Outputs
- A sound, fully implemented `feature(min_generic_const_args)` available on nightly
- All issues with `generic_const_exprs`'s design have been comprehensively documented (stretch goal)
- RFC for `min_generic_const_args`'s design (stretch goal)
Milestones
- Prerequisite refactorings for `min_generic_const_args` have taken place
- Initial implementation of `min_generic_const_args` lands and is usable on nightly
- All known issues are resolved with `min_generic_const_args`
- Document detailing `generic_const_exprs` issues
- RFC is written and filed for `min_generic_const_args`
Frequently asked questions
Do you expect `min_generic_const_args` to be stabilized by the end?

No. The feature should be fully implemented such that it does not need any more work to make it ready for stabilization; however, I do not intend to actually set the goal of stabilizing it, as it may wind up blocked on the new trait solver being stable first.
Assemble project goal slate
Metadata | |
---|---|
Owner(s) | Niko Matsakis |
Teams | leadership-council |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#102 |
Extracted from RFC 3614
Summary
Run an experimental goal program during the second half of 2024
Motivation
This RFC proposes to run an experimental goal program during the second half of 2024 with Niko Matsakis as owner/organizer. This program is a first step towards an ongoing Rust roadmap. The proposed outcomes for 2024 are (1) select an initial slate of goals using an experimental process; (2) track progress over the year; (3) draw on the lessons from that to prepare a second slate of goals for 2025 H1. This second slate is expected to include a goal for authoring an RFC proposing a permanent process.
The status quo
The Rust project last published an annual roadmap in 2021. Even before that, maintaining and running the roadmap process had proved logistically challenging. And yet there are a number of challenges that the project faces for which having an established roadmap, along with a clarified ownership for particular tasks, would be useful:
- Focusing effort and avoiding burnout:
- One common contributor to burnout is a sense of lack of agency. People have things they would like to get done, but they feel stymied by debate with no clear resolution; feel it is unclear who is empowered to "make the call"; and feel unclear whether their work is a priority.
- Having a defined set of goals, each with clear ownership, will address that uncertainty.
- Helping direct incoming contribution:
- Many would-be contributors are interested in helping, but don't know what help is wanted/needed. Many others may wish to know how to join in on a particular project.
- Identifying the goals that are being worked on, along with owners for them, will help both groups get clarity.
- Helping the Foundation and Project to communicate
- One challenge for the Rust Foundation has been the lack of clarity around project goals. Programs like fellowships, project grants, etc. have struggled to identify what kind of work would be useful in advancing project direction.
- Declaring goals, and especially goals that are desired but lack owners to drive them, can be very helpful here.
- Helping people to get paid for working on Rust
- A challenge for people who are looking to work on Rust as part of their job -- whether that be full-time work, part-time work, or contracting -- is that the employer would like to have some confidence that the work will make progress. Too often, people find that they open RFCs or PRs which do not receive review, or which are misaligned with project priorities. A secondary problem is that there can be a perceived conflict-of-interest because people's job performance will be judged on their ability to finish a task, such as stabilizing a language feature, which can lead them to pressure project teams to make progress.
- Having the project agree before-hand that it is a priority to make progress in an area and in particular to aim for achieving particular goals by particular dates will align the incentives and make it easier for people to make commitments to would-be employers.
For more details, see
- [Blog post on Niko Matsakis's blog about project goals](https://smallcultfollowing.com/babysteps/blog/2023/11/28/project-goals/)
- [Blog post on Niko Matsakis's blog about goal ownership](https://smallcultfollowing.com/babysteps/blog/2024/04/05/ownership-in-rust/)
- nikomatsakis's slides from the Rust leadership summit
- Zulip topic in #council stream. This proposal was also discussed at the leadership council meeting on 2024-04-12, during which meeting the council recommended opening an RFC.
The plan for 2024
The plan is to do a "dry run" of the process in the remainder of 2024. The 2024 process will be driven by Niko Matsakis; one of the outputs will be an RFC that proposes a more permanent process for use going forward. The short version of the plan is that we will
- ASAP (April): Have a ~2 month period for selecting the initial slate of goals. Goals will be sourced from Rust teams and the broader community. They will cover the highest priority work to be completed in the second half of 2024.
- June: Teams will approve the final slate of goals, making them 'official'.
- Remainder of the year: Regular updates on goal progress will be posted
- October: Presuming all goes well, the process for 2025 H1 begins. Note that the planning for 2025 H1 and finishing up of goals from 2024 H2 overlap.
The "shiny future" we are working towards
We wish to get to a point where
- it is clear to onlookers and Rust maintainers alike what the top priorities are for the project and whether progress is being made on those priorities
- for each priority, there is a clear owner who
- feels empowered to make decisions regarding the final design (subject to approval from the relevant teams)
- teams cooperate with one another to prioritize work that is blocking a project goal
- external groups who would like to sponsor or drive priorities within the Rust project know how to bring proposals and get feedback
More concretely, assuming this goal program is successful, we would like to begin another goal sourcing round in late 2024 (likely Oct 15 - Dec 15). We see this as fitting into a running process where the project evaluates its program and re-establishes goals every six months.
Design axioms
- Goals are a contract. Goals are meant to be a contract between the owner and project teams. The owner commits to doing the work. The project commits to supporting that work.
- Goals aren't everything, but they are our priorities. Goals are not meant to cover all the work the project will do. But goals do get prioritized over other work to ensure the project meets its commitments.
- Goals cover a problem, not a solution. As much as possible, the goal should describe the problem to be solved, not the precise solution. This also implies that accepting a goal means the project is committing that the problem is a priority: we are not committing to accept any particular solution.
- Nothing good happens without an owner. Rust endeavors to run an open, participatory process, but ultimately achieving any concrete goal requires someone (or a small set of people) to take ownership of that goal. Owners are entrusted to listen, take broad input, and steer a well-reasoned course in the tradeoffs they make towards implementing the goal. But this power is not unlimited: owners make proposals, but teams are ultimately the ones that decide whether to accept them.
- To everything, there is a season. While there will be room for accepting new goals that come up during the year, we primarily want to pick goals during a fixed time period and use the rest of the year to execute.
Ownership and team asks
Owner: Niko Matsakis
- Niko Matsakis can commit 20% time (avg of 1 day per week) to pursue this task, which he estimates to be sufficient.
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
RFC decision | leadership-council | |
Inside Rust blog post inviting feedback | Posted | |
Top-level Rust blog post announcing result |
Support needed from the project
- Project website resources to do things like
- post blog posts on both Inside Rust and the main Rust blog;
- create a tracking page (e.g., `https://rust-lang.org/goals`);
- create repositories, etc.
- For teams opting to participate in this experimental run:
- they need to meet with the goal committee to review proposed goals, discuss priorities;
- they need to decide in a timely fashion whether they can commit the proposed resources
Outputs and milestones
Outputs
There are three specific outputs from this process:
- A goal slate for the second half (H2) of 2024, which will include
- a set of goals, each with an owner and with approval from their associated teams
- a high-level write-up of why this particular set of goals was chosen and what impact we expect for Rust
- plan is to start with a smallish set of goals, though we don't have a precise number in mind
- Regular reporting on the progress towards these goals over the course of the year
- monthly updates on Inside Rust (likely) generated by scraping tracking issues established for each goal
- larger, hand authored updates on the main Rust blog, one in October and a final retrospective in December
- A goal slate for the first half (H1) of 2025, which will include
- a set of goals, each with an owner and with approval from their associated teams
- a high-level write-up of why this particular set of goals was chosen and what impact we expect for Rust
- (probably) a goal to author an RFC with a finalized process that we can use going forward
Milestones
Key milestones along the way (with the most impactful highlighted in bold):
Date | 2024 H2 Milestone | 2025 H1 Milestones |
---|---|---|
Apr 26 | Kick off the goal collection process | |
May 24 | Publish draft goal slate, take feedback from teams | |
June 14 | Approval process for goal slate begins | |
June 28 | Publish final goal slate | |
July | Publish monthly update on Inside Rust | |
August | Publish monthly update on Inside Rust | |
September | Publish monthly update on Inside Rust | |
Oct 1 | Publish intermediate goal progress update on main Rust blog | Begin next round of goal process, expected to cover first half of 2025 |
November | Publish monthly update on Inside Rust | Nov 15: Approval process for 2025 H1 goal slate begins |
December | Publish retrospective on 2024 H2 | Announce 2025 H1 goal slate |
Process to be followed
The owner plans to author a proposed process, but the rough plan is as follows:
- Create a repository rust-lang/project-goals that will be used to track proposed goals.
- Initial blog post and emails soliciting goal proposals, authored using the same format as this goal.
- Owner will consult proposals along with discussions with Rust team members to assemble a draft set of goals
- Owner will publish a draft set of goals from those that were proposed
- Owner will read this set with relevant teams to get feedback and ensure consensus
- Final slate will be approved by each team involved:
- Likely mechanism is a "check box" from the leads of all teams that represents the team consensus
It is not yet clear how much work it will be to drive this process. If needed, the owner will assemble a "goals committee" to assist in reading over goals, proposing improvements, and generally making progress towards a coherent final slate. This committee is not intended to be a decision making body.
Frequently asked questions
Is there a template for project goals?
This RFC does not specify details, so the following should not be considered normative. However, you can see a preview of what the project goal process would look like at the nikomatsakis/rust-project-goals repository; it contains a goal template. This RFC is in fact a "repackaged" version of 2024's proposed Project Goal #1.
Why is the goal completion date targeting end of year?
In this case, the idea is to run a ~6-month trial, so having goals that are far outside that scope would defeat the purpose. In the future we may want to permit longer goal periods, but in general we want to keep goals narrowly scoped, and 6 months seems ~right. We don't expect 6 months to be enough to complete most projects, but the idea is to mark a milestone that will demonstrate important progress, and then to create a follow-up goal in the next goal season.
How does the goal completion date interact with the Rust 2024 edition?
Certainly I expect some of the goals to be items that will help us to ship a Rust 2024 edition -- and likely a goal for the edition itself (presuming we don't delay it to Rust 2025).
Do we really need a "goal slate" and a "goal season"?
Some early drafts of project goals were framed in a purely bottom-up fashion, with teams approving goals on a rolling basis. That approach though has the downside that the project will always be in planning mode, which will be a continuing time sink and morale drain. Deliberating on goals one at a time also makes it hard to weigh competing goals and decide which should have priority.
There is another downside to the "rolling basis" as well -- it's hard to decide on next steps if you don't know where you are going. Having the concept of a "goal slate" allows us to package up the goals along with longer term framing and vision and make sure that they are a coherent set of items that work well together. Otherwise it can be very easy for one team to be solving half of a problem while other teams neglect the other half.
Do we really need an owner?
Nothing good happens without an owner. The owner plays a few important roles:
- Publicizing and organizing the process, authoring blog posts on update, and the like.
- Working with individual goal proposals to sharpen them, improve the language, identify milestones.
- Meeting with teams to discuss relative priorities.
- Ensuring a coherent slate of goals.
- For example, if the cargo team is working to improve build times in CI, but the compiler team is focused on build times on individual laptops, that should be surfaced. It may be that it's worth doing both, but there may be an opportunity to do more by focusing our efforts on the same target use cases.
Isn't the owner basically a BDFL?
Simply put, no. The owner will review the goals and ensure a quality slate, but it is up to the teams to approve that slate and commit to the goals.
Why the six months horizon?
Per the previous points, it is helpful to have a "season" for goals, but having e.g. an annual process prevents us from reacting to new ideas in a nimble fashion. At the same time, doing quarterly planning, as some companies do, imposes quite regular overhead. Six months seemed like a nice compromise, and it leaves room for a hefty discussion period of about 2 months, which seems like a good fit for an open-source project.
Associated type position impl trait
Metadata | |
---|---|
Owner(s) | Oliver Scherer |
Teams | types, lang |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#103 |
Summary
Stable support for `impl Trait` in the values of associated types (aka "associated type position impl trait" or ATPIT)
Motivation
Rust has been on a long-term quest to support `impl Trait` in more and more locations ("impl Trait everywhere"). The next step in that journey is supporting `impl Trait` in the values of associated types (aka "associated type position impl trait" or ATPIT), which allows impls to provide more complex types as the value of an associated type, particularly anonymous types like closures and futures. It also allows impls to hide the precise type they are using for an associated type, leaving room for future changes. This is the latest step towards the overall vision of supporting `impl Trait` notation in various parts of the Rust language.
The status quo
Impls today must provide a precise and explicit value for each associated type. For some associated types, this can be tedious, but for others it is impossible, as the proper type involves a closure or other aspect which cannot be named. Once a type is specified, impls are also unable to change that type without potentially breaking clients that may have hardcoded it.
Rust's answer to these sorts of problems is `impl Trait` notation, which is used in a number of places within Rust to indicate "some type that implements `Trait`":

- Argument position impl Trait ("APIT"), in inherent/item/trait functions, in which `impl Trait` desugars to an anonymous method type parameter (sometimes called "universal" impl Trait);
- Return type position in inherent/item functions ("RPIT") and in trait ("RPITIT") functions, in which `impl Trait` desugars to a fresh opaque type whose value is inferred by the compiler.
ATPIT follows the second pattern, creating a new opaque type.
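A short sketch of what ATPIT enables, using the unstable feature as implemented on nightly at the time of writing (the `NewsFeed`/`Cached` types are illustrative; details may shift before stabilization):

```rust
#![feature(impl_trait_in_assoc_type)]

trait NewsFeed {
    type Articles: Iterator<Item = String>;
    fn articles(&self) -> Self::Articles;
}

struct Cached {
    titles: Vec<String>,
}

impl NewsFeed for Cached {
    // The impl never names the concrete iterator type -- it involves a
    // closure, so it *cannot* be named -- and it stays free to change the
    // underlying type later without breaking downstream code.
    type Articles = impl Iterator<Item = String>;

    fn articles(&self) -> Self::Articles {
        self.titles.clone().into_iter().map(|t| format!("* {t}"))
    }
}
```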
The next six months
The plan for 2024 is to stabilize Associated Type Position Impl Trait (ATPIT). The design has been finalized from the lang team perspective for some time, but the types team is still working out final details. In particular, the types team is trying to ensure that whatever programs are accepted will also be accepted by the next generation trait solver, which handles opaque types in a new and simplified way.
The "shiny future" we are working towards
This goal is part of the "impl Trait everywhere" effort, which aims to support `impl Trait` in any position where it makes sense. With the completion of this goal we will support `impl Trait` in:
- the type of a function argument ("APIT") in inherent/item/trait functions;
- return types in functions, both inherent/item functions ("RPIT") and trait functions ("RPITIT");
- the value of an associated type in an impl (ATPIT).
Planned extensions for the future include:
- allowing `impl Trait` in type aliases ("TAIT"), like `type I = impl Iterator<Item = u32>`;
- allowing `impl Trait` in let bindings ("LIT"), like `let x: impl Future = foo()`;
- dyn safety for traits that make use of RPITIT and async functions.
Other possible future extensions are:
- allowing `impl Trait` in where-clauses ("WCIT"), like `where T: Foo<impl Bar>`;
- allowing `impl Trait` in struct fields, like `struct Foo { x: impl Display }`.
See also: the explainer here for a "user's guide" style introduction, though it's not been recently updated and may be wrong in the details (especially around TAIT).
Design axioms
None.
Ownership and team asks
Owner: oli-obk owns this goal.
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Implementation | Oliver Scherer | |
FCP decisions | types | |
Stabilization decision | types lang |
Frequently asked questions
None yet.
Begin resolving `cargo-semver-checks` blockers for merging into cargo
Metadata | |
---|---|
Owner(s) | Predrag Gruevski |
Teams | cargo |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#104 |
Summary
Design and implement `cargo-semver-checks` functionality that lies on the critical path for merging the tool into cargo itself.
Motivation
Cargo assumes that all packages adhere to semantic versioning (SemVer). However, SemVer adherence is quite hard in practice: research shows that accidental SemVer violations are relatively common (a lower bound of 3% of releases) and happen to Rustaceans of all skill levels. Given the significant complexity of the Rust SemVer rules, improvements here require better tooling.
`cargo-semver-checks` is a linter for semantic versioning (SemVer) in Rust.
It is broadly adopted by the Rust community, and the cargo team has expressed interest in merging it into cargo itself as part of the existing `cargo publish` workflow.
By default, `cargo publish` would require SemVer compliance, but offer a flag (analogous to the `--allow-dirty` flag for uncommitted changes) to override the SemVer check and proceed with publishing anyway.
The cargo team has identified a set of milestones and blockers that must be resolved before `cargo-semver-checks` can be integrated into the `cargo publish` workflow.
Our goal here is to resolve one of those blockers (cargo manifest linting), and chart a path toward resolving the rest in the future.
The status quo
Work in three major areas is required to resolve the blockers for running `cargo-semver-checks` as part of `cargo publish`:
- Support for cargo manifest linting, and associated CLI changes
- Checking of cross-crate items
- SemVer linting of type information
Fully resolving all three areas is likely a 12-24 month undertaking, and beyond the scope of this goal on its own. Instead, this goal proposes to accomplish intermediate steps that create immediate value for users and derisk the overall endeavor, while requiring nothing more than "moral support" from the cargo team.
Cargo manifest linting
Package manifests have SemVer obligations: for example, removing a feature name that used to exist is a major breaking change.
Currently, `cargo-semver-checks` is not able to catch such breaking changes.
It only draws information from a package's rustdoc JSON, which does not include the necessary manifest details and does not have a convincing path to doing so in the future.
Design and implementation work is required to allow package manifests to be linted for breaking changes as well.
This "rustdoc JSON only" assumption is baked into the `cargo-semver-checks` CLI as well, with options such as `--baseline-rustdoc` and `--current-rustdoc` that allow users to lint with a pre-built rustdoc JSON file instead of having `cargo-semver-checks` build the rustdoc JSON itself.
Once manifest linting is supported, users of such options will need to somehow specify a `Cargo.toml` file (and possibly even a matching `Cargo.lock`) in addition to the rustdoc JSON.
Additional work is required to determine how to evolve the CLI to support manifest linting and future-proof it to the level necessary to be suitable for stabilizing as part of cargo's own CLI.
Checking of cross-crate items
Currently, `cargo-semver-checks` performs linting using only the rustdoc JSON of the target package being checked.
However, the public API of a package may expose items from other crates.
Since rustdoc no longer inlines the definitions of such foreign items into the JSON of the crate whose public API relies on them, `cargo-semver-checks` cannot see or analyze them.
This causes a massive number of false-positives ("breakage reported incorrectly") and false-negatives ("lint for issue X fails to spot an instance of issue X"). In excess of 90% of real-world false-positives are traceable back to a cross-crate item, as measured by our SemVer study!
For example, the following change is not breaking but `cargo-semver-checks` will incorrectly report it as breaking:
```rust
// previous release:
pub fn example() {}

// in the new release, imagine this function moved to `another_crate`:
pub use another_crate::example;
```
This is because the rustdoc JSON that `cargo-semver-checks` sees indeed does not contain a function named `example`.
Currently, `cargo-semver-checks` is incapable of following the cross-crate connection to `another_crate`, generating its rustdoc JSON, and continuing its analysis there.
Resolving this limitation will require changes to how `cargo-semver-checks` generates and handles rustdoc JSON, since the set of required rustdoc JSON files will no longer be fully known ahead of time.
It will also require CLI changes in the same area as the changes required to support manifest linting.
While there may be other challenges on rustc and rustdoc's side before this feature could be fully implemented, we consider those out of scope here since there are parallel efforts to resolve them.
The goal here is for `cargo-semver-checks` to have its own story straight and do the best it can.
SemVer linting of type information
Currently, `cargo-semver-checks` lints cannot represent or examine type information.
For example, the following change is breaking but `cargo-semver-checks` will not detect or report it:
```rust
// previous release:
pub fn example(value: String) {}

// new release:
pub fn example(value: i64) {}
```
Analogous breaking changes to function return values, struct fields, and associated types would also be missed by `cargo-semver-checks` today.
The main difficulty here lies with the expressiveness of the Rust type system. For example, none of the following changes are breaking:
```rust
// previous release:
pub fn example(value: String) {}

// new release:
pub fn example(value: impl Into<String>) {}

// subsequent release:
pub fn example<S: Into<String>>(value: S) {}
```
Similar challenges exist with lifetimes, variance, trait solving, `async fn` versus `fn() -> impl Future`, etc.
While there are some promising preliminary ideas for resolving this challenge, more in-depth design work is necessary to determine the best path forward.
The next 6 months
Three things:
- Implement cargo manifest linting
- Implement CLI future-proofing changes, with manifest linting and cross-crate analysis in mind
- Flesh out a design for supporting cross-crate analysis and type information linting in the future
The "shiny future" we are working towards
Accidentally publishing SemVer violations that break the ecosystem is never fun for anyone involved.
From a user perspective, we want a fearless `cargo update`: one's project should never be broken by updating dependencies without changing major versions.
From a maintainer perspective, we want a fearless `cargo publish`: we want to prevent breakage, not to find out about it when a frustrated user opens a GitHub issue. Just like cargo flags uncommitted changes in the publish flow, it should also quickly and accurately flag breaking changes in non-major releases. Then the maintainer may choose to release a major version instead, or acknowledge and explicitly override the check to proceed with publishing as-is.
To accomplish this, `cargo-semver-checks` needs the ability to express more kinds of lints (including manifest and type-based ones), eliminate false-positives, and stabilize its public interfaces (e.g. the CLI). At that point, we'll have lifted the main merge-blockers and we can consider making it a first-party component of cargo itself.
Ownership and team asks
Owner: Predrag Gruevski, as maintainer of cargo-semver-checks
I (Predrag Gruevski) will be working on this effort. The only other resource request would be occasional discussions and moral support from the cargo team, of which I already have the privilege as maintainer of a popular cargo plugin.
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Implementation of cargo manifest linting + CLI | Predrag Gruevski | |
Initial design for cross-crate checking | Predrag Gruevski | |
Initial design for type-checking lints | Predrag Gruevski | |
Discussion and moral support | cargo |
Frequently asked questions
Why not use semverver instead?
Semverver is a prior attempt at enforcing SemVer compliance, but has been deprecated and is no longer developed or maintained. It relied on compiler-internal APIs, which are much more unstable than rustdoc JSON and required much more maintenance to "keep the lights on." This also meant that semverver required users to install specific nightly versions that were known to be compatible with their version of semverver.
While `cargo-semver-checks` relies on rustdoc JSON, which is also an unstable nightly-only interface, its changes are much less frequent and less severe.
By using the Trustfall query engine, `cargo-semver-checks` can simultaneously support a range of rustdoc JSON formats (and therefore Rust versions) within the same tool.
On the maintenance side, `cargo-semver-checks` lints are written in a declarative manner that is oblivious to the details of the underlying data format, and do not need to be updated when the rustdoc JSON format changes.
This makes maintenance much easier: updating to a new rustdoc JSON format usually requires just a few lines of code, instead of "a few lines of code apiece in each of hundreds of lints."
Const traits
Metadata | |
---|---|
Owner(s) | Deadbeef |
Teams | types, lang |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#106 |
Summary
Experiment with effects-based desugaring for "maybe-const" functionality
Motivation
Rust's compile-time functionality (`const fn`, `const`s, etc.) is greatly limited in terms of expressivity because const functions currently do not have access to generic trait bounds the way runtime functions do. Developers want to write programs that do complex work at compile time, most often to offload work from runtime, and being able to have `const trait`s and `const impl`s will greatly reduce the difficulty of writing such compile-time functions.
The status quo
People write a lot of code that runs at compile time. This includes procedural macros, build scripts (42.8k hits on GitHub for `build.rs`), and const functions/consts (108k hits on GitHub for `const fn`). Not being able to write const functions with generic behavior is often cited as a pain point of Rust's compile-time capabilities. Because of the limited expressiveness of `const fn`, people may decide to move some compile-time logic to a build script, which could increase build times, or simply choose not to do it at compile time (even though it would have helped runtime performance).
There are also language features that require the use of traits, such as iterating with `for` and handling errors with `?`. Because the `Iterator` and `Try` traits currently cannot be used in constant contexts, people are unable to use `?` to handle results or to use iterators, e.g. `for x in 0..5`.
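As a minimal sketch of the limitation, both functions below are rejected by the compiler today precisely because `?` and `for` desugar to trait calls (`Try`, `Iterator`) that cannot yet appear in const contexts; the error comments paraphrase rustc's diagnostics:

```rust
const fn add_one(a: Option<u32>) -> Option<u32> {
    let x = a?; // ERROR today: `?` desugars to `Try` trait calls
    Some(x + 1)
}

const fn sum_below(n: u32) -> u32 {
    let mut total = 0;
    for i in 0..n { // ERROR today: `for` loops require `IntoIterator`/`Iterator`
        total += i;
    }
    total
}
```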
The next six months
In 2024, we plan to:
- Finish experimenting with an effects-based desugaring for ensuring correctness of const code with trait bounds
- Land a relatively stable implementation of const traits
- Make all UI tests pass.
The "shiny future" we are working towards
We're working towards enabling developers to do more things in general within a `const` context. Const traits are a blocker for many future possibilities (see also the const eval feature skill tree), including heap operations in const contexts.
Design axioms
None.
Ownership and team asks
Owner: Deadbeef
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Implementation | Deadbeef and project-const-traits | |
Discussion and moral support | types lang |
Frequently asked questions
None yet.
Ergonomic ref-counting
Metadata | |
---|---|
Owner(s) | Jonathan Kelley |
Teams | lang, libs-api |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#107 |
Summary
Deliver nightly support for some solution that reduces the ergonomic pain of working with ref-counted and cheaply cloneable types.
Motivation
For 2024H2 we propose to improve the ergonomics of working with "cheaply cloneable" data, most commonly reference-counted values (`Rc` or `Arc`). Like many ergonomic issues, these impact all users, but the impact is particularly severe for newer Rust users, who have not yet learned the workarounds, or those doing higher-level development, where the ergonomics of Rust are being compared against garbage-collected languages like Python, TypeScript, or Swift.
The status quo
Many Rust applications—particularly those in higher-level domains—use reference-counted values to pass around core bits of context that are widely used throughout the program. Reference-counted values have the convenient property that they can be cloned in O(1) time and that these clones are indistinguishable from one another (for example, two handles to an `Arc<AtomicInteger>` both refer to the same counter). There are also a number of data structures found in the stdlib and ecosystem, such as the persistent collections found in the `im` crate or the `Sender` type from `std::sync::mpsc` and `tokio::sync::mpsc`, that share this same property.
Rust's current rules mean that passing around values of these types must be done explicitly, with a call to `clone`. Transforming common assignments like `x = y` to `x = y.clone()` can be tedious but is relatively easy. However, this becomes a much bigger burden with closures, especially `move` closures (which are common when spawning threads or async tasks). For example, the following closure will consume the `state` handle, disallowing it from being used in later closures:
```rust
let state = Arc::new(some_state);
tokio::spawn(async move { /* code using `state` */ });
```
This scenario can be quite confusing for new users (see e.g. this 2014 talk at StrangeLoop where an experienced developer describes how confusing they found this to be). Many users settle on a workaround where they first clone the variable into a fresh local with a new name, such as:
```rust
let state = Arc::new(some_state);

let _state = state.clone();
tokio::spawn(async move { /* code using `_state` */ });

let _state = state.clone();
tokio::spawn(async move { /* code using `_state` */ });
```
Others adopt a slightly different pattern leveraging local variable shadowing:
```rust
let state = Arc::new(some_state);
tokio::spawn({
    let state = state.clone();
    async move { /* code using `state` */ }
});
```
Whichever pattern users adopt, explicit clones of reference-counted values lead to significant accidental complexity for many applications. As noted, cloning these values is cheap at runtime and has zero semantic importance, since each clone is as good as the other.
Impact on new users and high-level domains
The impact of this kind of friction can be severe. While experienced users have learned the workarounds and consider this to be a papercut, new users can find this kind of change bewildering and a total blocker. The impact is also particularly severe on projects attempting to use Rust in domains traditionally considered "high-level" (e.g., app/game/web development, data science, scientific computing). Rust's strengths have made it a popular choice for building underlying frameworks and libraries that perform reliably and with high performance. However, thanks in large part to these kinds of smaller papercut issues, it is not a great choice for consuming those same libraries.
Users in higher-level domains are accustomed to the ergonomics of Python or TypeScript, and hence ergonomic friction can make Rust a non-starter. Those users that stick with Rust long enough to learn the workarounds, however, often find significant value in its emphasis on reliability and long-term maintenance (not to mention performance). Small changes like avoiding explicit clones for reference-counted data can both help to make Rust more appealing in these domains and help Rust in other domains where it is already widespread.
The next six months
The goal for the next six months is to
- author and accept an RFC that reduces the burden of working with clone, particularly around closures
- land a prototype nightly implementation.
The "shiny future" we are working towards
This goal is scoped around reducing (or eliminating entirely) the need for explicit clones for reference-counted data. See the FAQ for other potential future work that we are not asking the teams to agree upon now.
Design consensus points
We don't have consensus around a full set of "design axioms" for this design, but we do have alignment around the following basic points:
- Explicit ref-counting is a major ergonomic pain point impacting both high- and low-level, performance oriented code.
- The worst ergonomic pain arises around closures that need to clone their upvars.
- Some code will want the ability to precisely track reference count increments.
- The design should allow user-defined types to opt in to the lightweight cloning behavior (see the sketch below).
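For illustration, a handle type like the following (hypothetical names) is what "cheaply cloneable" means in practice: cloning it only bumps a reference count, yet today every move across a closure or thread boundary still requires the explicit `.clone()` dance shown in the status quo.

```rust
use std::sync::Arc;

// Cloning `AppState` is O(1): it only increments the reference count on
// the shared `Config`, and every clone observes the same data.
#[derive(Clone)]
struct AppState {
    config: Arc<Config>,
}

struct Config {
    base_url: String,
}

fn main() {
    let state = AppState {
        config: Arc::new(Config { base_url: "https://example.com".into() }),
    };

    let for_worker = state.clone(); // explicit clone still required today
    std::thread::spawn(move || {
        println!("worker sees {}", for_worker.config.base_url);
    })
    .join()
    .unwrap();

    println!("main still has {}", state.config.base_url);
}
```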
Ownership and team asks
The work here is proposed by Jonathan Kelley on behalf of Dioxus Labs. We have funding for 1-2 engineers depending on the scope of work. Dioxus Labs is willing to take ownership and commit funding to solve these problems.
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Overall program management | Jonathan Kelley | |
Author RFC | TBD | |
Design meeting | lang | 2 meetings expected |
RFC decision | lang libs-api | |
Nightly implementation | Santiago Pastorino | |
Standard reviews | compiler | |
Blog post on Inside Rust |
- The badge indicates a requirement where Team support is needed.
Support needed from the project
As owners of this goal...
- We are happy to author RFCs and/or work with other experienced RFC authors.
- We are happy to host design meetings, facilitate work streams, logistics, and any other administration required to execute. Some subgoals proposed might be contentious or take longer than this goals period, and we're committed to timelines beyond six months.
- We are happy to author code or fund the work for an experienced Rust contributor to do the implementation. For the language goals, we expect more design work than actual implementation. For cargo-related goals, we expect more engineering than design. We are also happy to back existing efforts, as there is ongoing work in cargo itself to add various types of caching.
- We would be excited to write blog posts about this effort. This goals program is a great avenue for us to get more corporate support and see more Rust adoption for higher-level paradigms. Having a blog post talking about this work would be a significant step in changing the perception of Rust for use in high-level codebases.
The primary project support will be design bandwidth from the lang team.
Frequently asked questions
After this, are we done? Will high-level Rust be great?
Accepting this goal only implies alignment around reducing (or eliminating entirely) the need for explicit clones for reference-counted data. For people attempting to use Rust as part of higher-level frameworks like Dioxus, this is an important step, but one that would hopefully be followed by further ergonomics work. Examples of language changes that would be helpful are described in the (not accepted) goals around a renewed ergonomics initiative and improve compilation speed.
Explore sandboxed build scripts
Metadata | |
---|---|
Owner(s) | Weihang Lo |
Teams | cargo |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#108 |
Summary
Explore different strategies for sandboxing build script executions in Cargo.
Motivation
Cargo users can opt in to running build scripts in a sandboxed environment, limiting their access to OS resources like the file system and network. By providing a sandboxed environment for build script executions, less repetitive code scrutiny is needed. The execution of a build script also becomes more deterministic, helping the caching story for the ecosystem in the long run.
The status quo
Build scripts in Cargo can do literally anything from network requests to executing arbitrary binaries. This isn't deemed a security issue as it is "by design". Unfortunately, this "by design" virtue relies on trust among developers within the community. When trust is broken by some incidents, even just once, the community has no choice but to intensively review build scripts in their dependencies.
Although there are collaborative code review tools like cargo-vet and cargo-crev to help build trust,
comprehensive review is still impractical,
especially considering the pace of new version releases.
In Rust, the `unsafe` keyword helps reviewers identify code sections that require extra scrutiny.
However, an unsandboxed build script is effectively an enormous `unsafe` block, making comprehensive review impractical for the community.
Besides the security and trust issues, in an unsandboxed build script, random network or file system access may occur and fail. These kinds of "side effects" are notoriously non-deterministic, and usually cause retries and rebuilds in build pipelines. Because the build is not deterministic, reproducibility cannot be easily achieved, making programs harder to trace and debug.
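For context, here is a sketch of the kind of build script behavior at issue; nothing in today's model stops `build.rs` from reading arbitrary files or spawning arbitrary processes (the specific commands below are illustrative only):

```rust
// build.rs
use std::process::Command;

fn main() {
    // Arbitrary file system access, well outside the package directory.
    let _ = std::fs::read_to_string("/etc/hostname");

    // Arbitrary process execution (and, through it, network access).
    let _ = Command::new("curl").arg("https://example.com").status();

    // The only officially structured part: instructions back to Cargo.
    println!("cargo:rerun-if-changed=build.rs");
}
```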
There is one 2024 GSoC project "Sandboxed and Deterministic Proc Macro using Wasm" experimenting with the possibility of using WebAssembly to sandbox procedural macros. While build scripts and proc-macros are different concepts at different levels, they share the same flaw — arbitrary code execution. Given that we already have experiments on the proc-macros side, it's better we start some groundwork on build scripts in parallel, and discuss the potential common interface for Cargo to configure them.
The next 6 months
- Look at prior art in this domain, especially for potential blockers and challenges.
- Prototype on sandboxing build scripts. Currently looking at WebAssembly System Interface (WASI) and Cackle.
- Provide a way to opt in to sandboxed build scripts for Cargo packages, and design a configurable interface to grant permissions to each crate.
- Based on the results of those experiments, consider whether the implementation should be a third-party Cargo plugin first, or make it into Cargo as an unstable feature (with a proper RFC).
The "shiny future" we are working towards
These could become future goals if this one succeeds:
- The sandboxed build script feature will be opt-in at first when stabilized. By the next Edition, sandboxed build scripts will be on by default, hardening supply chain security.
- Cargo users only need to learn one interface for both sandboxed proc-macros and build scripts. The configuration for build scripts will also cover the needs of sandboxed proc-macros.
- Crates.io and the `cargo info` command display the permission requirements of a crate, helping developers choose packages based on different security needs.
- The runtime of the sandbox environment is swappable, enabling potential support for remote execution without waiting for a first-party solution. It also opens a door to hermetic builds.
Design axioms
In order of importance, a sandboxed build script feature should provide the following properties:
- Restrict runtime file system and network access, as well as process spawning, unless allowed explicitly.
- Cross-platform support. Cargo is guaranteed to work on tier 1 platforms. This is not a must-have for experiments, but is a requirement for stabilization.
- Ensure `-sys` crates can be built within the sandbox. Probing and building from system libraries is the major use case of build scripts. We should support it as a first-class citizen.
- Declarative configuration interface to grant permissions to packages. A declarative configuration helps us analyze granted permissions more easily, without running the actual code.
- Don't block the build when the sandboxed feature is off. The crates.io ecosystem shouldn't rely on the interface to successfully build things. That would hurt the integration with other external build systems. It should work as if it is an extra layer of security scanning.
- Room for supporting different sandbox runtimes and strategies. This is for easier integration into external build systems, as well as faster iteration for experimenting with new ideas.
Currently out of scope:
- Terminal user interface.
- Pre-built build script binaries.
- Hermetic builds, though this extension should be considered.
- Support for all tier 2 with-host-tools platforms. As an experiment, we follow what the chosen sandbox runtime provides us.
- On-par build times. Build times are expected to be impacted because build script artifacts will be built for the sandbox runtime. This prevents the optimization where, when the "host" and "target" platforms are the same, Cargo shares artifacts between build scripts and applications.
Ownership and team asks
Owner: Weihang Lo, though I also welcome someone else to take ownership of it. I would be happy to support them as a Cargo maintainer.
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Design | Weihang Lo | (or mentee) |
Discussion and moral support | cargo | |
Security reviews | ||
Standard reviews | cargo | |
Collaboration with GSoC proc-macro project | compiler | |
Summary of experiments or RFC | Weihang Lo | (or mentee) |
For security reviews, I'd like assistance from experts in security domains. Ideally those experts would be from within the community, such as the Security Response or Secure Code working groups. However, I don't want to pressure that goal since comprehensive security reviews are extremely time-consuming. Outside experts are also welcome.
Outputs and milestones
Outputs
As the work here is mostly experiments and prototyping, based on the results, the outputs could be:
- A report about why these methods have failed to provide a proper sandboxed environment for build scripts in Cargo, plus some other areas worth exploring in the future.
- A configurable sandboxed environment for build scripts landed as an unstable feature in Cargo, or provided via crates.io as a third-party plugin for faster experimenting iteration.
- An RFC proposing a sandboxed build script design to the Rust project.
Milestones
Milestone | Expected Date |
---|---|
Summarize the prior art for sandbox strategies | 2024-07 |
Prototype a basic sandboxed build script implementation | 2024-08 |
Draft a configurable interface in Cargo.toml | 2024-10 |
Integrate the configurable interface with the prototype | 2024-12 |
Ask some security experts for reviewing the design | TBD |
Write an RFC summary for the entire prototyping process | TBD |
Frequently asked questions
Q: Why can't build scripts be removed?
The Rust crates.io ecosystem depends heavily on build scripts. Some foundational packages use build scripts for essential tasks, such as linking to C dependencies. If we shut down this option without providing an alternative, half of the ecosystem would collapse.
That is to say, build scripts are a feature included in the stability guarantee that we cannot simply remove, just like the results of the dependency resolution Cargo produces.
Q: Why are community tools like `cargo-vet` and `cargo-crev` not good enough?
They are all excellent tools. A sandboxed build script isn't meant to replace any of them. However, as aforementioned, those tools still require intensive human reviews, which are difficult to achieve (cargo-crev has 103 reviewers and 1910 reviews at the time of writing).
A sandboxed build script is a supplement to them. Crate reviews are easier to complete when crates explicitly specify the permissions granted.
Q: What is the difference between sandboxed builds and deterministic builds?
Sandboxing is a strategy that isolates the running process from other resources on a system, such as file system access.
A deterministic build always produces the same output given the same input.
Sandboxing is not a requirement for deterministic builds, and vice versa. However, sandboxing can help deterministic builds because hidden dependencies often come from the system via network or file system access.
Q: Why do we need to build our own solution versus other existing solutions?
External build systems are aware of this situation. For example, Bazel provides a sandboxing feature. Nix also has a sandbox build via a different approach. Yet, migrating to existing solutions will be a long-term effort for the entire community. It requires extensive exploration and discussions from both social and technical aspects. At this moment, I don't think the Cargo team and the Rust community are ready for a migration.
Expose experimental LLVM features for automatic differentiation and GPU offloading
Metadata | |
---|---|
Owner(s) | Manuel Drehwald |
Teams | lang, compiler |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#109 |
Other tracking issues | rust-lang/rust#124509 |
Summary
Expose experimental LLVM features for automatic differentiation and GPU offloading.
Motivation
Scientific computing, high performance computing (HPC), and machine learning (ML) all share the interesting challenge that they each, to different degrees, care about highly efficient library and algorithm implementations, but that these libraries and algorithms are not always used by people with deep experience in computer science. Rust is in a unique position because ownership, lifetimes, and the strong type system can prevent many bugs. At the same time, strong alias information allows compelling performance optimizations in these fields, with performance gains well beyond those otherwise seen when comparing C++ with Rust. This is due to how strongly automatic differentiation and GPU offloading benefit from aliasing information.
The status quo
Thanks to PyO3, Rust has excellent interoperability with Python. C++, by contrast, has a relatively weak Python interop story, which can lead Python libraries to use slower C libraries as a backend instead, just to ease bundling and integration. Fortran is mostly used in legacy places and hardly used for new projects.
As a solution, many researchers try to limit themselves to features offered by compilers and libraries built on top of Python, like JAX, PyTorch, or, more recently, Mojo. Rust has a lot of features which make it more suitable than those languages for developing a fast and reliable backend for performance-critical software. However, it lacks two major features which developers now expect. One is high-performance automatic differentiation. The other is easy use of GPU resources.
Almost every language has some way of calling hand-written CUDA/ROCm/Sycl kernels, but the interesting feature of languages like Julia, or of libraries like JAX, is that they offer users the ability to write kernels in the language they already know, or a subset of it, without having to learn anything new. Minor performance penalties are not that critical in such cases, if the alternative is a CPU-only solution. Otherwise worthwhile projects such as Rust-CUDA end up going unmaintained due to being too much effort to maintain outside of LLVM or the Rust project.
The next six months
We are requesting support from the Rust project for continued experimentation:
- Merge the `#[autodiff]` fork (see the illustrative sketch after this list).
- Expose the experimental batching feature of Enzyme, preferably by a new contributor.
- Merge an MVP `#[offloading]` fork which is able to run simple functions using rayon parallelism on a GPU or TPU, showing a speed-up.
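Purely as an illustration of the shape of the `#[autodiff]` experiment — the attribute arguments, the generated function's name, and its signature below are hypothetical, not the finalized design:

```rust
// Hypothetical: annotate a function and ask the compiler (via Enzyme) to
// generate a reverse-mode derivative named `d_square`.
#[autodiff(d_square, Reverse, Active, Active)]
fn square(x: f64) -> f64 {
    x * x
}

fn main() {
    // Hypothetical generated API: returns the primal value and accumulates
    // the derivative (here 2 * x) into `dx`, scaled by the seed 1.0.
    let mut dx = 0.0;
    let y = d_square(3.0, &mut dx, 1.0);
    println!("square(3.0) = {y}, d/dx = {dx}");
}
```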
The "shiny future" we are working towards
The purpose of this goal is to enable continued experimentation with the underlying LLVM functionality. The eventual goal of this experimentation is that all three proposed features (batching, autodiff, offloading) can be combined and work nicely together. The hope is that we will have state-of-the-art libraries like faer to cover linear algebra, and that we will start to see more and more libraries in other languages using Rust with these features as their backend. Cases which don't require interactive exploration will also become more popular in pure Rust.
Caveats to this future
There is not yet consensus amongst the relevant Rust teams as to how and/or whether this functionality should be exposed on stable. Some concerns that continued experimentation will hopefully help to resolve:
- How effective and general purpose is this functionality?
- How complex is this functionality to support, and how does that trade off with the value it provides? What is the right point on the spectrum of tradeoffs?
- Can code using these Rust features still compile and run on backends other than LLVM, and on all supported targets? If not, how should we manage the backend-specific nature of it?
- Can we avoid tying Rust features too closely to the specific properties of any backend or target, such that we're confident these features can remain stable over decades of future landscape changes?
- Can we fully implement every feature of the provided functionality (as more than a no-op) on fully open systems, despite the heavily proprietary nature of parts of the GPU and accelerator landscape?
Design axioms
Offloading
- We try to provide a safe, simple and opaque offloading interface.
- The "unit" of offloading is a function.
- We try to not expose explicit data movement if Rust's ownership model gives us enough information.
- Users can offload functions which contain parallel CPU code, but do not have final control over how the parallelism will be translated to co-processors.
- We accept that hand-written CUDA/ROCm/etc. kernels might be faster, but actively try to reduce differences.
- We accept that we might need to provide additional control to the user to guide parallelism, if performance differences remain unacceptably large.
- Offloaded code might not return the exact same values as code executed on the CPU. We will work with t-opsem to develop clear rules.
Autodiff
- We try to provide a fast autodiff interface which supports most autodiff features relevant for scientific computing.
- The "unit" of autodiff is a function.
- We acknowledge our responsibility since user-implemented autodiff without compiler knowledge might struggle to cover gaps in our features.
- We have a fast, low level, solution with further optimization opportunities, but need to improve safety and usability (i.e. provide better high level interfaces).
- We need to teach users more about autodiff "pitfalls" and provide guides on how to handle them. See, e.g. https://arxiv.org/abs/2305.07546.
- We do not support differentiating inline assembly. Users are expected to write custom derivatives in such cases.
- We might refuse to expose certain features if they are too hard to use correctly and provide little gains (e.g. derivatives with respect to global vars).
Ownership and team asks
Owner: Manuel Drehwald
Manuel S. Drehwald is working 5 days per week on this, sponsored by LLNL and the University of Toronto (UofT). He has a background in HPC and worked on a Rust compiler fork, as well as an LLVM-based autodiff tool for the last 3 years during his undergrad. He is now in a research-based master's degree program. Supervision and discussion on the LLVM side will happen with Johannes Doerfert and Tom Scogland.
Resources: Domain and CI for the autodiff work is being provided by MIT. This might be moved to the LLVM org later this year. Hardware for benchmarks is being provided by LLNL and UofT. CI for the offloading work will be provided by LLNL or LLVM (see below).
Minimal "smoke test" reviews will be needed from the compiler-team. The Rust language changes at this stage are expected to be a minimal wrapper around the underlying LLVM functionality and the compiler team need only vet that the feature will not hinder usability for ordinary Rust users or cause undue burden on the compiler architecture itself. There is no requirement to vet the quality or usability of the design.
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Development | Manuel Drehwald | |
Lang-team experiment | lang | (approved) |
Standard reviews | compiler |
Outputs and milestones
Outputs
- An `#[offload]` rustc-builtin-macro which makes a function definition known to the LLVM offloading backend.
  - Made a PR to enable LLVM's offloading runtime backend.
  - Merge the offload macro frontend.
  - Merge the offload middle-end.
- An `offload!([GPU1, GPU2, TPU1], foo(x, y, z));` macro (placeholder name) which will execute function `foo` on the specified devices.
- An `#[autodiff]` rustc-builtin-macro which differentiates a given function.
  - Merge the autodiff macro frontend.
  - Merge the autodiff Enzyme backend.
  - Merge the autodiff middle-end.
- A `#[batching]` rustc-builtin-macro which fuses N function calls into one call, enabling better vectorization.
Milestones
- The first offloading step is the automatic copying of a slice or vector of floats to a device and back.
- The second offloading step is the automatic translation of a (default) `Clone` implementation to create a host-to-device and device-to-host copy implementation for user types.
- The third offloading step is to run some embarrassingly parallel Rust code (e.g. scalar times vector) on the GPU.
- Fourth, we have examples of how rayon code runs faster on a co-processor using offloading.
- Stretch goal: combining autodiff and offloading in one example that runs differentiated code on a GPU.
Why do you implement these features only on the LLVM backend?
Performance-wise, we have LLVM and GCC as performant backends. Modularity-wise, we have LLVM and especially Cranelift being nice to modify. It therefore seems reasonable for LLVM to be the first backend to support new features in this field. The offloading support in particular should be implementable by other compiler backends, given pre-existing work like OpenMP offloading and WebGPU.
Do these changes have to happen in the compiler?
Yes, given how Rust works today.
However, both features could be implemented in user space if the Rust compiler someday supported reflection. In that case we could ask the compiler for the optimized backend IR for a given function. We would then need to use either the AD or offloading abilities of the LLVM library to modify the IR, generating a new function. The user would then be able to call that newly generated function. This would require some discussion on how we can have crates in the ecosystem that work with various LLVM versions, since crates are usually expected to have an MSRV, but the LLVM (and likewise GCC/Cranelift) backend will have breaking changes, unlike Rust.
Batching?
This is offered by all autodiff tools. JAX has an extra command for it, whereas Enzyme (the autodiff backend) combines batching with autodiff. We might want to split these since both have value on their own.
Some libraries also offer array-of-struct vs struct-of-array features which are related but often have limited usability or performance when implemented in userspace.
Writing a GPU backend in 6 months sounds tough...
True. But similar to the autodiff work, we're exposing something that already exists in the backend.
Rust, Julia, C++, Carbon, Fortran, Chapel, Haskell, Bend, Python, etc. should not all have to write their own GPU or autodiff backends. Most of these already share compiler optimizations through LLVM or GCC, so let's also share this. Of course, we should still push to use our Rust-specific magic.
Rust Specific Magic?
TODO
How about Safety?
We want all these features to be safe by default, and are happy to not expose some features if the gain is too small for the safety risk. As an example, Enzyme can compute the derivative with respect to a global. That's probably too niche, and could be discouraged (and unsafe) for Rust.
Extend pubgrub to match cargo's dependency resolution
Metadata | |
---|---|
Owner(s) | Jacob Finkelman |
Teams | cargo |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#110 |
Summary
Implement a standalone library based on pubgrub that models cargo's dependency resolution, and validate its accuracy by testing against crates found on crates.io. This lays the groundwork for improved cargo error messages, extensions for hotly requested features (e.g., better MSRV support, CVE-aware resolution, etc.), and support for a richer ecosystem of cargo extensions.
Motivation
Cargo's dependency resolver is brittle and under-tested. Disentangling implementation details, performance optimizations, and user-facing functionality will require a rewrite. Making the resolver a standalone modular library will make it easier to test and maintain.
The status quo
Big changes are required in cargo's resolver: there is lots of new functionality that will require changes to the resolver and the existing resolver's error messages are terrible. Cargo's dependency resolver solves the NP-Hard problem of taking a list of direct dependencies and an index of all available crates and returning an exact list of versions that should be built. This functionality is exposed in cargo's CLI interface as generating/updating a lock file. Nonetheless, any change to the current resolver in situ is extremely treacherous. Because the problem is NP-Hard it is not easy to tell what code changes break load-bearing performance or correctness guarantees. It is difficult to abstract and separate the existing resolver from the code base, because the current resolver relies on concrete datatypes from other modules in cargo to determine if a set of versions have any of the many ways two crate versions can be incompatible.
The next six months
Develop a standalone library for doing dependency resolution with all the functionality already supported by cargo's resolver. Extensively test this library to ensure maximum compatibility with existing behavior. Prepare for experimental use of this library inside cargo.
The "shiny future" we are working towards
Eventually we should replace the existing entangled resolver in cargo with one based on separately maintained libraries. These libraries would provide simpler and isolated testing environments to ensure that correctness is maintained. Cargo plugins that want to control or understand what lock file cargo uses can interact with these libraries directly without interacting with the rest of cargo's internals.
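To make the "standalone library" framing concrete, here is a minimal sketch of driving pubgrub directly, assuming the `pubgrub` 0.2.x API (`OfflineDependencyProvider`, `NumberVersion`, `Range::any`); newer releases have reshaped these types, and the cargo-compatible layer proposed here would sit on top of an interface like this:

```rust
use pubgrub::range::Range;
use pubgrub::solver::{resolve, OfflineDependencyProvider};
use pubgrub::version::NumberVersion;

fn main() {
    // An in-memory "registry": root 1 depends on menu and icons, and so on.
    let mut provider = OfflineDependencyProvider::<&str, NumberVersion>::new();
    provider.add_dependencies("root", 1, vec![("menu", Range::any()), ("icons", Range::any())]);
    provider.add_dependencies("menu", 1, vec![("icons", Range::any())]);
    provider.add_dependencies("icons", 1, vec![]);

    // Ask pubgrub for a set of versions satisfying all constraints.
    match resolve(&provider, "root", 1) {
        Ok(solution) => println!("selected versions: {solution:?}"),
        Err(err) => println!("no solution: {err:?}"),
    }
}
```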
Design axioms
- Correct: The new resolver must perform dependency resolution correctly, which generally means matching the behavior of the existing resolver, and switching to it must not break Rust projects.
- Complete output: The output from the new resolver should be demonstrably correct. There should be enough information associated with the output to determine that it made the right decision.
- Modular: There should be a stack of abstractions, each one of which can be understood, tested, and improved on its own without requiring complete knowledge of the entire stack from the smallest implementation details to the largest overall use cases.
- Fast: The resolver can be a slow part of people's workflow. Overall performance must be a high priority and a focus.
Ownership and team asks
Owner: Jacob Finkelman will own and lead the effort.
I (Jacob Finkelman) will be working full time on this effort. I am a member of the Cargo Team and a maintainer of pubgrub-rs.
Integrating the new resolver into Cargo and reaching the shiny future will require extensive collaboration and review from the Cargo Team. However, the next milestones involve independent work exhaustively searching for differences in behavior between the new and old resolvers and fixing them. So only occasional consultation-level conversations will be needed during this proposal.
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Implementation work on pubgrub library | Eh2406 | 
Discussion and moral support | cargo |
Outputs
Standalone crates for independent components of cargo's resolver. We have already developed https://github.com/pubgrub-rs/pubgrub for solving the core of dependency resolution and https://github.com/pubgrub-rs/semver-pubgrub for doing mathematical set operations on SemVer requirements. The shiny future will involve several more crates, although their exact borders have not yet been determined. Eventually we will also be delivering a `-Z pubgrub` flag for testing the new resolver in cargo itself.
Milestones
For all crate versions on crates.io the two resolvers agree about whether there is a solution.
Build a tool that will look at the index from crates.io and for each version of each crate, make a resolution problem out of resolving the dependencies. This tool will save off an independent test case for each time pubgrub and cargo disagree about whether there is a solution. This will not check if the resulting lock files are the same or even compatible, just whether they agree that a lock file is possible. Even this crude comparison will find many bugs in how the problem is presented to pubgrub. This is known for sure, because this milestone has already been achieved.
For all crate versions on crates.io the two resolvers accept the other one's solution.
The tool from the previous milestone will be extended to make sure that the lock file generated by pubgrub can be accepted by cargo's resolver and vice versa. How long will this take? What will this find? No way to know. To quote FractalFir: "If I knew where / how many bugs there are, I would have fixed them already. So, providing any concrete timeline is difficult."
For all crate versions on crates.io the performance is acceptable.
There are some crates where pubgrub takes a long time to do resolution, and many more where pubgrub takes longer than cargo's existing resolver. Investigate each of these cases and figure out if performance can be improved either by improvements to the underlying pubgrub algorithm or the way the problem is presented to pubgrub.
Frequently asked questions
If the existing resolver defines correct behavior then how does a rewrite help?
Unless we find critical bugs with the existing resolver, the new resolver and cargo's resolver should be 100% compatible. This means that any observable behavior from the existing resolver will need to be matched in the new resolver. The benefits of this work will come not from changes in behavior, but from a more flexible, reusable, testable, and maintainable code base. For example: the base `pubgrub` crate solves a simpler version of the dependency resolution problem. This allows for a more structured internal algorithm which enables complete error messages. It's also general enough to be used not only in cargo but also in other package managers. We already have contributions from the maintainers of `uv`, who are using the library in production.
Implement "merged doctests" to save doctest time
Metadata | |
---|---|
Owner(s) | Guillaume Gomez |
Teams | rustdoc |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#111 |
Motivation
Most of the time spent running doctests is spent compiling them. Merging doctests and compiling them together greatly reduces the overall time required.
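For context (hypothetical crate and function names): every fenced example in a doc comment, like the one below, is currently compiled and linked as its own separate test executable, so a crate with hundreds of doctests pays for hundreds of compilations; the merged approach compiles many of them together in a single crate.

````rust
/// Adds one to the given number.
///
/// ```
/// assert_eq!(my_crate::add_one(1), 2);
/// ```
pub fn add_one(x: i32) -> i32 {
    x + 1
}
````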
The status quo
The next six months
- Finish reviewing the pull request
- Run crater with the feature enabled by default.
- Merge it.
The "shiny future" we are working towards
Merged doctests.
Design axioms
N/A
Ownership and team asks
Owner: Guillaume Gomez
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Implementation | Guillaume Gomez | |
Standard reviews | rustdoc |
Frequently asked questions
None yet.
Make Rustdoc Search easier to learn
Metadata | |
---|---|
Owner(s) | Michael Howell |
Teams | rustdoc, rustdoc-frontend |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#112 |
Summary
To make rustdoc's search engine more useful:
- Respond to some existing feedback.
- Write blog and forum posts to advertise these new features to the larger community, and seek out feedback to continue the progress.
Motivation
Rustdoc Search is going to be some people's primary resource for finding things. There are a few reasons for this:
- It's available. Away from the computer and trying to help someone else out from a smartphone? Evaluating Rust before you install anything? Rustdoc outputs web pages, so you can still use it.
- If you have a pretty good idea of what you're looking for, it's way better than a general search engine. It offers structured features based on Rust, like type-driven search and crate filtering, that aren't available in DuckDuckGo because it doesn't know about them.
The status quo
Unfortunately, while most people know it exists, they don't know about most of what it can do. A lot of people literally ask "Does Rust have anything like Hoogle?", and they don't know that it's already there. We've had other people who didn't see the tab bar, and it doesn't seem like people look under the ? button, either.
Part of the problem is that they just never try.
Ted Scharff (user): I'd never used the search bar inside the docs before
Ted Scharff (user): It's because usually the searches inside all of the sites are pretty broken & useless
Ted Scharff (user): but this site is cool. docs are very well written and search is fast, concise...
Mostly, we've got a discoverability problem.
The next 6 months
- Implement a feature to show type signatures in type-driven search results, so it's easier to figure out why a result came up https://github.com/rust-lang/rust/pull/124544.
- When unintuitive results come up, respond by either changing the algorithm or changing the way it's presented to help it make sense.
- Do we need to do something to make Levenshtein matches more obvious?
- Seek out user feedback on Internals.
Popular stuff should just be made to work, and what's already there can be made more obvious with education and good UI design.
The "shiny future" we are working towards
Rustdoc Search should be a quick, natural way to find things in your dependencies.
Design axioms
The goal is to reach this point without trying to be a better Google than Google is. Rustdoc Search should focus on what it can do that other search engines can't:
- Rustdoc Search is not magic, and it doesn't have to be.
- A single crate, or even a single dependency tree, isn't that big. Extremely fancy techniques—beyond simple database sharding and data structures like bloom filters or tries—aren't needed.
- If you've already added a crate as a dependency or opened its page on docs.rs, there's no point in trying to exploit it with SEO spam (the crate is already on the other side of the airtight hatchway).
- Rustdoc is completely open source. There are no secret anti-spam filters. Because it only searches a limited set of pre-screened crates (usually just one), it will never need them.
- Rustdoc knows the Rust language. It can, and should, offer structured search to build on that.
Ownership and team asks
Owner: Michael Howell
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Discussion and moral support | rustdoc | |
Improve on any discovered weaknesses | ||
↳ Implementation: show type signature in SERP | Michael Howell | |
↳ Implementation: tweak search algo | Michael Howell | |
↳ Standard reviews | rustdoc-frontend | |
↳ Design meeting | rustdoc-frontend | |
↳ FCP review | rustdoc-frontend | |
Feedback and testing | ||
↳ Inside Rust blog post inviting feedback |
Definitions
Definitions for terms used above:
- Discussion and moral support is the lowest level offering, basically committing the team to nothing but good vibes and general support for this endeavor.
- Author RFC and Implementation means actually writing the code, document, whatever.
- Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
- RFC decisions means reviewing an RFC and deciding whether to accept.
- Org decisions means reaching a decision on an organizational or policy matter.
- Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
- Stabilizations means reviewing a stabilization and report and deciding whether to stabilize.
- Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
- Prioritized nominations refers to prioritized lang-team response to nominated issues, with the expectation that there will be some response from the next weekly triage meeting.
- Dedicated review means identifying an individual (or group of individuals) who will review the changes, as they're expected to require significant context.
- Other kinds of decisions:
- Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
- Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
- Library API Change Proposal (ACP) describes a change to the standard library.
Frequently asked questions
Search over all of the crates when?
Docs.rs can do it if they want, but Michael Howell isn't signing up for a full-time job dealing with SEO bad actors.
Full-text search when?
That path is pretty rough. Bugs, enormous size, and contentious decisions on how to handle synonyms abound.
Next-generation trait solver
Metadata | |
---|---|
Owner(s) | lcnr |
Teams | types |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#113 |
Summary
In the next 6 months we plan to extend the next-generation trait solver as follows:
- stabilize the use of the next-generation trait solver in coherence checking
- use the new implementation in rustdoc and lints where applicable
- share the solver with rust-analyzer
- successfully bootstrap the compiler when exclusively using the new implementation and run crater
Motivation
The existing trait system implementation has many bugs, inefficiencies, and rough corners which require major changes to its implementation. To fix existing unsoundness issues, accommodate future improvements, and improve compile times, we are reimplementing the core trait solver to replace the existing implementations of `select` and `fulfill`.
The status quo
There are multiple type system unsoundnesses blocked on the next-generation trait solver: project board. Desirable features such as coinductive trait semantics and perfect derive, where-bounds on binders, and better handling of higher-ranked bounds and types are also stalled due to shortcomings of the existing implementation.
Fixing these issues in the existing implementation is prohibitively difficult as the required changes are interconnected and require major changes to the underlying structure of the trait solver. The Types Team therefore decided to rewrite the trait solver in-tree, and has been working on it since EOY 2022.
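To make one of the stalled items above concrete, "perfect derive" refers to deriving bounds from a type's fields rather than from its generic parameters. The following is a minimal illustrative sketch (not taken from the goal itself) of the current limitation; it is expected to be rejected by today's compiler:

```rust
use std::rc::Rc;

struct NotClone;

// Today, `#[derive(Clone)]` generates `impl<T: Clone> Clone for Wrapper<T>`,
// even though cloning a `Wrapper<T>` only clones the `Rc<T>`, which is
// `Clone` for every `T`. "Perfect derive" would bound on the field types instead.
#[derive(Clone)]
struct Wrapper<T>(Rc<T>);

fn needs_clone<T: Clone>(_: T) {}

fn demo(w: Wrapper<NotClone>) {
    needs_clone(w); // error today: `NotClone: Clone` is not satisfied
}
```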
The next six months
- stabilize the use of the next-generation trait solver in coherence checking
- use the new implementation in rustdoc and lints where applicable
- share the solver with rust-analyzer
- successfully bootstrap the compiler when exclusively using the new implementation and run crater
The "shiny future" we are working towards
- we are able to remove the existing trait solver implementation and significantly clean up the type system in general, e.g. removing most `normalize` calls in callers by handling unnormalized types in the trait system
- all remaining type system unsoundnesses are fixed
- many future type system improvements are unblocked and get implemented
- the type system is more performant, resulting in better compile times
Design axioms
In order of importance, the next-generation trait solver should be:
- sound: the new trait solver is sound and its design enables us to fix all known type system unsoundnesses
- backwards-compatible: the breakage caused by the switch to the new solver should be minimal
- maintainable: the implementation is maintainable, extensible, and approachable to new contributors
- performant: the implementation is efficient, improving compile-times
Ownership and team asks
Owner: lcnr
Add'l implementation work: Michael Goulet
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Discussion and moral support | types | |
Stabilize next-generation solver in coherence | ||
↳ Implementation | lcnr, Michael Goulet | |
↳ Standard reviews | types | |
↳ Standard reviews | rust-analyzer | |
↳ Stabilization decision | types | |
Support next-generation solver in rust-analyzer | ||
↳ Implementation (library side) | owner and others | |
↳ Implementation (rust-analyzer side) | TBD | |
↳ Standard reviews | types | |
↳ Standard reviews | rust-analyzer |
Support needed from the project
- Types team
- review design decisions
- provide technical feedback and suggestions
- rust-analyzer team
- contribute to integration in Rust Analyzer
- provide technical feedback to the design of the API
Outputs and milestones
See next few steps :3
Outputs
Milestones
Frequently asked questions
None yet.
Optimizing Clippy & linting
(a.k.a The Clippy Performance Project)
Metadata | |
---|---|
Owner(s) | Alejandra González |
Teams | clippy |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#114 |
Summary
This is the formalization and documentation of the Clippy Performance Project, a project first talked about on Zulip, July 2023. As the project consists of several points and is ever-changing, this document also has a dynamic structure and the team can add points.
In short, this is an effort to optimize Clippy and Rust's linting infrastructure with the aim of making Clippy faster, both in CI/CD pipelines and on developers' machines.
Motivation
Clippy can take up to 2.5 times as long as a normal `cargo check`, and it doesn't need to! Taking so long is expensive both in development time and in real money.
The status quo
Based on some informal feedback [polls][poll-mastodon], it's clear that Clippy is used in lots of different contexts, both in developers' IDEs and outside them.

The usage in IDEs is not as smooth as one may desire or expect when compared to prior art like [Prettier][prettier], [Ruff][ruff], or other tools in the Rust ecosystem such as rustfmt and rust-analyzer.

The other big use-case is as a test before committing or in CI. Optimizing Clippy for performance would reduce the cost of these tests.

On GitHub Actions, this excessive time can equal the cost of running `cargo check` on a Linux x64 32-core machine instead of a Linux x64 2-core machine: a 3.3x cost increase.
The next 6 months
In order to achieve a better performance we want to:
- Keep working on, and eventually merge, rust#125116
- Improve checking of proc-macros & expansions, maybe by precomputing expanded spans or memoizing the checking functions.
- Optimize checking for MSRVs and `#[clippy::msrv]` attributes (probably using static values, precomputing MSRV spans?); see the sketch below for what these attributes look like.
- Migrate applicable lints to use incremental compilation
Apart from these 4 clear goals, any open issue, open PR, or merged PR with the `performance-project` label is a great benefit.
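As context for the MSRV item above, Clippy reads a minimum supported Rust version either from `clippy.toml` or from an attribute in the source, and many lints consult it. A minimal sketch (attribute syntax as currently documented by Clippy; the version string is illustrative):

```rust
// Declaring an MSRV crate-wide; lints whose suggestions require a newer
// compiler consult this value, which is one of the code paths this goal
// aims to make cheaper.
#![clippy::msrv = "1.64.0"]

fn main() {
    // Lints gated on language features newer than the declared MSRV stay silent here.
}
```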
The "shiny future" we are working towards
The possible outcome would be a system that can be run on-save without being a hassle to the developer, and that has the minimum possible overhead over `cargo check` (which would itself benefit from a subset of these optimizations as a side effect).
A developer shouldn't have to get a high-end machine to run a compiler swiftly; and a server should not spend more valuable seconds on linting than strictly necessary.
Ownership and team asks
Owner: Alejandra González
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Optimization work | ||
↳ Implementation | Alejandra González, Alex Macleod | |
↳ Standard reviews | clippy |
Frequently Asked Questions
[poll-mastodon]: https://tech.lgbt/Alejandra González/112747808297589676
[prettier]: https://github.com/prettier/prettier
[ruff]: https://github.com/astral-sh/ruff
Patterns of empty types
Metadata | |
---|---|
Owner(s) | @Nadrieril |
Teams | lang |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#115 |
Summary
Introduce an RFC for never patterns or other solutions for patterns involving uninhabited types.
Motivation
The story about pattern-matching is incomplete with regards to empty types: users sometimes have to write `unreachable!()` for cases they know to be impossible. This is a papercut that we can solve, and doing so would make for a more consistent pattern-matching story.

This is particularly salient as the never type `!` is planned to be stabilized in edition 2024.
The status quo
Empty types are used commonly to indicate cases that can't happen, e.g. in error-generic interfaces:
```rust
impl TryFrom<X> for Y {
    type Error = Infallible;
    ...
}

// or
impl SomeAST {
    pub fn map_nodes<E>(self, f: impl FnMut(Node) -> Result<Node, E>) -> Result<Self, E> {
        ...
    }
}

// used in the infallible case like:
let Ok(new_ast) = ast.map_nodes::<!>(|node| node) else { unreachable!() };
// or:
let new_ast = match ast.map_nodes::<!>(|node| node) {
    Ok(new_ast) => new_ast,
    Err(never) => match never {},
};
```
and conditional compilation:
```rust
pub struct PoisonError<T> {
    guard: T,
    #[cfg(not(panic = "unwind"))]
    _never: !,
}

pub enum TryLockError<T> {
    Poisoned(PoisonError<T>),
    WouldBlock,
}
```
For the most part, pattern-matching today treats empty types as if they were non-empty. E.g. in the above example, both the `else { unreachable!() }` and the `Err` branch are required.

The unstable `exhaustive_patterns` feature allows all patterns of empty type to be omitted. It has never been stabilized because it goes against design axiom n.1, "Pattern semantics are predictable", when interacting with possibly-uninitialized data.
The next six months
The first step is nearly complete: the `min_exhaustive_patterns` feature is in FCP and about to be stabilized. This covers a large number of use-cases.

After `min_exhaustive_patterns`, there remains the case of empty types behind references, pointers, and union fields. The current proposal for these is `never_patterns`; the next steps are to submit the RFC and then finish the implementation according to the RFC outcome. A sketch of the proposed syntax follows.
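As a rough illustration of `never_patterns` (the syntax below follows the current proposal and is subject to the RFC outcome; it may not compile as-is even on today's nightly):

```rust
#![feature(never_type, never_patterns)]

// A `!` pattern marks an arm as impossible, so no arm body is needed,
// even for an empty type sitting behind a reference.
fn unwrap_infallible<T>(res: &Result<T, !>) -> &T {
    match res {
        Ok(x) => x,
        Err(!),
    }
}
```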
The "shiny future" we are working towards
The ideal endpoint is that users never have to write code to handle a pattern of an empty type.
Design axioms
- Pattern semantics are predictable: users should be able to tell what data a pattern touches by looking at it. This is crucial when matching on partially-initialized data.
- Impossible cases can be omitted: users shouldn't have to write code for cases that are statically impossible.
Ownership and team asks
Owner: @Nadrieril
I (@Nadrieril) am putting forward my own contribution for driving this forward, both on the RFC and implementation sides. I am an experienced compiler contributor and have been driving this forward already for several months.
- I expect to be authoring one RFC, on never patterns (unless it gets rejected and we need
a different approach).
- The feature may require one design meeting.
- Implementation work is 80% done, which leaves about 80% more to do. This will require reviews from the compiler team, but nothing out of the ordinary.
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Author RFC | @Nadrieril | |
Implementation | @Nadrieril | |
Standard reviews | compiler | |
Discussion and moral support | lang | |
Author stabilization report | Goal owner |
Note:
- RFC decisions, Design Meetings, and Stabilization decisions were intentionally not included in the above list of asks. The lang team is not sure it can commit to completing those reviews on a reasonable timeline.
Frequently asked questions
Provided reasons for yanked crates
Metadata | |
---|---|
Owner(s) | Rustin |
Teams | crates-io, cargo |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#101 |
Summary
Over the next 6 months, we will add support to the registry yank API for providing a reason when a crate is yanked. This reason can then be displayed to users. After this feature has been up and running for a while, we'll open it up to Cargo to support filling in the reason for yanking.
Motivation
When a crate is updated to address a critical issue—such as a fix for a soundness bug or a security vulnerability—it is beneficial to yank previous versions and prompt users to upgrade with a yank reason. Additionally, if a crate is renamed or deprecated, the yank message can provide guidance on the new recommended crate or version. This ensures that users are aware of necessary updates and can maintain the security and stability of their projects.
The status quo
We came up with this need eight years ago, but it was never implemented.
This feature has the following potential use cases:
- When a crate is fixed because it will be broken in the next version of the compiler (e.g. a soundness fix or bug fix), the previous versions can be yanked to nudge users forward.
- If a crate is fixed for a security reason, the old versions can be yanked and the new version can be suggested.
- If a crate is renamed (or perhaps deprecated) to another then the yank message can indicate what to do in that situation.
Additionally, if we can persist this information to the crates.io index, we can make it available as meta-information to other platforms, such as security platforms like RustSec.
The next 6 months
The primary goal for the next 6 months is to add support for providing a yank reason to the registry's yank API.
After that, next steps include (these can be done in many different orders):
- add support on the browser frontend for giving a reason
- add support on the cargo CLI for giving a reason
- add reason to the index
- add support on the cargo CLI for showing the reason
Design axioms
When considering this feature, we need to balance our desire for a perfect, structured yank message with a usable, easy-to-use yank message. We need to start with this feature and leave room for future extensions, but we shouldn't introduce complexity and support for all requirements from the start.
Ownership and team asks
Owner: Rustin
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Yank crates with a reason | ||
↳ Implementation | Rustin | |
↳ Standard reviews | crates-io | |
↳ Try it out in crates.io | crates-io | |
↳ Author RFC | Rustin | |
↳ Approve RFC | cargo, crates-io | |
↳ Implementation in Cargo side | Rustin | |
↳ Inside Rust blog post inviting feedback | Rustin | |
↳ Stabilization decision | cargo |
Frequently asked questions
What might we do next?
We could start with plain text messages, but in the future we could consider designing it as structured data. This way, in addition to displaying it to Cargo users, we can also make it available to more crates-related platforms for data integration and use.
Scalable Polonius support on nightly
Metadata | |
---|---|
Owner(s) | Rémy Rakic |
Teams | types |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#118 |
Summary
Implement a native rustc version of the next-generation Polonius borrow-checking algorithm, one that scales better than the previous datalog implementation.
Motivation
Polonius is an improved version of the borrow checker that resolves common limitations of the current implementation and is needed to support future patterns such as "lending iterators" (see #92985). Its model also prepares us for further improvements in the future.
Some support exists on nightly, but this older prototype has no path to stabilization due to scalability issues. We need an improved architecture and implementation to fix these issues.
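For a sense of what this unlocks, the canonical "NLL problem case #3" is a conditional return of a borrow; it is rejected by today's borrow checker but accepted under the Polonius model. The sketch below is illustrative and not taken from the goal itself:

```rust
use std::collections::HashMap;

// Rejected by the current borrow checker: the borrow from `map.get` is
// treated as live across the insert, even on the path where it is not
// returned. Polonius's location-sensitive analysis accepts this function.
fn get_or_insert(map: &mut HashMap<u32, String>) -> &String {
    if let Some(v) = map.get(&22) {
        return v;
    }
    map.insert(22, String::from("hi"));
    &map[&22]
}
```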
The status quo
The next six months
- Land polonius on nightly
The "shiny future" we are working towards
Stable support for Polonius.
Design axioms
N/A
Ownership and team asks
Owner: lqd
Other support provided by Amanda Stjerna as part of her PhD.
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Design review | Niko Matsakis | |
Implementation | Rémy Rakic, Amanda Stjerna | |
Standard reviews | types | Matthew Jasper |
Support needed from the project
We expect most support to be needed from the types team, for design, reviews, interactions with the trait solver, and so on. We expect Niko Matsakis, leading the polonius working group and design, to provide guidance and design time, and Michael Goulet and Matthew Jasper to help with reviews.
Outputs and milestones
Outputs
Nightly implementation of polonius that passes NLL problem case #3 and accepts lending iterators (#92985).
Performance should be reasonable enough that we can run the full test suite, do crater runs, and test it on CI, without significant slowdowns. We do not expect to be production-ready yet by then, and therefore the implementation would still be gated under a nightly -Z feature flag.
As our model is a superset of NLLs, we expect little to no diagnostics regressions, but improvements would probably still be needed for the new errors.
Milestones
Milestone | Expected date |
---|---|
Factoring out higher-ranked concerns from the main path | TBD |
Replace parts of the borrow checker with location-insensitive Polonius | TBD |
Location-sensitive prototype on nightly | TBD |
Verify full test suite/crater pass with location-sensitive Polonius | TBD |
Location-sensitive pass on nightly, tested on CI | TBD |
Frequently asked questions
None yet.
Stabilize cargo-script
Metadata | |
---|---|
Owner(s) | Ed Page |
Teams | cargo, lang |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#119 |
Summary
Stabilize support for "cargo script", the ability to have a single file that contains both Rust code and a Cargo.toml
.
Motivation
Being able to have a Cargo package in a single file can reduce friction in development and communication, improving bug reports, educational material, prototyping, and development of small utilities.
The status quo
Today, a Cargo package is at minimum two files (`Cargo.toml` and either `main.rs` or `lib.rs`). The `Cargo.toml` has several required fields.
To share this in a bug report, people resort to
- Creating a repo and sharing it
- A shell script that cats out to multiple files
- Manually specifying each file
- Under-specifying the reproduction case (likely the most common due to being the easiest)
To create a utility, a developer will need to run `cargo new`, update the `Cargo.toml` and `main.rs`, and decide on a strategy to run it (e.g. a shell script in the path that calls `cargo run --manifest-path ...`).
The next six months
The support is already implemented on nightly. The goal is to stabilize support. With RFC #3502 and RFC #3503 approved, the next steps are being tracked in rust-lang/cargo#12207.
At a high-level, this is
- Add support to the compiler for the frontmatter syntax
- Add support in Cargo for scripts as a "source"
- Polish
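For reference, a single-file package under the nightly implementation looks roughly like the sketch below (frontmatter syntax per RFC 3503; the exact details are what stabilization will settle):

```rust
#!/usr/bin/env cargo
---
[dependencies]
regex = "1"
---

// Run with a nightly toolchain, e.g. `cargo +nightly -Zscript file.rs`
// at the time of writing.
fn main() {
    let re = regex::Regex::new("a.c").unwrap();
    println!("{}", re.is_match("abc"));
}
```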
The "shiny future" we are working towards
Design axioms
- In the trivial case, there should be no boilerplate. The boilerplate should scale with the application's complexity.
- A script with a couple of dependencies should feel pleasant to develop without copy/pasting or scaffolding generators.
- We don't need to support everything that exists today because we have multi-file packages.
Ownership and team asks
Owner: epage
Stabilize doc_cfg
Metadata | |
---|---|
Owner(s) | Guillaume Gomez |
Teams | rustdoc |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#120 |
Motivation
The `doc_cfg` feature would allow providing more information to crate users reading the documentation.
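Concretely, the nightly feature lets a crate tell rustdoc which `cfg` an item requires, so the rendered docs can display it. A minimal sketch (nightly-only attribute names as currently implemented; they may evolve before stabilization):

```rust
#![feature(doc_cfg)]

/// With `doc_cfg`, rustdoc renders a banner saying this item is only
/// available on crate feature `macros`.
#[cfg(feature = "macros")]
#[doc(cfg(feature = "macros"))]
pub fn derive_helper() {}
```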
The status quo
The next six months
- Merge the RFC.
- Implement it.
- Stabilize it.
The "shiny future" we are working towards
Stable support for the `doc_cfg` features.
Design axioms
N/A
Ownership and team asks
Owner: Guillaume Gomez
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Implementation | Guillaume Gomez | |
RFC decision | rustdoc | |
Standard reviews | rustdoc |
Outputs and milestones
Outputs
Milestones
Milestone | Expected date |
---|---|
Merge RFC | TBD |
Implement RFC | TBD |
Stabilize the features | TBD |
Frequently asked questions
None yet.
Stabilize parallel front end
Metadata | |
---|---|
Owner(s) | Sparrow Li |
Teams | compiler |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#121 |
Summary
We will move rustc's support for the parallel front end closer to stability by resolving ICE and deadlock issues, completing the test suite for multithreaded scenarios, and integrating the parallel front end into bootstrap. This fits into our larger goal of improving rustc build times by 20% by leveraging multiple cores and enhancing the front end's robustness.
Motivation
The parallel front end has been implemented in nightly, but there are still many problems that prevent it from being stable and used at scale.
The status quo
Many current issues reflect ICE or deadlock problems that occur during the use of parallel front end. We need to resolve these issues to enhance its robustness.
The existing compiler testing framework is not sufficient for the parallel front end, and we need to enhance it to ensure the correct functionality of the parallel front end.
The current parallel front end still has room for further improvement in compilation performance, such as parallelization of HIR lowering and macro expansion, and reduction of data contention under more threads (>= 16).
We can use the parallel front end in bootstrap to alleviate the slow build of the whole project.

Cargo does not provide an option to enable the parallel front end, so it can only be enabled by passing rustc options manually.
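(For concreteness: at the time of writing this means adding something like `-Z threads=8` to `RUSTFLAGS` with a nightly toolchain; the flag is unstable and its exact form may change.)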
The next 6 months
- Solve ICE and deadlock issues (unless they are due to lack of rare hardware environment or are almost impossible to reproduce).
- Improve the parallel compilation test framework and enable parallel front end in UI tests.
- Enable parallel frontends in bootstrap.
- Continue to improve parallel compilation performance, raising the average speedup from 20% to 25% under 8 cores and 8 threads.
- Communicate with Cargo team on the solution and plan to support parallel front end.
The "shiny future" we are working towards
With the existing rayon-based implementation it is difficult to fundamentally eliminate the deadlock problem, so we may need a better scheduling design that eliminates deadlocks without affecting performance.

The current compilation process, with `GlobalContext` as the core of data storage, is not very friendly to the parallel front end. We may try to reduce the granularity (such as to modules) to reduce data contention under more threads and improve performance.
Design axioms
The parallel front end should be:
- safe: Ensure the safe and correct execution of the compilation process
- consistent: The compilation result should be consistent with that in single thread
- maintainable: The implementation should be easy to maintain and extend, and not cause confusion to developers who are not familiar with it.
Ownership and team asks
Owner: Sparrow Li and the Parallel Rustc WG own this goal
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Implementation | Sparrow Li | |
Author tests | Sparrow Li | |
Discussion and moral support | compiler |
Frequently asked questions
Survey tools suitability for Std safety verification
Metadata | |
---|---|
Owner(s) | Celina V. |
Teams | Libs |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#126 |
Summary
Instrument a fork of the standard library (the verify-rust-std repository) with safety contracts, and employ existing verification tools to verify the standard library.
Motivation
The Rust Standard Library is the foundation of portable Rust software. It provides efficient implementations and safe abstractions for the most common general-purpose data structures and operations. To do so, it performs unsafe operations internally.
Despite being constantly battle-tested, the implementation of the standard library has not been formally verified or proven safe. A safety issue in the Standard Library may affect almost all Rust applications, and this effort is the first step to enhance the safety guarantees of the Standard Library, hence, the Rust ecosystem.
The status quo
Rust has a very active and diverse formal methods community that has been developing automated or semi-automated verification tools that can further validate Rust code beyond the guarantees provided by the compiler. These tools can complement Rust's safety guarantees, and allow developers to eliminate bugs and formally prove the correctness of their Rust code.
There are multiple verification techniques, and each have their own strength and limitations. Some tools like Creusot and Prusti can prove correctness of Rust, including generic code, but they cannot reason about unsafe Rust, and they are not fully automated.
On the other hand, tools like Kani and Verus are able to verify unsafe Rust, but they have their own limitations, for example, Kani verification is currently bounded in the presence of loops, and it can only verify monomorphic code, while Verus requires an extended version of the Rust language which is accomplished via macros.
Formal verification tools such as Creusot, Kani, and Verus have demonstrated that it is possible to write Rust code that is amenable to automated or semi-automated verification. For example, Kani has been successfully applied to different Rust projects, such as the Firecracker microVM, s2n-quic, and Hifitime.
Applying those techniques to the Standard library will allow us to assess these different verification techniques, identify where all these tools come short, and help us guide research required to address those gaps.
Contract language
Virtually every verification tool has its own contract specification language, which makes it hard to combine tools to verify the same system. Specifying a contract language is outside the scope of this project. However, we plan to adopt the syntax proposed in this MCP, keep our fork synchronized with progress on the compiler contract language, and help assess its suitability for verification.
This will also allow us to contribute back the contracts added to the fork.
Repository Configuration
Most of the work for this project will be developed on top of the verify-rust-std repository. This repository has a subtree of the Rust library folder, which is the verification target. We have already integrated Kani in CI, and we are in the process of integrating more tools.
This repository also includes "Challenges", which are verification problems that we believe are representative of different verification targets in the Standard library. We hope that these challenges will help contributors to narrow down which parts of the standard library they can verify next. New challenges can also be proposed by any contributor.
The next 6 months
First, we will instrument some unsafe functions of the forked Rust Standard Library with function contracts, and safe abstractions with safety type invariants.
Then we will employ existing verification tools to verify that the annotated unsafe functions are in fact safe as long as their contract pre-conditions are upheld, and we will also check that any post-condition is respected. With that, we will work on proving that safe abstractions do not violate any safety contract and do not leak any unsafe value through their public interface.

Type invariants will be employed to verify that unsafe value encapsulation is strong enough to guarantee the safety of the type interface. Any method should be able to assume the type invariant, and it should also preserve the type invariant. Unsafe methods' contracts must be strong enough to guarantee that the type invariant is also preserved at the end of the call.
Finally, we hope to contribute upstream contracts and type invariants added to this fork using the experimental contract support proposed in this MCP.
This is open source and very much open to contributions of tools/techniques/solutions. We introduce problems (currently phrased as challenges) that we believe are important to the Rust and verification communities. These problems can be solved by anyone.
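To give a flavor of the kind of automated check involved, here is a minimal proof-harness sketch in the style used with Kani (it assumes the public Kani API: `kani::any`, `kani::assume`, and `#[kani::proof]`; it is illustrative and not taken from the verify-rust-std repository):

```rust
// Checks that `u8::unchecked_add` agrees with checked addition whenever the
// contract-style precondition "no overflow" holds, for all possible inputs.
#[cfg(kani)]
#[kani::proof]
fn check_unchecked_add() {
    let a: u8 = kani::any();
    let b: u8 = kani::any();
    kani::assume(a.checked_add(b).is_some()); // precondition: no overflow
    let sum = unsafe { a.unchecked_add(b) };
    assert_eq!(sum, a + b);
}
```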
The "shiny future" we are working towards
We are working towards the enhancement of Rust verification tools, so that they can eventually be incorporated into the regular Rust development cycle for code that requires the use of unsafe Rust.

The Rust Standard Library is the perfect candidate given its blast radius and its extensive use of unsafe Rust to provide performant abstractions.
Design axioms
- No runtime penalty: Instrumentation must not affect the standard library runtime behavior, including performance.
- Automated Verification: Our goal is to verify the standard library implementation. Given how quickly the standard library code evolves, automated verification is needed to ensure new changes preserve the properties previously verified.
- Contract as code: Keeping the contract language and specification as close as possible to Rust syntax and semantics will lower the barrier for users to understand and be able to write their own contracts.
Ownership and team asks
Owner: Celina V.
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Discussion and moral support | libs | |
Standard review | libs | We would like to contribute upstream the contracts added to the fork. |
Problem proposals | Help Wanted | |
Fork maintenance | Celina V., Jaisurya Nanduri | |
Fork PR Reviews | Own Committee | We are gathering a few contributors with relevant expertise. |
Instrumentation and verification | Help Wanted |
Definitions
Definitions for terms used above:
- Discussion and moral support is the lowest level offering, basically committing the team to nothing but good vibes and general support for this endeavor.
- Standard reviews refers to reviews for PRs against the Rust repository; these PRs are not expected to be unduly large or complicated.
- Problem proposals refers to creating a scoped task that involves verifying a chunk of the standard library.
- Fork PR reviews means a group of individuals who will review the changes made to the fork, as they're expected to require significant context. Besides contracts, these changes may include extra harnesses, lemmas, ghost-code.
- Fork maintenance means configuring CI, performing periodic fork update from upstream, tool integration.
- Instrumentation and verification is the work of specifying contracts, invariants, and verifying a specific part of the standard library.
Frequently asked questions
Testing infra + contributors for a-mir-formality
Metadata | |
---|---|
Owner(s) | Niko Matsakis |
Teams | types |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#122 |
Summary
The goal for a-mir-formality this year is to bootstrap it as a live, maintained project:
- Achieve 2 regular contributors from T-types in addition to Niko Matsakis
- Support fuzz testing and/or the ability to test against rustc
Motivation
The status quo
Most communication and definition of Rust's type/trait system today takes place through informal argument and with reference to compiler internals. a-mir-formality offers a model of Rust at a much higher level, but it remains very incomplete compared to Rust and, thus far, it has been primarily developed by Niko Matsakis.
The next six months
The goal for a-mir-formality this year is to bootstrap it as a live, maintained project:
- Achieve 2 regular contributors from T-types in addition to Niko Matsakis
- Support fuzz testing and/or the ability to test against rustc
The "shiny future" we are working towards
The eventual goal is for a-mir-formality to serve as the official model of how the Rust type system works. We have found that having a model enables us to evaluate designs and changes much more quickly than trying to do everything in the real compiler. We envision a-mir-formality being updated with new features prior to stabilization which will require it to be a living codebase with many contributors. We also envision it being tested both through fuzzing and by comparing its results to the compiler to detect drift.
Design axioms
- Designed for exploration and extension by ordinary Rust developers. Editing and maintaining formality should not require a PhD. We prefer lightweight formal methods over strong static proof.
- Focused on Rust's static checking. There are many things that a-mir-formality could model. We are focused on those things that we need to evaluate Rust's static checks. This includes the type system and trait system.
- Clarity over efficiency. Formality's codebase is only meant to scale up to small programs. Efficiency is distinctly secondary.
- The compiler approximates a-mir-formality, a-mir-formality approximates the truth. Rust's type system is Turing Complete and cannot be fully evaluated. We expect the compiler to have safeguards (for example, overflow detection) that may be more conservative than those imposed by a-mir-formality. In other words, formality may accept some programs the compiler cannot evaluate for practical reasons. Similarly, formality will have to make approximations relative to the "platonic ideal" of what Rust's type system would accept.
Ownership and team asks
Owner: Niko Matsakis
We will require participation from at least 2 other members of T-types. Current candidates are lcnr + compiler-errors.
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Implementation | Niko Matsakis, lcnr, and others | |
Standard reviews | types |
Frequently asked questions
None yet.
Use annotate-snippets for rustc diagnostic output
Metadata | |
---|---|
Owner(s) | Esteban Kuber, Scott Schafer |
Teams | compiler |
Status | Accepted |
Tracking issue | rust-lang/rust-project-goals#123 |
Summary
Switch to annotate-snippets for rendering rustc's output, with no loss of functionality or visual regressions.
Motivation
Cargo has been adding its own linting system, where it has been using annotate-snippets to try and match Rust's output. This has led to duplicate code between the two, increasing the overall maintenance load. Having one renderer that produces Rust-like diagnostics will make it so there is a consistent style between Rust and Cargo, as well as any other tools with similar requirements like miri, and should lower the overall maintenance burden by rallying behind a single unified solution.
The status quo
Currently rustc has its own Emitter that encodes the theming properties of compiler diagnostics. It has to handle all of the intricacies of terminal support (optional color, terminal width querying and adapting of output), layout (span and label rendering logic), and the presentation of different levels of information. Any tool that wants to approximate rustc's output for its own purposes needs to use a third-party library that diverges from rustc's output, like annotate-snippets or miette. Any improvements or bugfixes contributed to those libraries are not propagated back to rustc. Because the emitter is part of the rustc codebase, the barrier to entry for new contributors is kept artificially higher than it otherwise would be.
annotate-snippets is already part of the rustc codebase, but it is disabled by default, doesn't have extensive testing and there's no way of enabling this output through cargo, which limits how many users can actually make use of it.
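For orientation, this is roughly what emitting a Rust-style diagnostic with the annotate-snippets crate looks like today (method names follow our reading of the 0.11 API and should be treated as an approximation, not an exact reference):

```rust
use annotate_snippets::{Level, Renderer, Snippet};

fn main() {
    let source = r#"fn main() { let x: u32 = "hi"; }"#;
    // Build a message with one annotated span, then render it.
    let message = Level::Error.title("mismatched types").snippet(
        Snippet::source(source)
            .origin("example.rs")
            .annotation(Level::Error.span(25..29).label("expected `u32`, found `&str`")),
    );
    println!("{}", Renderer::plain().render(message));
}
```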
The next 6 months
- annotate-snippets' rendered output reaches full parity (modulo reasonable, non-significant divergences) with rustc's output
- rustc fully uses annotate-snippets for its output.
The "shiny future" we are working towards
The outputs of rustc and cargo are fully using annotate-snippets, with no regressions to the rendered output. annotate-snippets grows its feature set, like support for more advanced rendering formats or displaying diagnostics with more than ASCII-art, independently of the compiler development cycle.
Design axioms
- Match rustc's output: The output of annotate-snippets should match rustc's, modulo reasonable, non-significant divergences
- Works for Cargo (and other tools): annotate-snippets is meant to be used by any project that would like "Rust-style" output, so it should be designed to work with any project, not just rustc.
Ownership and team asks
Owner: Esteban Kuber, Scott Schafer
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Reach output parity of rustc/annotate-snippets | ||
↳ Port a subset of rustc's UI tests | Scott Schafer | |
↳ Make list of current unaddressed divergences | Scott Schafer | |
↳ Address divergences | Scott Schafer | |
Initial use of annotate-snippets | ||
↳ update annotate-snippets to latest version | ||
↳ teach cargo to pass annotate-snippets flag | Esteban Kuber | |
↳ add ui test mode comparing new output | ||
↳ switch default nightly rustc output | ||
Production use of annotate-snippets | ||
↳ switch default rustc output | ||
↳ release notes | ||
↳ switch ui tests to only check new output | ||
↳ dedicated reviewer | compiler | Esteban Kuber will be the reviewer |
Standard reviews | compiler | |
Top-level Rust blog post inviting feedback |
Frequently asked questions
None yet.
User-wide build cache
Metadata | |
---|---|
Owner(s) | |
Teams | cargo |
Status | Orphaned |
Tracking issue | rust-lang/rust-project-goals#124 |
Summary
Extend Cargo's caching of intermediate artifacts across a workspace to caching them across all workspaces of the user.
Motivation
The primary goal of this effort is to improve build times by reusing builds across projects.
Secondary goals are
- Reduce disk usage
- More precise cross-job caching in CI
The status quo
When Cargo performs a build, it will build the package you requested and all dependencies individually, linking them in the end. These build results (intermediate build artifacts) and the linked result (final build artifact) are stored in the target-dir, which is per-workspace by default.
Ways cargo will try to reuse builds today:
- On a subsequent build, Cargo tries to reuse these build results by "fingerprinting" the inputs to the prior build and checking if that fingerprint has changed.
- When dependencies are shared by host (`build.rs`, proc-macros) and platform-target and the platform-target is the host, Cargo will attempt to share host/target builds
Some users try to get extra cache reuse by assigning all workspaces to use the same target-dir.
- Cross-project conflicts occur because this shares both intermediate (generally unique) and final build artifacts (might not be unique)
- `cargo clean` will clear the entire cache for every project
- Rebuild churn from build inputs, like `RUSTFLAGS`, that cause a rebuild but aren't hashed into the file path
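(For reference, this workaround usually means pointing `CARGO_TARGET_DIR`, or `build.target-dir` in `.cargo/config.toml`, at a shared directory; both knobs exist today and are where the cross-project conflicts above come from.)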
In CI, users generally have to declare what directory should be cached between jobs. This directory will be compressed and uploaded at the end of the job. If the next job's cache key matches, the tarball will be downloaded and decompressed. If too much is cached, the time for managing the cache can dwarf the benefits of the cache. Some third-party projects exist to help manage cache size.
The next 6 months
Add support for user-wide intermediate artifact caching
- Re-work target directory so each intermediate artifact is in a self-contained directory
- Develop and implement transition path for tooling that accesses intermediate artifacts
- Adjust `cargo build` to
  - Hash all build inputs into a user-wide hash key
  - If the hash key is present, use the artifacts straight from the cache, otherwise build it and put it in the cache
  - Limit this to immutable packages ("non-local" in cargo terms, like Registry, git dependencies)
  - Limit this to idempotent packages (can't depend on a proc-macro, can't have a `build.rs`)
  - Evaluate risks and determine how we will stabilize this (e.g. unstable to stable, opt-in to opt-out to only on)
- Track intermediate build artifacts for garbage collection
- Explore
  - Idempotence opt-ins for `build.rs` or proc-macros until sandboxing solutions can determine the level of idempotence.
  - A CLI interface for removing anything in the cache that isn't from this CI job's build, providing more automatic CI cache management without third-party tools.
Compared to pre-built binaries, this is adaptive to what people use
- feature flags
- RUSTFLAGS
- dependency versions
A risk is that this won't help as many people as they hope because being able to reuse caches between projects will depend on the exact dependency tree for every intermediate artifact. For example, when building a proc-macro:
- `unicode-ident` has few releases, so it's likely this will get heavy reuse
- `proc-macro2` has a lot of releases and depends on `unicode-ident`
- `quote` has a lot of releases and depends on `proc-macro2` and `unicode-ident`
- `syn` has a lot of releases and depends on `proc-macro2`, `unicode-ident`, and optionally on `quote`

With `syn` being a very heavy dependency, if it or any of its dependency versions are mismatched between projects, the user won't get shared builds of `syn`.
See also cargo#5931.
The "shiny future" we are working towards
The cache lookup will be extended with plugins to read and/or write to different sources. Open source projects and companies can have their CI read from and write to their cache. Individuals who trust the CI can then configure their plugin to read from the CI cache.
A cooperating CI service could provide their own plugin that, instead of caching everything used in the last job and unpacking it in the next, their plugin could download only the entries that will be needed for the current build (e.g. say a dependency changed) and only upload the cache entries that were freshly built. Fine-grained caching like this would save the CI service on bandwidth, storage, and the compute time from copying, decompressing, and compressing the cache. Users would have faster CI time and save money on their CI service, minus any induced demand that faster builds creates.
On a different note, as sandboxing efforts improve, we'll have precise details
on the inputs for build.rs
and proc-macros and can gauge when there is
idempotence (and verify the opt-in mentioned earlier).
Design axioms
Ownership and team asks
Owner: Help wanted (this goal is currently orphaned).

This section defines the specific work items that are planned and who is expected to do them. It should also include what will be needed from Rust teams. Every row in the table below corresponds either to something done by a contributor or something asked of a team. For items done by a contributor, the contributor is listed, or the item is marked "Help wanted" if it is not yet known who will do it. For things asked of teams, the team is named.
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
User-wide caching | ||
↳ Implementation | Goal owner | |
↳ Standard reviews | cargo | |
↳ Mentoring and guidance | Ed Page | |
↳ Design meeting | cargo |
Frequently asked questions
Why not pre-built packages?
Pre-built packages require guessing
- CPU Architecture
- Feature flags
- RUSTFLAGS
- Dependency versions
If there are any mismatches there, then the pre-built package can't be used.
A build cache can be populated with pre-built packages and react to the unique circumstances of the user.
Why not sccache?
Tools like sccache try to infer the inputs for hashing a cache key from command-line arguments. This effort instead reuses the extra knowledge Cargo already has to generate more accurate cache keys.
If this is limited to immutable, idempotent packages, is this worth it?
In short, yes.
First, this includes an effort to allow packages to declare themselves as idempotent. Longer term, we'll have sandboxing to help infer / verify idempotence.
If subtle dependency changes prevent reuse across projects, is this worth it?
In short, yes.
This is a milestone on the way to remote caches. Remote caches allow access to CI build caches for the same project you are developing on, allowing full reuse at the cost of network access.
Not accepted
This section contains goals that were proposed but ultimately not accepted, either for want of resources or consensus. In many cases, narrower versions of these goals were accepted.
Goal | Owner | Team |
---|---|---|
Contracts and Invariants | Felix Klock | |
Experiment with relaxing the Orphan Rule | ||
Faster build experience | Jonathan Kelley | |
Reduce clones and unwraps, support partial borrows | Jonathan Kelley | |
Seamless C support | | |
Contracts and Invariants
Metadata | |
---|---|
Owner(s) | Felix Klock |
Teams | lang, libs, compiler |
Status | Not accepted |
Motivation
We wish to extend Rust in three ways:
First, we want to extend the Rust language to enable Rust developers to write predicates, called "contracts", and attach them to specific points in their program. We intend for this feature to be available to all Rust code, including that of the standard library.
Second, we want to extend the Rust crate format such that the contracts for a crate can be embedded and then later extracted by third-party tools.
Third, we want to extend project-supported Rust compiler and interpreter tools, such as `rustc` and `miri`, to compile the code in a mode that dynamically checks the associated contracts (note that such dynamic checking might be forced to be incomplete for some contracts).

Examples of contracts we envision include: pre- and post-conditions on Rust methods; representation invariants for abstract data types; and loop invariants on `for` and `while` loops.
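As a purely hypothetical illustration of the first two kinds of contract (the attribute names and closure syntax below are invented for this sketch; designing the actual surface syntax is part of the work described here):

```rust
// Hypothetical pre-/post-condition attributes; nothing here is settled syntax.
#[requires(divisor != 0)]                    // pre-condition
#[ensures(|ret| ret * divisor <= dividend)]  // post-condition on the return value
fn integer_div(dividend: u32, divisor: u32) -> u32 {
    dividend / divisor
}
```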
Our motivation for this is that contracts are key for reasoning about software correctness. Formal verification tools such as Creusot, Kani, and Verus have demonstrated that it is possible to write Rust code that is coupled with automated verification mechanisms. But furthermore, we assert that contracts can provide value even if we restrict our attention to dynamic checking: By having a dedicated construct for writing method specifications and invariants formally, we will give our tools new avenues for testing our programs in an automated fashion, similar to the benefits provided by fuzzing.
The status quo
Currently, if you want to specify the behavior of a Rust method and check that the specification is correct, you can either attempt to construct a test suite that covers the entirety of your specification, or you can manually embed contract-like predicates into your code. Embedding contract-like predicates is typically done via variations of either 1. `assert!`, 2. `debug_assert!`, or 3. similar `cfg`-guarded code sequences that abort/panic when a predicate fails to hold.
All of the existing options are limited.
First, most specifications rely on quantified forms, such as "for all X, P(X) implies Q(X)." The "for all" quantifier needs some language support to be expressed; it cannot be written as executable Rust code (except as a loop over the domain, which is potentially infinite).
Second, directly expressing contracts as an assertion mixes it in with the rest of the code, which makes it difficult or impossible for third-party verification tools to extract the contracts in order to reason about them.
As an example for why a tool might want to extract the contracts, the Kani model checker works by translating a whole program (including its calls to library code) into a form that is passed to an off-the-shelf model checker. Kani would like to use contracts as a way to divide-and-conquer the verification effort. The API for a method is abstracted by its associated contract. Instead of reasoning about the whole program, it now has two subproblems: prove that the method on its own satisfies its associated contract, and in the rest of the program, replace calls to that method by the range of behaviors permitted by the contract.
Third: the Racket language has demonstrated that when you have dynamic dispatch (via higher-order functions or OOP), then assertions embedded in procedure bodies are a subpar way of expressing specifications. This is because when you compose software components, it is non-trivial to take an isolated assertion failure and map it to which module was actually at fault. Having a separate contract language might enable new tools to record enough metadata to do proper "blame tracking." But to get there, we first have to have a way to write contracts down in the first place.
The next six months
- Develop a contract predicate sublanguage, suitable for interpretation by rustc, miri, and third-party verification tools.
- Extend the Rust compiler to enable contract predicates to be attached to items and embedded in Rust crate rlib files.
- Work with wg-formal-methods (aka the "Rust Formal Methods Interest Group") to ensure that the embedded predicates are compatible with their tools.
- Work with the Lang and Libs teams on acceptable surface-level syntax for contracts. In particular, we want contracts to be used by the Rust standard library. (At the very least, for method pre- and post-conditions; I can readily imagine, however, also attaching contracts to `unsafe { ... }` blocks.)
- Extend miri to evaluate contract predicates. Add primitives for querying the memory model to the contract language, to enable contracts that talk about provenance of pointers.
The "shiny future" we are working towards
My shiny future is that people "naturally" write Rust crates that can be combined with distinct dynamic-validation and verification tools. Today, if you want to use any verification tool, you usually have to pick one and orient your whole code base around using it. (E.g., the third-party verification tools often have their own (rewrite of a subset of the) Rust standard library, if only so that they can provide the contracts that our standard library is missing.)
But in my shiny future, people get to reuse the majority of their contracts and just "plug in" different dynamic-validation and static-verification tools, and all the tools know how to leverage the common contract language that is built into Rust.
Design axioms
Felix presented these axioms as "Shared Values" at the 2024 Rust Verification Workshop.
1. Purpose: Specification Mechanism first, Verification Mechanism second
Contracts have proven useful under the "Design by Contract" philosophy, which usually focuses on pre + post + frame conditions (also known as "requires", "ensures", and "modifies" clauses in systems such as the Java Modelling Language). In other words, attaching predicates to the API boundaries of code, which makes contracts a specification mechanism.
There are other potential uses for attaching predicates to points in the code, largely for encoding formal correctness arguments. My main examples of these are Representation Invariants, Loop Invariants, and Termination Measures (aka "decreasing functions").
In an ideal world, contracts would be useful for both purposes. But if we have to make choices about what to prioritize, we should focus on the things that make contracts useful as an API specification mechanism. In my opinion, API specification is the use-case that is going to benefit the broadest set of developers.
2. Contracts should be (semi-)useful out-of-the-box
This has two parts:
Anyone can eat: I want any Rust developer to be able to turn on "contract checking" in some form without having to change toolchain nor install 3rd-party tool, and get some utility from the result. (It's entirely possible that contracts become even more useful when used in concert with a suitable 3rd-party tool; that's a separate matter.)
Anyone can cook: Any Rust developer can also add contracts to their own code, without having to change their toolchain.
3. Contracts are not just assertions
Contracts are meant to enable modular reasoning.
Contracts have both a dynamic semantics and a static semantics.
In an ideal dynamic semantics, a broken contract will identify which component is at fault for breaking the contract. (We do acknowledge that precise blame assignment becomes non-trivial with dynamic dispatch.)
In an ideal static semantics, contracts enable theorem provers, instead of reasoning about `F(G)`, to instead allow independent correctness proofs for `F(...)` and `... G ...`.
4. Balance accessibility over power
For accessibility to the developer community, Rust contracts should strive for a syntax that is, or closely matches, the syntax of Rust code itself. Deviations from existing syntax or semantics must meet a high bar to be accepted for contract language.
But some deviations should be possible, if justified by their necessity for correct expression of specifications. Contracts may need forms that are not valid Rust code. For example, for-all quantifiers will presumably need to be supported, and will likely have a dynamic semantics that is necessarily incomplete compared to an idealized static semantics used by a verification tool. (Note that middle grounds exist here, such as adding a `forall(|x: Type| { pred(x) })` intrinsic that is fine from a syntax point of view and is only troublesome in terms of what semantics to assign to it.)
Some expressive forms might be intentionally unavailable to normal object code. For example, some contracts may want to query provenance information on pointers, which would make such contracts unevaluatable in `rustc` object code (and then one would be expected to use `miri` or similar tools to get checking of such contracts).
5. Accept incompleteness
Not all properties of interest can be checked at runtime; similarly, not all statements can be proven true or false.
Full functional correctness specifications are often not economically feasible to develop and maintain.
We must accept limitations on both dynamic validation and static verification strategies, and must choose our approximations accordingly.
An impoverished contract system may still be useful for specifying a coarser range of properties (such as invariant maintenance, memory safety, panic-freedom).
6. Embrace tool diversity
Different static verification systems require or support differing levels of expressiveness. And the same is true for dynamic validation tools! (E.g. consider injecting assertions into code via `rustc`, vs interpreters like `miri`, or binary instrumentation via `valgrind`.)
An ideal contract system needs to deal with this diversity in some manner. For example, we may need to allow third-party tools to swap in different contracts (and then also have to meet some added proof obligation to justify the swap).
7. Verification cannot be bolted on, ... but validation can
In general, code must be written with verification in mind as one of its design criteria.
We cannot expect to add contracts to arbitrary code and be able to get it to pass a static verifier.
This does not imply that contracts must be useless for arbitrary code. Dynamic contract checks have proven useful for the Racket community.
The Racket development style is to add more contracts to the code when debugging (including, but not limited to, debugging contract failures).
A validation mechanism can be bolted-on after the fact.
Ownership and team asks
Owner: pnkfelix
pnkfelix has been working on the Rust compiler since before Rust 1.0; he was co-lead of the Rust compiler team from 2019 until 2023. pnkfelix has taken on this work as part of a broader interest in enforcing safety and correctness properties for Rust code.
celinval is also assisting. celinval is part of the Amazon team producing the Kani model checker for Rust. The Kani team has been using contracts as an unstable feature in order to enable modular verification; you can read more details about it on Kani's blog post.
Support needed from the project
- Compiler: We expect to be authoring three kinds of code: 1. unstable surface syntax for expressing contract forms, 2. new compiler intrinsics that do contract checks, and 3. extensions to the rlib format to allow embedding of contracts in a readily extractable manner. We expect that we can acquire review capacity via AWS-employed members of compiler-contributors; the main issue is ensuring that the compiler team (and project as a whole) is on board for the extensions as envisioned.
- Libs-impl: We will need libs-impl team engagement to ensure we design a contract language that the standard library implementors are willing to use. To put it simply: if we land support for contracts without uptake within the Rust standard library, then the effort will have failed.
- Lang: We need approval for a lang team experiment to design the contract surface language. However, we do not expect this surface language to be stabilized in 2024, and therefore the language team involvement can be restricted to "whomever is interested in the effort." In addition, it seems likely that at least some of the contracts work will dovetail with the ghost-code initiative.
- WG-formal-methods: We need engagement with the formal-methods community to ensure our contract system is serving their needs.
- Stable-MIR: Contracts and invariants both require evaluation of predicates, and linking those predicates with intermediate states of the Rust abstract machine. Such predicates and their linkage with the code itself can be encoded in various ways. While this goal proposal does not commit to any particular choice of encoding, we want to avoid introducing unnecessary coupling with compiler-internal structures. If Stable-MIR can be made capable of meeting the technical needs of contracts, then it may be a useful option to consider in the design space of predicate encoding and linkage.
- Miri: We would like assistance from the miri developers on the right way to extend miri to have configurable contract-checking (i.e., to equip `miri` with enhanced contract checking that is not available in normal object code).
Outputs and milestones
Outputs
- Rust standard library ships with contracts in the rlib (but not built into the default object code).
- Rustc has unstable support for embedding dynamic contract-checks into Rust object code.
- Some dynamic tool (e.g. miri or valgrind) can dynamically check contracts whose bodies are not embedded into the object code.
- Some static verification tool (e.g. Kani) leverages contracts shipped with the Rust standard library.
Milestones
- Unstable syntactic support for contracts in Rust programs (at API boundaries at bare minimum, but hopefully also at other natural points in a code base).
- Support for extracting contracts for a given item from an rlib.
Frequently asked questions
Q: How do contracts help static verification tools?
Answer: Once we have a contract language built into rustc, we can include its expressions as part of the compilation pipeline, turning them into HIR, THIR, MIR, et cetera.
For example, we could add contract-specific intrinsics that map to new MIR instructions. Then tools can decide to interpret those instructions. rustc, on its own, can decide whether it wants to map them to LLVM, or into valgrind calls, et cetera. (Or the compiler could throw them away; but: unused = untested = unmaintained.)
Q: Why do you differentiate between the semantics for dynamic validation vs static verification?
See next question for an answer to this.
Q: How are you planning to dynamically check arbitrary contracts?
A dynamic check of the construct `forall(|x: T| { … })` sounds problematic for most `T` of interest.

pnkfelix's expectation here is that we would not actually expect to support `forall(|x: T| ...)` in a dynamic context, at least not in the general case of arbitrary `T`.

pnkfelix's current favorite solution for cracking this nut: a new form, `forall!(|x: T| suchas: [x_expr1, x_expr2, …] { … })`, where the semantics is "this is saying that the predicate must hold for all `T`, but in particular, we are hinting to the dynamic semantics that it can draw from the given sample population denoted by `x_expr1`, `x_expr2`, etc."
Static tools can ignore the sample population, while dynamic tools can use the sample population directly, or feed them into a fuzzer, etc.
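To illustrate just the dynamic half of that idea, here is a minimal, self-contained sketch: `forall_sampled` is a plain helper function invented for this example (not a proposed API), standing in for what a dynamic checker could lower such a sampled `forall!` form to.

```rust
// Hypothetical helper standing in for the dynamic lowering of a sampled forall.
fn forall_sampled<T>(samples: impl IntoIterator<Item = T>, pred: impl Fn(&T) -> bool) -> bool {
    samples.into_iter().all(|x| pred(&x))
}

fn main() {
    let xs = [1u32, 3, 5, 9];

    // "xs is sorted" expressed as a predicate over indices. A static tool would
    // interpret the quantifier over all indices; a dynamic tool only checks the
    // hinted samples (first, middle, last here).
    let sorted_at = |i: &usize| *i + 1 >= xs.len() || xs[*i] <= xs[*i + 1];

    assert!(forall_sampled([0, xs.len() / 2, xs.len() - 1], sorted_at));
}
```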
Q: Don't formal specifications need stuff like unbounded arithmetic?
Some specifications benefit from using constructs like unbounded integers, or sequences, or sets. (Especially important for devising abstraction functions/relations to describe the meaning of a given type.)
Is this in conflict with “Balance accessibility over power”?
pnkfelix sees two main problems here: 1. Dynamic interpretation may incur unacceptably high overhead. 2. Freely copying terms (i.e. ignoring ownership) is sometimes useful.
Maybe the answer is that some contract forms simply cannot be interpreted via the Rust abstract machine. And to be clear: that is not a failure! If some forms can only be checked in the context of `miri` or some other third-party tool, so be it.
Q: What about unsafe code?
pnkfelix does not know the complete answer here.
Some dynamic checks would benefit from access to memory model internals.
But in general, checking the correctness of an unsafe abstraction needs type-specific ghost state (to model permissions, etc.). We are leaving this for future work; it may or may not get resolved this year.
Experiment with relaxing the Orphan Rule
Summary
Experimental implementation and draft RFCs to relax the orphan rule
Motivation
Relax the orphan rule, in limited circumstances, to allow crates to provide implementations of third-party traits for third-party types. The orphan rule averts one potential source of conflicts between Rust crates, but its presence also creates scaling issues in the Rust community: it prevents providing a third-party library that integrates two other libraries with each other, and instead requires convincing the author of one of the two libraries to add (optional) support for the other, or requires using a newtype wrapper. Relaxing the orphan rule, carefully, would make it easier to integrate libraries with each other, share those integrations, and make it easier for new libraries to garner support from the ecosystem.
The status quo
Suppose a Rust developer wants to work with two libraries: `lib_a` providing `trait TraitA`, and `lib_b` providing type `TypeB`. Due to the orphan rule, if they want to use the two together, they have the following options:
- Convince the maintainer of `lib_a` to provide `impl TraitA for TypeB`. This typically involves an optional dependency on `lib_b`. This usually only occurs if `lib_a` is substantially less popular than `lib_b`, or the maintainer of `lib_a` is convinced that others are likely to want to use the two together. This tends to feel "reversed" from the norm.
- Convince the maintainer of `lib_b` to provide `impl TraitA for TypeB`. This typically involves an optional dependency on `lib_a`. This is only likely to occur if `lib_a` is popular, and the maintainer of `lib_b` is convinced that others may want to use the two together. The difficulty in advocating this, scaled across the community, is one big reason why it's difficult to build new popular crates built around traits (e.g. competing serialization/deserialization libraries, or competing async I/O traits).
- Vendor either `lib_a` or `lib_b` into their own project. This is inconvenient, adds maintenance costs, and isn't typically an option for public projects intended for others to use.
- Create a newtype wrapper around `TypeB`, and implement `TraitA` for the wrapper type (sketched in the example below). This is less convenient, propagates throughout the crate (and through other crates if doing this in a library), and may require additional trait implementations for the wrapper that `TypeB` already implemented.
All of these solutions are suboptimal in some way, and inconvenient. In particular, all of them are much more difficult than actually writing the trait impl. All of them tend to take longer, as well, slowing down whatever goal depended on having the trait impl.
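To make the last option concrete, here is a minimal, self-contained sketch of the newtype workaround; the `lib_a` and `lib_b` modules, `TraitA`, `TypeB`, and `WrappedB` are all stand-ins invented for this example.

```rust
// Stand-ins for the two third-party crates.
mod lib_a {
    pub trait TraitA {
        fn describe(&self) -> String;
    }
}

mod lib_b {
    pub struct TypeB {
        pub value: u32,
    }
}

// The orphan rule forbids `impl lib_a::TraitA for lib_b::TypeB` here, so we
// wrap `TypeB` in a local newtype and implement the trait on the wrapper.
struct WrappedB(lib_b::TypeB);

impl lib_a::TraitA for WrappedB {
    fn describe(&self) -> String {
        format!("TypeB with value {}", self.0.value)
    }
}

fn main() {
    use lib_a::TraitA;
    let wrapped = WrappedB(lib_b::TypeB { value: 42 });
    println!("{}", wrapped.describe());
}
```

Every API that expects a plain `TypeB` now needs the value unwrapped again, which is exactly the inconvenience described above.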
The next six months
We propose to
- Experiment on nightly with alternate orphan rules
- Idea 1. Try relaxing the orphan rule for binary crates, since this cannot create library incompatibilities in the ecosystem. Allow binary crates to implement third-party traits for third-party types, possibly requiring a marker on either the trait or type or both. See how well this works for users.
- Idea 2. Try allowing library crates to provide third-party impls as long as no implementations actually conflict. Perhaps require marking traits and/or types that permit third-party impls, to ensure that crates can always implement traits for their own types.
- Draft RFCs for features above, presuming experiments turn out well
The "shiny future" we are working towards
Long-term, we'll want a way to resolve conflicts between third-party trait impls.
We should support a "standalone derive" mechanism, to derive a trait for a type without attaching the derive to the type definition. We could save a simple form of type information about a type, and define a standalone deriving mechanism that consumes exclusively that information.
Given such a mechanism, we could then permit any crate to invoke the standalone derive mechanism for a trait and type, and allow identical derivations no matter where they appear in the dependency tree.
Design axioms
- Rustaceans should be able to easily integrate a third-party trait with a third-party type without requiring the cooperation of third-party crate maintainers.
- It should be possible to publish such integration as a new crate. For instance, it should be possible to publish an `a_b` crate integrating `a` with `b`. This makes it easier to scale the ecosystem and get adoption for new libraries.
- Crate authors should have some control over whether their types have third-party traits implemented. This ensures that it isn't a breaking change to introduce first-party trait implementations.
Ownership and team asks
Owner:
This section defines the specific work items that are planned and who is expected to do them. It should also include what will be needed from Rust teams.
- Subgoal:
  - Describe the work to be done and use `↳` to mark "subitems".
- Owner(s) or team(s):
  - List the owner for this item (who will do the work), or note if an owner is needed.
  - If the item is a "team ask" (i.e., approve an RFC), put the team name(s).
- Status:
  - Note if there is an owner but they need support, for example funding.
  - Other needs (e.g., complete, in FCP, etc.) are also fine.
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
Ownership and implementation | ||
RFC authoring | ||
Design consultation/iteration | Josh Triplett | |
Design meeting | lang, types | Up to 1 meeting, if needed |
Frequently asked questions
Won't this create incompatibilities between libraries that implement the same trait for the same type?
Yes! The orphan rule is a tradeoff. It was established to avert one source of potential incompatibility between library crates, in order to help the ecosystem grow, scale, and avoid conflicts. However, the presence of the orphan rule creates a different set of scaling issues and conflicts. This project goal proposes to adjust the balance, attempting to achieve some of the benefits of both.
Why was this goal not approved for 2024H2?
Primarily for capacity reasons:
- lcnr commented that there was no capacity on the types team for reviewing.
- tmandry commented that the goal as written was not necessarily focused on the right constraints (text quoted below).
> It strikes me as quite open ended and not obviously focused on the right constraints. (cc Josh Triplett as mentor)
>
> For example, we could choose to relax the orphan rule only within a restricted set of co-versioned crates that we treat as "one big crate" for coherence purposes. This would not meet the axioms listed in the goal, but I believe it would still improve things for a significant set of users.
>
> If we instead go with visibility restrictions on impls, that might work and solve a larger subset, but I think the design will have to be guided by someone close to the implementation to be viable.
>
> I would love to have a design meeting if a viable looking design emerges, but I want to make sure this feedback is taken into account before someone spends a lot of time on it.
These points can be considered and addressed at a later time.
Faster build experience
Metadata | |
---|---|
Owner(s) | Jonathan Kelley |
Teams | lang, compiler, cargo |
Status | Not accepted |
Summary
Improvements to make iterative builds faster.
Motivation
For 2024H2, we propose to create better caching to speed up build times, particularly in iterative, local development. Build time affects all Rust users, but it can be a particular blocker for Rust users in "higher-level" domains like app/game/web development, data science, and scientific computing. These developers are often coming from interpreted languages like Python and are accustomed to making small, quick changes and seeing immediate feedback. In those domains, Rust has picked up momentum as a language for building underlying frameworks and libraries thanks to its lower-level nature. The motivation of this project goal is to make Rust a better choice for higher level programming subfields by improving the build experience (see also the partner goal related to language papercuts).
The status quo
Rust has recently seen tremendous adoption in a number of high-profile projects. These include but are not limited to: Firecracker, Pingora, Zed, Datafusion, Candle, Gecko, Turbopack, React Compiler, Deno, Tauri, InfluxDB, SWC, Ruff, Polars, SurrealDB, NPM and more. These projects tend to power a particular field of development: SWC, React, and Turbopack powering web development, Candle powering AI/ML, Ruff powering Python, InfluxDB and Datafusion powering data science etc. Projects in this space devote significant time to improving build times and the iterative experience, often requiring significant low-level hackery. See e.g. Bevy's page on fast compiles. Seeing the importance of compilation time, other languages like Zig and Go have made fast compilation times a top priority, and developers often cite build time as one of their favorite things about the experience of using those languages. We don't expect Rust to match the compilation experience of Go or Zig -- at least not yet! -- but some targeted improvements could make a big difference.
The next six months
The key areas we've identified as avenues to speed up iterative development include:
- Speeding up or caching proc macro expansion
- A per-user cache for compiled artifacts
- A remote cache for compiled artifacts integrated into Cargo itself
There are other longer term projects that would be interesting to pursue but don't necessarily fit in the 2024 goals:
- Partial compilation of invalid Rust programs that might not pass "cargo check"
- Hotreloading for Rust programs
- A JIT backend for Rust programs
- An incremental linker to speed up test/example/benchmark compilation for workspaces
Procedural macro expansion caching or speedup
Today, the Rust compiler does not necessarily cache the tokens from procedural macro expansion. On every `cargo check` and `cargo build`, Rust will run procedural macros to expand code for the compiler. The vast majority of procedural macros in Rust are idempotent: their output tokens are simply a deterministic function of their input tokens. If we assumed a procedural macro was free of side-effects, then we would only need to re-run procedural macros when the input tokens change. This has been shown in prototypes to drastically improve incremental compile times (30% speedup), especially for codebases that employ lots of derives (Debug, Clone, PartialEq, Hash, serde::Serialize).
A solution here could either be manual or automatic: macro authors could opt-in to caching or the compiler could automatically cache macros it knows are side-effect free.
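As a rough illustration of the caching idea (not the compiler's actual mechanism), the sketch below keys a cached expansion on a hash of the input tokens and reuses the stored output when the input is unchanged; `ExpansionCache` and the string-based token representation are simplifications invented for this example.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Cache of expansion outputs, keyed by a hash of the macro's input tokens.
struct ExpansionCache {
    entries: HashMap<u64, String>,
}

impl ExpansionCache {
    fn expand(&mut self, input_tokens: &str, run_macro: impl Fn(&str) -> String) -> String {
        let mut hasher = DefaultHasher::new();
        input_tokens.hash(&mut hasher);
        let key = hasher.finish();
        // Only re-run the (assumed side-effect-free) macro when the input changed.
        self.entries
            .entry(key)
            .or_insert_with(|| run_macro(input_tokens))
            .clone()
    }
}

fn main() {
    let mut cache = ExpansionCache { entries: HashMap::new() };
    let fake_derive = |tokens: &str| format!("// expansion for `{tokens}`");

    let first = cache.expand("struct Foo;", fake_derive); // runs the "macro"
    let second = cache.expand("struct Foo;", fake_derive); // served from the cache
    assert_eq!(first, second);
}
```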
Faster fresh builds and higher default optimization levels
A "higher level Rust" would be a Rust where a programmer would be able to start a new project, add several large dependencies, and get to work quickly without waiting minutes for a fresh compile. A web developer would be able to jump into a Tokio/Axum heavy project, a game developer into a Bevy/WGPU heavy project, or a data scientist into a Polars project and start working without incurring a 2-5 minute penalty. In today's world, an incoming developer interested in using Rust for their next project immediately runs into a compile wall. In reality, Rust's incremental compile times are rather good, but Rust's perception is invariably shaped by the "new to Rust" experience which is almost always a long upfront compile time.
Cargo's current compilation model involves downloading and compiling dependencies on a per-project basis. Workspaces allow you to share a set of dependency compilation artifacts across several related crates at once, deduplicating compilation time and reducing disk space usage.
A "higher level Rust" might employ some form of caching - either per-user, per-machine, per-organization, per-library, or otherwise - such that fresh builds are just as fast as incremental builds. If the caching was sufficiently capable, it could even cache dependency artifacts at higher optimization levels. This is particularly important for game development, data science, and procedural macros where debug builds of dependencies run significantly slower than their release variant. Projects like Bevy and WGPU explicitly guide developers to manually increase the optimization level of dependencies since the default is unusably slow for game and graphics development.
Generally, a "high level Rust" would be fast-to-compile and maximally performant by default. The tweaks here do not require language changes and are generally a question of engineering effort rather than design consensus.
The "shiny future" we are working towards
A "high level Rust" would be a Rust that has a strong focus on iteration speed. Developers would benefit from Rust's performance, safety, and reliability guarantees without the current status quo of long compile times, verbose code, and program architecture limitations.
A "high level" Rust would:
- Compile quickly, even for fresh builds
- Be terse in the common case
- Produce performant programs even in debug mode
- Provide language shortcuts to get to running code faster
In our "shiny future," an aspiring genomics researcher would:
- be able to quickly jump into a new project
- add powerful dependencies with little compile-time cost
- use various procedural macros with little compile-time cost
- cleanly migrate their existing program architecture to Rust with few lifetime issues
- employ various shortcuts like unwrap to get to running code quicker
Design axioms
- Preference for minimally invasive changes that have the greatest potential benefit
- No or less syntax is preferable to more syntax for the same goal
- Prototype code should receive similar affordances as production code
- Attention to the end-to-end experience of a Rust developer
- Willingness to make appropriate tradeoffs in favor of implementation speed and intuitiveness
Ownership and team asks
The work here is proposed by Jonathan Kelley on behalf of Dioxus Labs. We have funding for 1-2 engineers depending on the scope of work. Dioxus Labs is willing to take ownership and commit funding to solve these problems.
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
proc macro expansion caching | jkelleyrtp + tbd | |
global dependency cache | jkelleyrtp + tbd |
- The badge indicates a requirement where Team support is needed.
Support needed from the project
- We are happy to author RFCs and/or work with other experienced RFC authors.
- We are happy to host design meetings, facilitate work streams, logistics, and any other administration required to execute. Some subgoals proposed might be contentious or take longer than this goals period, and we're committed to timelines beyond six months.
- We are happy to author code or fund the work for an experienced Rustlang contributor to do the implementation. For the language goals, we expect more design required than actual implementation. For cargo-related goals, we expect more engineering required than design. We are also happy to back any existing efforts, as there is ongoing work in cargo itself to add various types of caching.
- We would be excited to write blog posts about this effort. This goals program is a great avenue for us to get more corporate support and see more Rust adoption for higher-level paradigms. Having a blog post talking about this work would be a significant step in changing the perception of Rust for use in high-level codebases.
Outputs and milestones
Outputs
Final outputs that will be produced
Milestones
Milestones you will reach along the way
Frequently asked questions
What do I do with this space?
This is a good place to elaborate on your reasoning above -- for example, why did you put the design axioms in the order that you did? It's also a good place to put the answers to any questions that come up during discussion. The expectation is that this FAQ section will grow as the goal is discussed and eventually should contain a complete summary of the points raised along the way.
Reduce clones and unwraps, support partial borrows
Metadata | |
---|---|
Owner(s) | Jonathan Kelley |
Teams | lang |
Status | Not accepted |
Motivation
For 2024H2, we propose to continue with the ergonomics initiative, targeting several of the biggest friction points in everyday Rust development. These issues affect all Rust users, but the impact and severity varies dramatically. Many experienced users have learned the workarounds and consider them papercuts, but for newer users, or in domains traditionally considered "high-level" (e.g., app/game/web development, data science, scientific computing), these kinds of issues can make Rust a non-starter. In those domains, Rust has picked up momentum as a language for building underlying frameworks and libraries thanks to its lower-level nature. However, thanks in large part to these kinds of smaller, papercut issues, it is not a great choice for consumption of these libraries - many projects instead choose to expose bindings for languages like Python and JavaScript. The motivation of this project goal is to make Rust a better choice for higher level programming subfields by identifying and remedying language papercuts with minimally invasive language changes. In fact, these same issues arise in other Rust domains: for example, Rust is a great choice for network services where performance is a top consideration, but perhaps not a good choice for "everyday" request-reply services, thanks in no small part to papercuts and small-time friction (as well as other gaps, like needing more libraries, which are being addressed in the async goal).
The status quo
Rust has recently seen tremendous adoption in a number of high-profile projects. These include but are not limited to: Firecracker, Pingora, Zed, Datafusion, Candle, Gecko, Turbopack, React Compiler, Deno, Tauri, InfluxDB, SWC, Ruff, Polars, SurrealDB, NPM and more. These projects tend to power a particular field of development: SWC, React, and Turbopack powering web development, Candle powering AI/ML, Ruff powering Python, InfluxDB and Datafusion powering data science etc.
These projects tend to focus on accelerating development in higher level languages. In theory, Rust itself would be an ideal choice for development in the respective spaces. A Rust webserver can be faster and more reliable than a JavaScript webserver. However, Rust's perceived difficulty, verbosity, compile times, and iteration velocity limit its adoption in these spaces. Various language constructs nudge developers to a particular program structure that might be non-obvious at its outset, resulting in slower development times. Other language constructs influence the final architecture of a Rust program, making it harder to migrate one's mental model as they transition to using Rust. Other Rust language limitations lead to unnecessarily verbose or noisy code. While Rust is not necessarily a bad choice for any of these fields, the analogous frameworks (Axum, Bevy, Dioxus, Polars, etc) are rather nascent and frequently butt up against language limitations.
If we could make Rust a better choice for "higher level" programming - apps, games, UIs, webservers, datascience, high-performance-computing, scripting - then Rust would see much greater adoption outside its current bubble. This would result in more corporate interest, excited contributors, and generally positive growth for the language. With more "higher level" developers using Rust, we might see an uptick in adoption by startups, research-and-development teams, and the physical sciences which could lead to more general innovation.
Generally we believe this boils down to two focuses:
- Make Rust programs faster to write (this goal)
- Shorten the iteration cycle of a Rust program (covered in the goal on faster iterative builds)
A fictional scenario: Alex
Let's take the case of "Alex" using a fictional library "Genomix."
Alex is a genomics researcher studying ligand receptor interaction to improve drug therapy for cancer. They work with very large datasets and need to design new algorithms to process genomics data and simulate drug interactions. Alex recently heard that Rust has a great genomics library (Genomix) and decides to try out Rust for their next project. Their goal seems simple: write a program that fetches data from their research lab's S3 bucket, downloads it, cleans it, processes, and then displays it.
Alex creates a new project and starts adding various libraries. To start, they add Polars and Genomix. They also realize they want to wrap their research code in a web frontend and allow remote data, so they add Tokio, Reqwest, Axum, and Dioxus. Before writing any real code, they hit build, and immediately notice the long compilation time to build out these dependencies. They're still excited for Rust, so they get a coffee and wait.
They start banging out code. They are getting a lot of surprising compilation errors around potential failures. They don't really care much about error handling at the moment; some googling reveals that a lot of code just calls `unwrap` in this scenario, so they start adding that in, but the code is looking kind of ugly and non-elegant.

They are also getting compilation errors. After some time they come to understand how the borrow checker works. Many of the errors can be resolved by calling `clone`, so they are doing that a lot. They even find a few bugs, which they like. But they eventually get down to some core problems that they just can't see how to fix, and where it feels like the compiler is just being obstinate. For example, they'd like to extract a method like `fn push_log(&mut self, item: T)` but they can't, because they are iterating over data in `self.input_queue` and the compiler gives them errors, even though `push_log` doesn't touch `input_queue`. "Do I just have to copy and paste my code everywhere?", they wonder. Similarly, when they use closures that spawn threads, the new thread seems to take ownership of the value; they eventually find themselves writing code like `let _data = data.clone()` and using that from inside the closure. Irritating.
Eventually they do get the system to work, but it takes them a lot longer than they feel it should, and the code doesn't look nearly as clean as they hoped. They are seriously wondering if Rust is as good as it is made out to be, and nervous about having interns or other newer developers try to use it. "Rust seems to make sense for really serious projects, but for the kind of things I do, it's just so much slower.", they think.
Key points:
- Upfront compile times would be in the order of minutes
- Adding new dependencies would also incur a strong compile time cost
- Iterating on the program would take several seconds per build
- Adding a web UI would be arduous, with copious calls to `.clone()` to shuffle state around
- Lots of explicit unwraps pollute the codebase
- Refactoring to a collection of structs might take much longer than they anticipated
A real world scenario, lightly fictionalized
Major cloud developer is "all in" on Rust. As they build out code, though, they notice some problems leading to copying-and-pasting or awkward code throughout their codebase. Spawning threads and tasks tends to involve a large number of boilerplate feeling "clone" calls to copy out specific handles from data structures -- it's tough to get rid of them, even with macros. There are a few Rust experts at the company, and they're in high demand helping users resolve seemingly simple problems -- many of them have known workaround patterns, but those patterns are non-obvious, and sometimes rather involved. For example, sometimes they have to make "shadow structs" that have all the same fields, but just contain different kinds of references, to avoid conflicting borrows. For the highest impact systems, Rust remains popular, but for a lot of stuff "on the edge", developers shy away from it. "I'd like to use Rust there," they say, "since it would help me find bugs and get higher performance, but it's just too annoying. It's not worth it."
The next six months
For 2024H2 we have identified two key changes that would make Rust significantly easier to write across a wide variety of domains:
- Reducing the frequency of an explicit ".clone()" for cheaply cloneable items
- Partial borrows for structs (especially private `&mut self` methods that only access a few fields)
Reducing `.clone()` frequency
Across web, game, UI, app, and even systems development, it's common to share handles and state across scopes. These come in the form of channels, queues, signals, and immutable state, typically wrapped in Arc/Rc. In Rust, to use these items across scopes - like `tokio::spawn` or `'static` closures - a programmer must explicitly call `.clone()`. This is frequently accompanied by a rename of an item:
```rust
let state = Arc::new(some_state);

let _state = state.clone();
tokio::spawn(async move { /*code*/ });

let _state = state.clone();
tokio::spawn(async move { /*code*/ });

let _state = state.clone();
let callback = Callback::new(move |_| { /*code*/ });
```
This can become both noisy - `clone` pollutes a codebase - and confusing - what does `.clone()` imply on this item? Calling `.clone()` could imply an allocation or simply a reference-count increment. In many cases it's not possible to understand the behavior without viewing the `clone` implementation directly.
A higher level Rust would provide a mechanism to cut down on these clones entirely, making code terser and potentially clearer about intent:
```rust
let state = Arc::new(some_state);

tokio::spawn(async move { /*code*/ });
tokio::spawn(async move { /*code*/ });

let callback = Callback::new(move |_| { /*code*/ });
```
While we don't necessarily propose any one solution to this problem, we believe Rust can be tweaked in a way that makes these explicit calls to `.clone()` disappear without significant changes to the language.
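As one data point for how codebases cope today, here is a small illustrative macro (similar in spirit to community `clone!` helpers; not a proposed language feature, and `with_clones!` is a name invented for this sketch) that clones the listed handles right before moving them into a closure:

```rust
use std::sync::Arc;

// Clone each named handle, then evaluate the expression (typically a `move`
// closure or async block) against the fresh clones.
macro_rules! with_clones {
    ([$($name:ident),*], $body:expr) => {{
        $(let $name = $name.clone();)*
        $body
    }};
}

fn main() {
    let state = Arc::new(vec![1, 2, 3]);

    let task_a = with_clones!([state], move || println!("a sees {:?}", state));
    let task_b = with_clones!([state], move || println!("b sees {:?}", state));

    task_a();
    task_b();

    // The original handle is still usable here.
    println!("main sees {:?}", state);
}
```

This hides the renaming, but the clones are still there; the goal text above is about removing the ceremony altogether.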
Partial borrows for structs
Another complication programmers run into is when designing the architecture of their Rust programs with structs. A programmer might start with code that looks like this:
```rust
let name = "Foo";
let mut children = vec!["Bar".to_string()];
children.push(name.to_string());
```
And then decide to abstract it into a more traditional struct-like approach:
```rust
struct Baz {
    name: String,
    children: Vec<String>,
}

impl Baz {
    pub fn push_name(&mut self, new: String) {
        let name = self.name();
        self.push(&new);
        println!("{name} pushed item {new}");
    }

    fn push(&mut self, item: &str) {
        self.children.push(item.to_string());
    }

    fn name(&self) -> &str {
        self.name.as_str()
    }
}
```
While this code is similar to the original snippet, it no longer compiles. Because the value returned by `self.name()` borrows `self`, we can't call `self.push` without running into lifetime conflicts. However, semantically, we haven't violated the borrow checker - `push` and `name` read and write different fields of the struct.
Interestingly, Rust's disjoint capture mechanism for closures (introduced in the 2021 edition) can perform the same operation, and it compiles:
```rust
// `Baz` is the struct from the previous snippet.
let mut s = Baz { name: "Foo".to_string(), children: vec!["Bar".to_string()] };

// Each closure captures only the field it touches, so the mutable borrow of
// `s.name` does not conflict with the shared borrow of `s.children`.
let mut modify_something = || s.name = "modified".to_string();
let read_something = || s.children.last().unwrap();

let o2 = read_something();
modify_something();
println!("o: {:?}", o2);
```
This is a very frequent papercut for both beginner and experienced Rust programmers. A developer might design a valid abstraction for a particular problem, but the Rust compiler rejects it even though said design does obey the axioms of the borrow checker.
As part of the "higher level Rust" effort, we want to reduce the frequency of this papercut, making it easier for developers to model and iterate on their program architecture.
For example, a syntax-free approach to solving this problem might be simply turning on disjoint capture for private methods only. Alternatively, we could implement a syntax or attribute that allows developers to explicitly opt in to the partial borrow system. Again, we don't want to necessarily prescribe a solution here, but the best outcome would be a solution that reduces mental overhead with as little new syntax as possible.
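For context, one common workaround today (not a proposed feature) is to destructure `self` so the compiler can see that the borrows are disjoint; `Baz` here is the same illustrative struct as above.

```rust
struct Baz {
    name: String,
    children: Vec<String>,
}

impl Baz {
    pub fn push_name(&mut self, new: String) {
        // Destructuring borrows `name` and `children` as separate places, so
        // reading `name` no longer conflicts with mutating `children`.
        let Baz { name, children } = self;
        println!("{name} pushed item {new}");
        children.push(new);
    }
}

fn main() {
    let mut baz = Baz { name: "Foo".to_string(), children: vec!["Bar".to_string()] };
    baz.push_name("Qux".to_string());
    assert_eq!(baz.children.len(), 2);
}
```

It works, but it forces the body to be written against loose fields rather than helper methods, which is precisely the refactoring friction this goal wants to remove.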
The "shiny future" we are working towards
A "high level Rust" would be a Rust that has a strong focus on iteration speed. Developers would benefit from Rust's performance, safety, and reliability guarantees without the current status quo of long compile times, verbose code, and program architecture limitations.
A "high level" Rust would:
- Compile quickly, even for fresh builds
- Be terse in the common case
- Produce performant programs even in debug mode
- Provide language shortcuts to get to running code faster
In our "shiny future," an aspiring genomics researcher would:
- be able to quickly jump into a new project
- add powerful dependencies with little compile-time cost
- use various procedural macros with little compile-time cost
- cleanly migrate their existing program architecture to Rust with few lifetime issues
- employ various shortcuts like unwrap to get to running code quicker
Revisiting Alex and Genomix
Let's revisit the scenario of "Alex" using a fictional library "Genomix."
Alex is a genomics researcher studying ligand receptor interaction to improve drug therapy for cancer. They work with very large datasets and need to design new algorithms to process genomics data and simulate drug interactions. Alex recently heard that Rust has a great genomics library (Genomix) and decides to try out Rust for their next project.
Alex creates a new project and starts adding various libraries. To start, they add Polars and Genomix. They also realize they want to wrap their research code in a web frontend and allow remote data, so they add Tokio, Reqwest, Axum, and Dioxus. They write a simple program that fetches data from their research lab's S3 bucket, downloads it, cleans it, processes, and then displays it.
For the first time, they type cargo run.
The project builds in 10 seconds and their code starts running. The analysis churns for a bit and then Alex is greeted with a basic webpage visualizing their results. They start working on the visualization interface, adding interactivity with new callbacks and async routines. Thanks to hotreloading, the webpage updates without fully recompiling and losing program state.
Once satisfied, Alex decides to refactor their messy program into different structs so that they can reuse the different pieces for other projects. They add basic improvements like multithreading and swap out the unwrap shortcuts for proper error handling.
Alex heard Rust was difficult to learn, but they're generally happy. Their Rust program is certainly faster than their previous Python work. They didn't need to learn JavaScript to wrap it in a web frontend. The `Cargo.toml` is cool - they can share their work with the research lab without messing with Python installs and dependency management. They heard Rust had long compile times but didn't run into that. Being able to add async and multithreading was easier than they thought - interacting with channels and queues was as easy as it was in Go.
Design axioms
- Preference for minimally invasive changes that have the greatest potential benefit
- No or less syntax is preferable to more syntax for the same goal
- Prototype code should receive similar affordances as production code
- Attention to the end-to-end experience of a Rust developer
- Willingness to make appropriate tradeoffs in favor of implementation speed and intuitiveness
Ownership and team asks
The work here is proposed by Jonathan Kelley on behalf of Dioxus Labs. We have funding for 1-2 engineers depending on the scope of work. Dioxus Labs is willing to take ownership and commit funding to solve these problems.
Subgoal | Owner(s) or team(s) | Notes |
---|---|---|
.clone() problem | Jonathan Kelley + tbd | |
partial borrows | Jonathan Kelley + tbd | |
.unwrap() problem | Jonathan Kelley + tbd | |
Named/Optional arguments | Jonathan Kelley + tbd |
- The badge indicates a requirement where Team support is needed.
Support needed from the project
- We are happy to author RFCs and/or work with other experienced RFC authors.
- We are happy to host design meetings, facilitate work streams, logistics, and any other administration required to execute. Some subgoals proposed might be contentious or take longer than this goals period, and we're committed to timelines beyond six months.
- We are happy to author code or fund the work for an experienced Rustlang contributor to do the implementation. For the language goals, we expect more design required than actual implementation. For cargo-related goals, we expect more engineering required than design. We are also happy to back any existing efforts, as there is ongoing work in cargo itself to add various types of caching.
- We would be excited to write blog posts about this effort. This goals program is a great avenue for us to get more corporate support and see more Rust adoption for higher-level paradigms. Having a blog post talking about this work would be a significant step in changing the perception of Rust for use in high-level codebases.
Outputs and milestones
Outputs
Final outputs that will be produced
Milestones
Milestones you will reach along the way
Frequently asked questions
After these two items, are we done? What comes next?
We will have made significant progress, but we won't be done. We have identified two particular items that come up frequently in the "high level app dev" domain but which will require more discussion to reach alignment. These could be candidates for future goals.
Faster Unwrap Syntax (Contentious)
Another common criticism of Rust in prototype-heavy programming subfields is its pervasive verbosity - especially when performing rather simple or innocuous transformations. Admittedly, even as experienced Rust programmers, we find ourselves bogged down by the noisiness of various language constructs. In our opinion, the single biggest polluter of prototype Rust codebases is the need to call `.unwrap()` everywhere. While yes, many operations can fail and it's a good idea to handle errors, we've generally found that `.unwrap()` drastically hinders development in higher level paradigms.
Whether it be simple operations like getting the last item from a vec:
```rust
let items = vec![1, 2, 3, 4];
let last = items.last().unwrap();
```
Or slightly more involved operations like fetching from a server:
```rust
let res = Client::new()
    .unwrap()
    .get("https://dog.ceo/api/breeds/list/all")
    .header("content/text".parse().unwrap())
    .send()
    .unwrap()
    .await
    .unwrap()
    .json::<DogApi>()
    .await
    .unwrap();
```
It's clear that `.unwrap()` plays a large role in the early steps of every Rust codebase.

A "higher level Rust" would be a Rust that enables programmers to quickly prototype their solution, iterating on architecture and functionality before finally deciding to "productionize" their code. In today's Rust this is equivalent to replacing `.unwrap()` with proper error handling (or `.expect()`), adding documentation, and adding tests.

Programmers generally understand the difference between prototype code and production code - they don't necessarily need to be so strongly reminded that their code is prototype code by forcing a verbose `.unwrap()` at every corner. In many ways, Rust today feels hostile to prototype code. We believe that a "higher level Rust" should be welcoming to prototype code. The easier it is for developers to write prototype code, the more code will likely convert to production code. Prototype code by design is the first step to production code.
When this topic comes up, folks will invariably bring up `Result` plus `?` as a solution. In practice, we've not found it to be a suitable bandaid. Adopting question mark syntax requires you to change the signatures of your code at every turn. While prototyping you can no longer think in terms of `A -> B`; now you need to think of every `A -> B?` as a potentially fallible operation. The final production-ready iteration of your code will likely not be fallible in every method, forcing yet another level of refactoring. Plus, question mark syntax tends to bubble errors without line information, generally making it difficult to locate where the error is occurring in the first place. And finally, question mark syntax doesn't work on `Option<T>` in a function that returns `Result` (at least not without an explicit conversion such as `.ok_or(...)`), so `.unwrap()` or pattern matching are often the only practical options.
```rust
let items = vec![1, 2, 3, 4];
let last = items.last().unwrap(); // <--- this can't be question-marked!
```
We don't prescribe any particular solution, but ideally Rust would provide a similar shortcut for `.unwrap()` as it does for `return Err(e)`. Other languages tend to use a `!` operator for this case:
```rust
let items = vec![1, 2, 3, 4];
let last = items.last()!;

let res = Client::new()!
    .get("https://dog.ceo/api/breeds/list/all")
    .header("content/text".parse()!)
    .send()!
    .await!
    .json::<DogApi>()
    .await!;
```
A "higher level Rust" would provide similar affordances to prototype code that it provides to production code. All production code was once prototype code. Today's Rust makes it harder to write prototype code than it does production code. This language-level opinion is seemingly unique to Rust and arguably a major factor in why Rust has seen slower adoption in higher level programming paradigms.
Named and Optional Arguments or Partial Defaults (Contentious)
Beyond `.clone()` and `.unwrap()`, the next biggest polluter for "high level" Rust code tends to be the lack of a way to properly supply optional arguments to various operations. This has received lots of discussion already and we don't want to belabor the point any more than it already has been.
The main thing we want to add here is that we believe the builder pattern is not a great solution for this problem, especially during prototyping and in paradigms where iteration time is important.
```rust
struct PlotCfg {
    title: Option<String>,
    height: Option<u32>,
    width: Option<u32>,
    dpi: Option<u32>,
    style: Option<Style>,
}

impl PlotCfg {
    pub fn title(&mut self, title: Option<String>) -> &mut Self {
        self.title = title;
        self
    }
    pub fn height(&mut self, height: Option<u32>) -> &mut Self {
        self.height = height;
        self
    }
    pub fn width(&mut self, width: Option<u32>) -> &mut Self {
        self.width = width;
        self
    }
    pub fn dpi(&mut self, dpi: Option<u32>) -> &mut Self {
        self.dpi = dpi;
        self
    }
    pub fn style(&mut self, style: Option<Style>) -> &mut Self {
        self.style = style;
        self
    }
    pub fn build(&self) -> Plot {
        todo!()
    }
}
```
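For comparison, a minimal sketch of today's closest built-in approximation: `#[derive(Default)]` plus struct update syntax. It avoids the builder boilerplate, though it only works when the fields are accessible at the construction site and every field type implements `Default`; the `style` field is omitted here since `Style` was left undefined above.

```rust
#[derive(Default, Debug)]
struct PlotCfg {
    title: Option<String>,
    height: Option<u32>,
    width: Option<u32>,
    dpi: Option<u32>,
}

fn main() {
    // Set only the fields you care about; everything else falls back to the default.
    let cfg = PlotCfg {
        title: Some("Expression levels".to_string()),
        dpi: Some(300),
        ..Default::default()
    };
    println!("{cfg:?}");
}
```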
A solution to this problem could come in any number of forms:
- Partial Defaults to structs
- Named and optional function arguments
- Anonymous structs
We don't want to specify any particular solution:
- Partial defaults simply feel like an extension of the language
- Named function arguments would be a very welcome change for many high-level interfaces
- Anonymous structs would be useful outside of replacing builders
Generally though, we feel like this is another core problem that needs to be solved for Rust to see more traction in higher-level programming paradigms.
Seamless C support
Metadata | |
---|---|
Owner(s) | Github usernames or other identifying info for goal owners |
Teams | Names of teams being asked to commit to the goal |
Status | Not accepted |
Summary
Using C from Rust should be as easy as using C from C++
Motivation
Using C from Rust should be as easy as using C from C++: completely seamless, as though it's just another module of code. You should be able to drop Rust code into a C project and start compiling and using it in minutes.
The status quo
Today, people who want to use C and Rust together in a project have to put substantial work into infrastructure or manual bindings. Whether by creating build system infrastructure to invoke bindgen/cbindgen (and requiring the installation of those tools), or manually writing C bindings in Rust, projects cannot simply drop Rust code into a C program or C code into a Rust program. This creates a high bar for adopting or experimenting with Rust, and makes it more difficult to provide Rust bindings for a C library.
By contrast, dropping C++ code into a C project or C code into a C++ project is trivial. The same compiler understands both C and C++, and allows compiling both together or separately. The developer does not need to duplicate declarations for the two languages, and can freely call between functions in both languages.
C and C++ are still not the same language. They have different idioms and common types, and a C interface may not be the most ergonomic to use from C++. Using C++ from C involves treating the C as C++, such that it no longer works with a C compiler that has no C++ support. But nonetheless, C++ and C integrate extremely well, and C++ is currently the easiest language to integrate into an established C project.
This is the level of integration we should aspire to for Rust and C.
The next six months
To provide seamless integration between Rust and C, we need a single compiler
to understand both Rust and C. Thus, the first step will be to integrate a C
preprocessor and compiler frontend into the Rust compiler. For at least the
initial experimentation, we could integrate components from LLVM, taking
inspiration from `zig cc`. (In the future, we can consider other alternatives,
including a native Rust implementation. We could also consider components from
c2rust or similar.)
We can either generate MIR directly from C (which would be experimental and incomplete but integrate better with the compiler), or bypass MIR and generate LLVM IR (which would be simpler but less well integrated).
This first step would provide substantial benefits already: a C compiler that's always available on any system with Rust installed, that generates code for any supported Rust target, and that always supports cross-language optimization.
We can further improve support for calling C from Rust. We can support "importing" C header files, to permit using this support to call external libraries, and to support inline functions.
The "shiny future" we are working towards
Once C support is integrated, we can generate type information for C functions as if they were unsafe Rust functions, and then support treating the C code as a Rust module, adding the ability to import and call C functions from Rust. This would not necessarily even require header files, making it even simpler to use C from Rust. The initial support can be incomplete, supporting the subset of C that has reasonable semantics in Rust.
We will also want to add C features that are missing in Rust, to allow Rust to call any supported C code.
Once we have a C compiler integrated into Rust, we can incrementally add C extensions to support using Rust from C. For instance:
- Support importing Rust modules and calling `extern "C"` functions from them, without requiring a C header file.
- Support using `::` for scoping names.
- Support simple Rust types (e.g. `Option` and `Result`).
- Support calling Rust methods on objects.
- Allow annotating C functions with Rust-enhanced type signatures, such as marking them as safe, using Rust references for pointer parameters, or providing simple lifetime information.
We can support mixing Rust and C in a source file, to simplify incremental porting even further.
To provide simpler integration into C build systems, we can accept a C-compiler-compatible command line (`CFLAGS`), and apply that to the C code we process.

We can also provide a CLI entry point that's sufficiently command-line compatible to allow using it as `CC` in a C project.
Design axioms
- C code should feel like just another Rust module. Integrating C code into a Rust project, or Rust code into a C project, should be trivial; it should be just as easy as integrating C with C++.
- This is not primarily about providing safe bindings. This project will primarily make it much easier to access C bindings as unsafe interfaces. There will still be value in wrapping these unsafe C interfaces with safer Rust interfaces.
- Calling C from Rust should not require writing duplicate information in Rust that's already present in a C header or source file.
- Integrating C with Rust should not require third-party tools.
- Compiling C code should not require substantially changing the information normally passed to a C compiler (e.g. compiler arguments).
Ownership and team asks
Owner: TODO
Support needed from the project
- Lang team:
- Design meetings to discuss design changes
- RFC reviews
- Compiler team:
- RFC review
Outputs and milestones
Outputs
The initial output will be a pair of RFCs: one for an experimental integration of a C compiler into rustc, and the other for minimal language features to take advantage of that.
Milestones
- Compiler RFC: Integrated C compiler
- Lang RFC: Rust language support for seamless C integration
Frequently asked questions
General notes
This is a place for the goal slate owner to track notes and ideas for later follow-up.
Candidate goals:
- Track feature stabilization
- Finer-grained infra permissions
- Host Rust contributor event
Areas where Rust is best suited and how it grows
Rust offers particular advantages in two areas:
- Latency sensitive or high scale network services, which benefit from Rust’s lack of garbage collection pauses (in comparison to GC’d languages).
- Low-level systems applications, like kernels and embedded development, benefit from Rust’s memory safety guarantees and high-level productivity (in comparison to C or C++).
- Developer tooling has proven to be an unexpected growth area, with tools ranging from IDEs to build systems being written in Rust.
Who is using Rust
Building on the characters from the async vision doc, we can define at least three groups of Rust users:
- Alan[^1], an experienced developer in a Garbage Collected language, like Java, Swift, or Python.
- Alan likes the idea of having his code run faster and use less memory without having to deal with memory safety bugs.
- Alan's biggest (pleasant) surprise is that Rust's type system prevents not only memory safety bugs but all kinds of other bugs, like null pointer exceptions or forgetting to close a file handle.
- Alan's biggest frustration with Rust is that it sometimes makes him deal with low-level minutia -- he sometimes finds himself just randomly inserting a `*` or `clone` to see if it will build -- or complex errors dealing with features he doesn't know yet.
- Grace[^2], a low-level, systems programming expert.
- Grace is drawn to Rust by the promise of having memory safety while still being able to work "close to the hardware".
- Her biggest surprise is cargo and the way that it makes reusing code trivial. She doesn't miss `./configure && make` at all.
- Her biggest frustration is
- Barbara[^3]
[^1]: In honor of Alan Kay, inventor of Smalltalk, which gave rise in turn to Java and most of the object-oriented languages we know today.

[^2]: In honor of Grace Hopper, a computer scientist, mathematician, and rear admiral in the US Navy; inventor of COBOL.

[^3]: In honor of Barbara Liskov, a computer science professor at MIT who invented the [CLU](https://en.wikipedia.org/wiki/CLU_(programming_language)) programming language.
How Rust adoption grows
The typical pattern is that Rust adoption begins in a system where Rust offers particular advantage. For example, a company building network services may begin with a highly scaled service. In this setting, the need to learn Rust is justified by its advantage.
Once users are past the initial learning curve, they find that Rust helps them to move and iterate quickly. They spend slightly more time getting their program to compile, but they spend a lot less time debugging. Refactorings tend to work "the first time".
Over time, people wind up using Rust for far more programs than they initially expected. They come to appreciate Rust's focus on reliability, quality tooling, and attention to ergonomics. They find that while other languages may have helped them edit code faster, Rust gets them to production more quickly and reduces maintenance over time. And of course using fewer languages is its own advantage.
How Rust adoption stalls
Anecdotally, the most commonly cited reasons to stop using Rust is a feeling that development is "too slow" or "too complex". There is not any one cause for this.
- Language complexity: Most users that get frustrated with Rust do not cite the borrow checker but rather the myriad workarounds needed to overcome various obstacles and inconsistencies. Often "idiomatic Rust" involves a number of crates to cover gaps in core functionality (e.g., `anyhow` as a better error type, or `async_recursion` to permit recursive async functions). Language complexity is a particular problem.
- Picking crates: Rust intentionally offers a lean standard library, preferring instead to support a rich set of crates. But when getting started, users are often overwhelmed by the options available and unsure which one would be best to use. Making matters worse, Rust documentation often doesn't show examples making use of these crates in an effort to avoid picking favorites, making it harder for users to learn how to do things.
- Build times and slow iteration: Being able to make a change and quickly see its effect makes learning and debugging effortless. Despite our best efforts, real-world Rust programs do still have bugs, and finding and resolving those can be frustratingly slow when every change requires waiting minutes and minutes for a build to complete.
Additional concerns faced by companies
For larger users, such as companies, there are additional concerns:
- Uneven support for cross-language invocations: Most companies have large existing codebases in other languages. Rewriting those codebases from scratch is not an option. Sometimes it is possible to integrate at a microservice or process boundary, but many would like a way to rewrite individual modules in Rust, passing data structures easily back and forth. Rust's support for this kind of interop is uneven and often requires knowing the right crate to use for any given language.
- Spotty ecosystem support, especially for older things: There are a number of amazing crates in the Rust ecosystem, but there are also a number of notable gaps, particularly for older technologies. Larger companies though often have to interact with legacy systems. Lacking quality libraries makes that harder.
- Supply chain security: Leaning on the ecosystem also means increased concerns about supply chain security and business continuity. In short, crates maintained by a few volunteers rather than being officially supported by Rust are a risk.
- Limited hiring pool: Hiring developers skilled in Rust remains a challenge. Companies have to be ready to onboard new developers and to help them learn Rust. Although there are many strong Rust books available, as well as a number of well regarded Rust training organizations, companies must still pick and choose between them to create a "how to learn Rust" workflow, and many do not have the extra time or skills to do that.
We are in the process of assembling the goal slate.
Summary
This is a draft for the eventual RFC proposing the 2025H1 goals.
Motivation
The 2025H1 goal slate consists of 2 project goals, of which we have selected (TBD) as flagship goals. Flagship goals represent the goals expected to have the broadest overall impact.
How the goal process works
Project goals are proposed bottom-up by an owner, somebody who is willing to commit resources (time, money, leadership) to seeing the work get done. The owner identifies the problem they want to address and sketches the solution of how they want to do so. They also identify the support they will need from the Rust teams (typically things like review bandwidth or feedback on RFCs). Teams then read the goals and provide feedback. If the goal is approved, teams are committing to support the owner in their work.
Project goals can vary in scope from an internal refactoring that affects only one team to a larger cross-cutting initiative. No matter its scope, accepting a goal should never be interpreted as a promise that the team will make any future decision (e.g., accepting an RFC that has yet to be written). Rather, it is a promise that the team are aligned on the contents of the goal thus far (including the design axioms and other notes) and will prioritize giving feedback and support as needed.
Of the proposed goals, a small subset are selected by the roadmap owner as flagship goals. Flagship goals are chosen for their high impact (many Rust users will be impacted) and their shovel-ready nature (the org is well-aligned around a concrete plan). Flagship goals are the ones that will feature most prominently in our public messaging and which should be prioritized by Rust teams where needed.
Rust’s mission
Our goals are selected to further Rust's mission of empowering everyone to build reliable and efficient software. Rust targets programs that prioritize
- reliability and robustness;
- performance, memory usage, and resource consumption; and
- long-term maintenance and extensibility.
We consider "any two out of the three" as the right heuristic for projects where Rust is a strong contender or possibly the best option.
Axioms for selecting goals
We believe that...
- Rust must deliver on its promise of peak performance and high reliability. Rust’s maximum advantage is in applications that require peak performance or low-level systems capabilities. We must continue to innovate and support those areas above all.
- Rust's goals require high productivity and ergonomics. Being attentive to ergonomics broadens Rust impact by making it more appealing for projects that value reliability and maintenance but which don't have strict performance requirements.
- Slow and steady wins the race. For this first round of goals, we want a small set that can be completed without undue stress. As the Rust open source org continues to grow, the set of goals can grow in size.
Guide-level explanation
Flagship goals
The flagship goals proposed for this roadmap are as follows:
(TBD)
Why these particular flagship goals?
(TBD--typically one paragraph per goal)
Project goals
The slate of additional project goals are as follows. These goals all have identified owners who will drive the work forward as well as a viable work plan. The goals include asks from the listed Rust teams, which are cataloged in the reference-level explanation section below.
Some goals here do not yet have an owner (look for the badge). Teams have reserved some capacity to pursue these goals but until an appropriate owner is found they are only considered provisionally accepted. If you are interested in serving as the owner for one of these orphaned goals, reach out to the mentor listed in the goal to discuss.
| Goal | Owner | Team |
| --- | --- | --- |
Reference-level explanation
The following table highlights the asks from each affected team. The "owner" in the column is the person expecting to do the design/implementation work that the team will be approving.
compiler team
| Goal | Owner | Notes |
| --- | --- | --- |
| Discussions and moral support | | |
| ↳ SVE and SME on AArch64 | Rust team at Arm | |
lang team
| Goal | Owner | Notes |
| --- | --- | --- |
| Discussions and moral support | | |
| ↳ SVE and SME on AArch64 | Rust team at Arm | |
leadership-council team
| Goal | Owner | Notes |
| --- | --- | --- |
| Approve goal slate for 2025h1 | | |
| ↳ "Stabilize" the project goal program | nikomatsakis | |
| RFC decision | | |
| ↳ "Stabilize" the project goal program | nikomatsakis | |
types team
| Goal | Owner | Notes |
| --- | --- | --- |
| Discussions and moral support | | |
| ↳ SVE and SME on AArch64 | Rust team at Arm | |
Definitions
Definitions for terms used above:
- Author RFC and Implementation means actually writing the code, document, whatever.
- Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
- RFC decisions means reviewing an RFC and deciding whether to accept.
- Org decisions means reaching a decision on an organizational or policy matter.
- Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
- Stabilizations means reviewing a stabilization report and deciding whether to stabilize.
- Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
- Other kinds of decisions:
- Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
- Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
- Library API Change Proposal (ACP) describes a change to the standard library.
Goals
This page lists the 2 project goals proposed for 2025h1.
Just because a goal is listed here does not mean it has been accepted. The owner of the goal process makes the final decisions on which goals to include and prepares an RFC to ask approval from the teams.
Flagship goals
Flagship goals represent the goals expected to have the broadest overall impact.
| Goal | Owner | Team |
| --- | --- | --- |
Other goals
These are the other proposed goals.
Orphaned goals. Some goals here are marked with the badge for their owner. These goals are called "orphaned goals". Teams have reserved some capacity to pursue these goals but until an appropriate owner is found they are only considered provisionally accepted. If you are interested in serving as the owner for one of these orphaned goals, reach out to the mentor listed in the goal to discuss.
| Goal | Owner | Team |
| --- | --- | --- |
| "Stabilize" the project goal program | nikomatsakis | leadership-council |
| SVE and SME on AArch64 | Rust team at Arm | compiler, lang, types |
"Stabilize" the project goal program
| Metadata | |
| --- | --- |
| Owner(s) | nikomatsakis |
| Teams | [Leadership Council] |
| Status | Proposed |
Summary
- Run the second round of the Rust project goal program experiment
- Author an RFC proposing it as a permanent fixture
Motivation
Over the second half of last year we ran the first round of an experimental new Rust Project Goal program to reasonable success. We propose to continue with this program for another 6 months. This second experimental round will validate the project goal system as designed. We will also use this time to author an RFC describing the structure of the project goal program and making it a recurring part of project life.
The status quo
The Rust Project Goal program aims to resolve a number of challenges that the project faces for which having an established roadmap, along with a clarified ownership for particular tasks, would be useful:
- Focusing effort and avoiding burnout:
- One common contributor to burnout is a sense of lack of agency. People have things they would like to get done, but they feel stymied by debate with no clear resolution; feel it is unclear who is empowered to "make the call"; and feel unclear whether their work is a priority.
- Having a defined set of goals, each with clear ownership, will address that uncertainty.
- Helping direct incoming contribution:
- Many would-be contributors are interested in helping, but don't know what help is wanted/needed. Many others may wish to know how to join in on a particular project.
- Identifying the goals that are being worked on, along with owners for them, will help both groups get clarity.
- Helping the Foundation and Project to communicate
- One challenge for the Rust Foundation has been the lack of clarity around project goals. Programs like fellowships, project grants, etc. have struggled to identify what kind of work would be useful in advancing project direction.
- Declaring goals, and especially goals that are desired but lack owners to drive them, can be very helpful here.
- Helping people to get paid for working on Rust
- A challenge for people who are looking to work on Rust as part of their job -- whether that be full-time work, part-time work, or contracting -- is that the employer would like to have some confidence that the work will make progress. Too often, people find that they open RFCs or PRs which do not receive review, or which are misaligned with project priorities. A secondary problem is that there can be a perceived conflict-of-interest because people's job performance will be judged on their ability to finish a task, such as stabilizing a language feature, which can lead them to pressure project teams to make progress.
- Having the project agree before-hand that it is a priority to make progress in an area and in particular to aim for achieving particular goals by particular dates will align the incentives and make it easier for people to make commitments to would-be employers.
For more details, see
- [Blog post on Niko Matsakis's blog about project goals](https://smallcultfollowing.com/babysteps/blog/2023/11/28/project-goals/)
- [Blog post on Niko Matsakis's blog about goal ownership](https://smallcultfollowing.com/babysteps/blog/2024/04/05/ownership-in-rust/)
- nikomatsakis's slides from the Rust leadership summit
- Zulip topic in #council stream. This proposal was also discussed at the leadership council meeting on 2024-04-12, during which meeting the council recommended opening an RFC.
The next 6 months
- Publish monthly status updates on the goals selected for 2025h1
- Author an RFC describing the final form of the project goals program
The "shiny future" we are working towards
We envision the Rust Project Goal program as a permanent and ongoing part of Rust development. People looking to learn more about what Rust is doing will be able to visit the Rust Project Goal website and get an overview; individual tracking issues will give them a detailed rundown of what's been happening.
Rust Project Goals also serve as a "front door" to Rust, giving would-be contributors (particularly more prolific contributors, contractors, or companies) a clear way to bring ideas to Rust and get them approved and tracked.
Running the Rust Project Goals program will be a relatively scalable task that can be executed by a single individual.
Design axioms
- Goals are a contract. Goals are meant to be a contract between the owner and project teams. The owner commits to doing the work. The project commits to supporting that work.
- Goals aren't everything, but they are our priorities. Goals are not meant to cover all the work the project will do. But goals do get prioritized over other work to ensure the project meets its commitments.
- Goals cover a problem, not a solution. As much as possible, the goal should describe the problem to be solved, not the precise solution. This also implies that accepting a goal means the project is committing that the problem is a priority: we are not committing to accept any particular solution.
- Nothing good happens without an owner. Rust endeavors to run an open, participatory process, but ultimately achieving any concrete goal requires someone (or a small set of people) to take ownership of that goal. Owners are entrusted to listen, take broad input, and steer a well-reasoned course in the tradeoffs they make towards implementing the goal. But this power is not unlimited: owners make proposals, but teams are ultimately the ones that decide whether to accept them.
- To everything, there is a season. While there will be room for accepting new goals that come up during the year, we primarily want to pick goals during a fixed time period and use the rest of the year to execute.
Ownership and team asks
Owner: Niko Matsakis
- Niko Matsakis can commit 20% time (an average of 1 day per week) to pursue this task, which he estimates to be sufficient.
| Subgoal | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Begin soliciting goals in Nov 2024 | nikomatsakis | |
| Approve goal slate for 2025h1 | leadership-council | |
| Top-level Rust blog post for 2025h1 goals | nikomatsakis | |
| January goal update | nikomatsakis | |
| February goal update | nikomatsakis | |
| Author RFC | nikomatsakis | |
| March goal update | nikomatsakis | |
| RFC decision | leadership-council | |
| Begin soliciting goals for 2025h2 | nikomatsakis | |
| April goal update | nikomatsakis | |
| May goal update | nikomatsakis | |
| June goal update | nikomatsakis | |
Definitions
Definitions for terms used above:
- Discussion and moral support is the lowest level offering, basically committing the team to nothing but good vibes and general support for this endeavor.
- Author RFC and Implementation means actually writing the code, document, whatever.
- Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
- RFC decisions means reviewing an RFC and deciding whether to accept.
- Org decisions means reaching a decision on an organizational or policy matter.
- Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
- Stabilizations means reviewing a stabilization report and deciding whether to stabilize.
- Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
- Prioritized nominations refers to prioritized lang-team response to nominated issues, with the expectation that there will be some response from the next weekly triage meeting.
- Dedicated review means identifying an individual (or group of individuals) who will review the changes, as they're expected to require significant context.
- Other kinds of decisions:
- Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
- Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
- Library API Change Proposal (ACP) describes a change to the standard library.
Frequently asked questions
None.
SVE and SME on AArch64
Arm's Rust team is David Wood, Adam Gemmell, Jacob Bramley, Jamie Cunliffe and James, as well as Kajetan Puchalski and Harry Moulton as graduates on rotation. This goal will be primarily worked on by David Wood and Jamie Cunliffe.
Summary
Over the next six months, we will aim to merge nightly support for SVE and establish a path towards stabilisation:
- propose language changes which will enable scalable vector types to be represented in Rust's type system
- land an experimental nightly implementation of SVE
- identify remaining blockers for SVE stabilisation and plan their resolution
- gain a better understanding of SME's implications for Rust and identify first steps towards design and implementation
Motivation
AArch64 is an important architecture for Rust, with two tier 1 targets and over thirty targets in lower tiers. It is widely used by some of Rust's largest stakeholders, and as a systems language it is important that Rust is able to leverage all of the hardware capabilities provided by the architecture, including new SIMD extensions: SVE and SME.
The status quo
SIMD types and instructions are a crucial element of high-performance Rust applications and allow for operating on multiple values in a single instruction. Many processors have SIMD registers of a known fixed length and provide intrinsics which operate on these registers. Arm's Neon extension is well-supported by Rust and provides 128-bit registers and a wide range of intrinsics.
Instead of releasing more extensions with ever increasing register bit widths, recent versions of AArch64 have a Scalable Vector Extension (SVE), with vector registers whose width depends on the CPU implementation and bit-width-agnostic intrinsics for operating on these registers. By using SVE, code won't need to be re-written using new architecture extensions with larger registers, new types and intrinsics, but instead will work on newer processors with different vector register lengths and performance characteristics.
SVE has interesting and challenging implications for Rust, introducing value types with sizes that can only be known at compilation time, requiring significant work on the language and compiler. Arm has since introduced Scalable Matrix Extensions (SME), building on SVE to add new capabilities to efficiently process matrices, with even more interesting implications for Rust.
Hardware with SVE is generally available, and key Rust stakeholders want to be able to use these architecture features from Rust. In a recent discussion on SVE, Amanieu, co-lead of the library team, said:
I've talked with several people in Google, Huawei and Microsoft, all of whom have expressed a rather urgent desire for the ability to use SVE intrinsics in Rust code, especially now that SVE hardware is generally available.
While SVE is specifically an AArch64 extension, the infrastructure for scalable vectors in Rust should also enable Rust to support RISC-V's "V" Vector Extension, and this goal will endeavour to extend Rust in an architecture-agnostic way. SVE is supported in C through Arm's C Language Extensions (ACLE) but requires a change to the C standard (documented in pages 122-126 of the 2024Q3 ACLE), so Rust has an opportunity to be the first systems programming language with native support for these hardware capabilities.
SVE is currently entirely unsupported by Rust. There is a long-standing RFC for the feature which proposes special-casing SVE types in the type system, and an experimental implementation of this RFC. While these efforts have been very valuable in understanding the challenges involved in implementing SVE in Rust, and in providing an experimental forever-unstable implementation, they will not be able to be stabilised as-is.
This goal's owners have a nearly-complete RFC proposing language changes which will allow scalable vectors to fit into Rust's type system. This pre-RFC has been informally discussed with members of the language and compiler teams and will be submitted alongside this project goal.
The next 6 months
The primary objective of this initial goal is to land a nightly experiment with SVE and establish a path towards stabilisation:
- Landing a nightly experiment is nearing completion, having been in progress for some time. Final review comments are being addressed and both RFC and implementation will be updated shortly.
- A comprehensive RFC proposing extensions to the type system will be opened alongside this goal. It will primarily focus on extending the `Sized` trait so that SVE types, which are value types with a static size known at runtime but unknown at compilation time, can implement `Copy` despite not implementing `Sized` (see the illustrative sketch below).
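To make the fixed-width vs. scalable distinction concrete, here is a minimal sketch. The Neon half uses real, stable `std::arch::aarch64` intrinsics; the SVE half is described only in comments, because no stable Rust surface for it exists yet, and nothing below should be read as the actual design being proposed in the RFC.

```rust
// Illustrative sketch only -- not the RFC's proposed design.
#[cfg(target_arch = "aarch64")]
fn add_four_lanes(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    use std::arch::aarch64::{vaddq_f32, vld1q_f32, vst1q_f32};
    let mut out = [0.0f32; 4];
    // Neon registers are always 128 bits, so `float32x4_t` has a compile-time
    // size and the usual `Sized`/`Copy` machinery just works.
    unsafe {
        let va = vld1q_f32(a.as_ptr());
        let vb = vld1q_f32(b.as_ptr());
        vst1q_f32(out.as_mut_ptr(), vaddq_f32(va, vb));
    }
    out
}

// An SVE equivalent would operate on a vector type whose lane count is chosen
// by the hardware at runtime (some hardware-defined multiple of f32 lanes).
// Such a type has a fixed size during execution but no compile-time size, so
// it cannot implement `Sized` in today's Rust -- yet it still needs
// `Copy`-like value semantics. Closing that gap is what the proposed `Sized`
// extensions are about.
```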
The "shiny future" we are working towards
Adding support for Scalable Matrix Extensions in Rust is the next logical step following SVE support. There are still many unknowns regarding what this will involve and part of this goal or the next goal will be understanding these unknowns better.
Design axioms
- Avoid overfitting. It's important that whatever extensions to Rust's type system are proposed are not narrowly tailored to SVE/SME support, can be used to support similar extensions from other architectures, and unblock or enable other desired Rust features wherever possible and practical.
- Low-level control. Rust should be able to leverage the full capabilities and performance of the underlying hardware features and should strive to avoid inherent limitations in its support.
- Rusty-ness. Extensions to Rust to support these hardware capabilities should align with Rust's design axioms and feel like natural extensions of the type system.
Ownership and team asks
Here is a detailed list of the work to be done and who is expected to do it. This table includes the work to be done by owners and the work to be done by Rust teams (subject to approval by the team in an RFC/FCP).
| Subgoal | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Discussions and moral support | lang, types, compiler | |
| Land nightly experiment for SVE types | Jamie Cunliffe | |
| ↳ Author RFC | | Update rfcs#3268, will still rely on exceptions in the type system |
| ↳ RFC decision | lang, types | |
| ↳ Implementation | | Update rust#118917 |
| ↳ Standard reviews | compiler | |
| Upstream SVE types and intrinsics | Jamie Cunliffe | Using repr(scalable) from previous work, upstream the nightly intrinsics and types |
| Extending type system to support scalable vectors | David Wood | |
| ↳ Author RFC | | |
| ↳ RFC decision | lang, types | |
| ↳ Implementation | | |
| ↳ Standard reviews | compiler | |
| Stabilize SVE types | Jamie Cunliffe, David Wood | |
| ↳ Implementation | Jamie Cunliffe | Update existing implementation to use new type system features |
| ↳ Stabilisations | lang | |
| ↳ Blog post announcing feature | David Wood | |
| Investigate SME support | Jamie Cunliffe, David Wood | |
| ↳ Discussions and moral support | lang, types, compiler | |
| ↳ Draft next goal | David Wood | |
Definitions
Definitions for terms used above:
- Discussion and moral support is the lowest level offering, basically committing the team to nothing but good vibes and general support for this endeavor.
- Author RFC and Implementation means actually writing the code, document, whatever.
- Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
- RFC decisions means reviewing an RFC and deciding whether to accept.
- Org decisions means reaching a decision on an organizational or policy matter.
- Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
- Stabilizations means reviewing a stabilization report and deciding whether to stabilize.
- Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
- Prioritized nominations refers to prioritized lang-team response to nominated issues, with the expectation that there will be some response from the next weekly triage meeting.
- Dedicated review means identifying an individual (or group of individuals) who will review the changes, as they're expected to require significant context.
- Other kinds of decisions:
- Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
- Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
- Library API Change Proposal (ACP) describes a change to the standard library.
Frequently asked questions
None yet.
Not accepted
This section contains goals that were proposed but ultimately not accepted, either for want of resources or consensus. In many cases, narrower versions of these goals were proposed instead.
| Goal | Owner | Team |
| --- | --- | --- |
Goal motivations
The motivation section for a goal should sell the reader on why this is an important problem to address.
The first paragraph should begin with a concise summary of the high-level goal and why it's important. This is a kind of summary that helps the reader to get oriented.
The status quo section then goes into deeper detail on the problem and how things work today. It should answer the following questions:
- Who is the target audience? Is it a particular group of Rust users, such as those working in a specific domain? Contributors?
- What do these users do now when they have this problem, and what are the shortcomings of that?
The next few steps can explain how you plan to change that and why these are the right next steps to take.
Finally, the shiny future section can put the goal into a longer term context. It's ok if the goal doesn't have a shiny future beyond the next few steps.
Owners
To be fully accepted, a goal must have a designated owner. This is ideally a single, concrete person, though it can be a small group.
Goals without owners can only be accepted in provisional form.
Owners keep things moving
Owners are the ones ultimately responsible for the goal being completed. They stay on top of the current status and make sure that things keep moving. When there is disagreement about the best path forward, owners are expected to make sure they understand the tradeoffs involved and then to use their good judgment to resolve it in the best way.
When concerns are raised with their design, owners are expected to embody not only the letter but also the spirit of the Rust Code of Conduct. They treasure dissent as an opportunity to improve their design. But they also know that every good design requires compromises and tradeoffs and likely cannot meet every need.
Owners own the proposal, teams own the decision
Even though owners are the ones who author the proposal, Rust teams are the ones to make the final decision. Teams can ultimately overrule an owner: they can ask the owner to come back with a modified proposal that weighs the tradeoffs differently. This is right and appropriate, because teams are the ones we recognize as having the best broad understanding of the domain they maintain. But teams should use their power judiciously, because the owner is typically the one who understands the tradeoffs for this particular goal most deeply.
Owners report regularly on progress
One of the key responsibilities of the owner is regular status reporting. Each active project goal is given a tracking issue. Owners are expected to post updates on that tracking issue when they are pinged by the bot. The project will be posting regular blog posts that are generated in a semi-automated fashion from these updates: if the post doesn't have new information, then we will simply report that the owner has not provided an update. We will also reach out to owners who are not providing updates to see whether the goal is in fact stalled and should be removed from the active list.
Ownership is a position of trust
Giving someone ownership of a goal is an act of faith — it means that we consider them to be an individual of high judgment who understands Rust and its values and will act accordingly. This implies that we are unlikely to take a goal if the owner is not known to the project. They don't necessarily have to have worked on Rust, but they have to have enough of a reputation that we can evaluate whether they're going to do a good job.
The project goal template includes a number of elements designed to increase trust:
- The "shiny future" and design axioms give a "preview" of how owner is thinking about the problem and the way that tradeoffs will be resolved.
- The milestones section indicates the rough order in which they will approach the problem.
Provisional goals
Provisional goals are goals accepted without an assigned owner. Teams sometimes accept provisional goals to represent work that they feel is important but for which they have not yet found a suitable owner. Interested individuals who wish to improve Rust can talk to the teams about becoming the owner for a provisional goal. The existence of a provisional goal can also be helpful when applying for funding or financial support.
See the provisionally accepted goals for 2024H2.
Design axioms
Each project goal includes a design axioms section. Design axioms capture the guidelines you will use to drive your design. Since goals generally come early in the process, the final design is not known -- axioms are a way to clarify the constraints you will be keeping in mind as you work on your design. Axioms will also help you operate more efficiently, since you can refer back to them to help resolve tradeoffs more quickly.
Examples
Axioms about axioms
- Axioms capture constraints. Axioms capture the things you are trying to achieve. The goal ultimately is that your design satisfies all of them as much as possible.
- Axioms express tradeoffs. Axioms are ordered, and -- in case of conflict -- the axioms that come earlier in the list take precedence. Since axioms capture constraints, this doesn't mean you just ignore the axioms that take lower precedence, but it usually means you meet them in a "less good" way. For example, maybe consider a lint instead of a hard error?
- Axioms should be specific to your goal. Rust has general design axioms that apply to the project as a whole; the axioms you write for a goal should capture the constraints and tradeoffs specific to that goal.
- Axioms are short and memorable. The structure of an axiom should begin with a short, memorable bolded phrase -- something you can recite in meetings. Then a few sentences that explain in more detail or elaborate.
Axioms about the project goal program
- Goals are a contract. Goals are meant to be a contract between the owner and project teams. The owner commits to doing the work. The project commits to supporting that work.
- Goals aren't everything, but they are our priorities. Goals are not meant to cover all the work the project will do. But goals do get prioritized over other work to ensure the project meets its commitments.
- Goals cover a problem, not a solution. As much as possible, the goal should describe the problem to be solved, not the precise solution. This also implies that accepting a goal means the project is committing that the problem is a priority: we are not committing to accept any particular solution.
- Owners are first-among-equals. Rust endeavors to run an open, participatory process, but ultimately achieving any concrete goal requires someone (or a small set of people) to take ownership of that goal. Owners are entrusted to listen, take broad input, and steer a well-reasoned course in the tradeoffs they make towards implementing the goal. But this power is not unlimited: owners make proposals, but teams are ultimately the ones that decide whether to accept them.
- To everything, there is a season. While there will be room for accepting new goals that come up during the year, we primarily want to pick goals during a fixed time period and use the rest of the year to execute.
Axioms about Rust itself
Still a work in progress! See the Rust design axioms repository.
Frequently asked questions
Where can I read more about axioms?
Axioms are very similar to approaches used in a number of places...
- AWS tenets
- ... dig up the other links ...
RFC
The RFC proposing the goal program has been opened. See RFC #3614.
Propose a new goal
What steps do I take to submit a goal?
Goal proposals are submitted as pull requests:
- Fork the GitHub repository and clone it locally
- Copy `src/TEMPLATE.md` to a file like `src/2025h1/your-goal-name.md`. Don't forget to run `git add`.
- Fill out the `your-goal-name.md` file with details, using the template and other goals as an example.
  - The goal text does not have to be complete. It can be missing details.
- Open a PR.
Who should propose a goal?
Opening a goal is an indication that you (or your company, etc.) are willing to put up the resources needed to make it happen, at least if you get the indicated support from the teams. These resources are typically development time and effort, but they could be funding (in that case, we'd want to identify someone to take up the goal). If you pass that bar, then by all means, yes, open a goal.
Note though that controversial goals are likely to not be accepted. If you have an idea that you think people won't like, then you should find ways to lower the ask of the teams. For example, maybe the goal should be to perform experiments to help make the case for the idea, rather than jumping straight to implementation.
Can I still do X, even if I don't submit a goal for it?
Yes. Goals are not mandatory for work to proceed. They are a tracking mechanism to help stay on course.
TEMPLATE (replace with title of your goal)
Instructions: Copy this template to a fresh file with a name based on your plan. Give it a title that describes what you plan to get done in the next 6 months (e.g., "stabilize X" or "nightly support for X" or "gather data about X"). Feel free to replace any text with anything, but there are placeholders designed to help you get started.
| Metadata | |
| --- | --- |
| Owner(s) | Github usernames or other identifying info for goal owners |
| Teams | Names of teams being asked to commit to the goal |
| Status | Proposed |
Summary
Short description of what you will do over the next 6 months.
Motivation
Begin with a few sentences summarizing the problem you are attacking and why it is important.
The status quo
Elaborate in more detail about the problem you are trying to solve. This section is making the case for why this particular problem is worth prioritizing with project bandwidth. A strong status quo section will (a) identify the target audience and (b) give specifics about the problems they are facing today. Sometimes it may be useful to start sketching out how you think those problems will be addressed by your change, as well, though it's not necessary.
The next 6 months
Sketch out the specific things you are trying to achieve in this goal period. This should be short and high-level -- we don't want to see the design!
The "shiny future" we are working towards
If this goal is part of a larger plan that will extend beyond this goal period, sketch out the goal you are working towards. It may be worth adding some text about why these particular goals were chosen as the next logical step to focus on.
This text is NORMATIVE, in the sense that teams should review this and make sure they are aligned. If not, then the shiny future should be moved to frequently asked questions with a title like "what might we do next".
Design axioms
This section is optional, but including design axioms can help you signal how you intend to balance constraints and tradeoffs (e.g., "prefer ease of use over performance" or vice versa). Teams should review the axioms and make sure they agree. Read more about design axioms.
Ownership and team asks
Owner: Identify a specific person or small group of people if possible, else the group that will provide the owner. Github user names are commonly used to remove ambiguity.
This section defines the specific work items that are planned and who is expected to do them. It should also include what will be needed from Rust teams. The table below shows some common sets of asks and work, but feel free to adjust it as needed. Every row in the table should either correspond to something done by a contributor or something asked of a team. For items done by a contributor, list the contributor, or ![Help wanted][] if you don't yet know who will do it. For things asked of teams, list the name of the team. The things typically asked of teams are defined in the Definitions section below.
| Subgoal | Owner(s) or team(s) | Notes |
| --- | --- | --- |
| Discussion and moral support | cargo | |
| Stabilize Feature X (typical language feature) | | |
| ↳ Author RFC | Goal owner, typically | |
| ↳ Implementation | Goal owner, typically | |
| ↳ Standard reviews | compiler | |
| ↳ Design meeting | lang | |
| ↳ RFC decision | lang | |
| ↳ Secondary RFC review | lang | |
| ↳ Author stabilization report | Goal owner, typically | |
| ↳ Stabilization decision | lang | |
| Nightly experiment for X | | |
| ↳ Lang-team experiment | lang | |
| ↳ Author RFC | Goal owner, typically | |
| ↳ Implementation | Goal owner, typically | |
| ↳ Standard reviews | compiler | |
| Inside Rust blog post inviting feedback | (any team) | |
| Top-level Rust blog post inviting feedback | leadership-council | |
Definitions
Definitions for terms used above:
- Discussion and moral support is the lowest level offering, basically committing the team to nothing but good vibes and general support for this endeavor.
- Author RFC and Implementation means actually writing the code, document, whatever.
- Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
- RFC decisions means reviewing an RFC and deciding whether to accept.
- Org decisions means reaching a decision on an organizational or policy matter.
- Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
- Stabilizations means reviewing a stabilization report and deciding whether to stabilize.
- Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
- Prioritized nominations refers to prioritized lang-team response to nominated issues, with the expectation that there will be some response from the next weekly triage meeting.
- Dedicated review means identifying an individual (or group of individuals) who will review the changes, as they're expected to require significant context.
- Other kinds of decisions:
- Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
- Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
- Library API Change Proposal (ACP) describes a change to the standard library.
Frequently asked questions
What do I do with this space?
This is a good place to elaborate on your reasoning above -- for example, why did you put the design axioms in the order that you did? It's also a good place to put the answers to any questions that come up during discussion. The expectation is that this FAQ section will grow as the goal is discussed and eventually should contain a complete summary of the points raised along the way.
Report status
Every accepted project goal has an associated tracking issue. These are created automatically by the project-goals admin tool. Your job as a project goal owner is to provide regular status updates in the form of a comment indicating how things are going. These will be collected into regular blog posts on the Rust blog as well as being promoted in other channels.
Updating the progress bar
When we display the status of goals, we include a progress bar based on your documented plan. We recommend you keep this up to date. You can mix and match any of the following ways to list steps.
Checkboxes
The first option is to add checkboxes into the top comment on the tracking issue. Simply add boxes like `* [ ]`, or `* [x]` for a completed item. The tool will count the number of checkboxes and use that to reflect progress. Your tracking issue will be pre-populated with checkboxes based on the goal doc, but feel free to edit them.
Best practice is to start with a high level list of tasks:
* [ ] Author code
* [ ] Author RFC
* [ ] Accept RFC
* [ ] Test
* [ ] Stabilize
Each time you provide a status update, check off the items that are done, and add new, more detailed to-do items that represent your next steps.
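As a rough illustration of that counting behaviour (a sketch of the idea only, not the actual `mdbook-goals` implementation), progress can be derived by scanning the issue body for checkbox markers:

```rust
/// Count (completed, total) checkboxes in a tracking-issue body.
/// Sketch only; the real tool may parse markdown differently.
fn checkbox_progress(issue_body: &str) -> (usize, usize) {
    let (mut done, mut total) = (0, 0);
    for line in issue_body.lines() {
        let line = line.trim_start();
        // Accept both `* [ ]`/`* [x]` and `- [ ]`/`- [x]` list markers.
        if line.starts_with("* [ ]") || line.starts_with("- [ ]") {
            total += 1;
        } else if line.starts_with("* [x]") || line.starts_with("- [x]") {
            total += 1;
            done += 1;
        }
    }
    (done, total)
}

fn main() {
    let body = "* [x] Author code\n* [x] Author RFC\n* [ ] Accept RFC\n* [ ] Test\n* [ ] Stabilize";
    let (done, total) = checkbox_progress(body);
    println!("{done}/{total} steps complete"); // prints "2/5 steps complete"
}
```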
Search queries
For larger project goals, it can be more convenient to track progress via github issues. You can do that by removing all the checkboxes from your issue and instead adding a "Tracked issues" line into the metadata table on your tracking issue. It should look like this:
| Metadata | |
| -------- | --- |
| Owner(s) | ... |
| Team(s) | ... |
| Goal document | ... |
| Tracked issues | [rust-lang/rust label:A-edition-2024 label:C-tracking-issue -label:t-libs](...) |
The first 3 lines should already exist. The last line is the one you have to add. The "value" column should have a markdown link, the contents of which begin with a repo name and then search parameters in GitHub's format. The tool will conduct the search and count the number of open vs closed issues. The `(...)` part of the link should point to GitHub so that users can click to do the search on their own.
You can find an example on the Rust 2024 Edition tracking issue.
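For illustration, the format described above could be parsed with a hypothetical helper like the following (a sketch only, not the tool's real code): the link text splits into the leading `org/repo` name and the remaining GitHub search parameters.

```rust
/// Split a "Tracked issues" value such as
/// "rust-lang/rust label:A-edition-2024 label:C-tracking-issue -label:t-libs"
/// into the repository name and the GitHub search query.
/// Sketch only; the real mdbook-goals tooling may handle this differently.
fn parse_tracked_issues(value: &str) -> Option<(&str, &str)> {
    let (repo, query) = value.trim().split_once(' ')?;
    // The first token must look like an `org/repo` reference.
    if repo.split('/').count() != 2 {
        return None;
    }
    Some((repo, query.trim()))
}

fn main() {
    let v = "rust-lang/rust label:A-edition-2024 label:C-tracking-issue -label:t-libs";
    let (repo, query) = parse_tracked_issues(v).expect("well-formed value");
    println!("repo = {repo}, query = {query}");
}
```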
Use "See also" to refer to other tracking issues
If you already have a tracking issue elsewhere, just add a "See also" line into your metadata. The value should be a comma- or space-separated list of URLs or `org/repo#issue` GitHub references:
| Metadata | |
| -------- | --- |
| Owner(s) | ... |
| Team(s) | ... |
| Goal document | ... |
| See also | rust-lang/rust#123 |
We will recursively open up the "see also" issue and extract checkboxes (or search queries / see-also tags) from there.
Binary issues
If we don't find any of the above, we will consider your issue either 0% done if it is not yet closed or 100% done if it is.
Status update comments
Status updates are posted as comments on the Github tracking issue. You will receive regular pings on Zulip to author status updates periodically. It's a good idea to take the opportunity to update your progress checkboxes as well.
There is no strict format for these updates but we recommend including the following information:
- What happened since the last update? Were any key decisions made or milestones achieved?
- What is the next step to get done?
- Are you blocked on anyone or anything?
- Is there any opportunity for others to pitch in and help out?
Closing the issue
Closing the tracking issue is a signal that you are no longer working on it. This can be because you've achieved your goal or because you have decided to focus on other things. Also, tracking issues will automatically be closed at the end of the project goal period.
When you close an issue, the state of your checkboxes makes a difference. If they are 100% finished, the goal will be listed as completed. If there are unchecked items, the assumption is that the goal is only partly done, and it will be listed as unfinished. So make sure to check the boxes if the goal is done!
Running the program
So... somebody suckered you into serving as the owner of the goals program. Congratulations! This page will walk you through the process.
Call for proposals
Each goal milestone corresponds to six months, designated in the format YYYYHN, e.g., 2024H2 or 2025H1. To launch a new goal season, you should get started a month or two before the new season starts:
- For an H1 season, start around mid Oct or November of the year before.
- For an H2 season, start around mid April or May of the same year.
This is the checklist of steps to start accepting goal proposals:

- Prepare a Call For Proposals blog post on the Inside Rust blog based on this sample.
  - We use Inside Rust and not the main blog because the target audience is would-be Rust contributors and maintainers.
- Update the main README page to indicate that the next round of goals is being accepted.
- Create a new directory `src/YYYYhN`, e.g., `src/2025h1`, with the following files. Note that the sample files below include `<!-- XXX -->` directives that are detected by the mdbook plugin and replaced with appropriate content automatically.
  - A `src/YYYYhN/README.md` file that contains the draft RFC.
  - A `src/YYYYhN/goals.md` file containing the draft goal listing.
  - A `src/YYYYhN/not_accepted.md` file containing the list of goals that were not accepted.
- Modify SUMMARY.md to include your new milestone with some text like what is shown below.
Sample `SUMMARY.md` comments from 2025H1:
# ⏳ 2025H1 goal process
- [Overview](./2025h1/README.md)
- [Proposed goals](./2025h1/goals.md)
- [Goals not accepted](./2025h1/not_accepted.md)
Receiving PRs
to be written
Preparing the RFC
to be written
Opening the RFC
to be written
Once the RFC is accepted
to be written
Running the goal season
- Every Monday, go to Zulip and run: `@triagebot ping-goals 14 Oct-21`
- To prepare the monthly blog post:
  - Run the `cargo run -- updates` command.
  - Create a new post in the main Rust blog found at the blog.rust-lang.org repository.
Finalizing the goal season
Call for PRs: YYYYHN goals
NOTE: This is a sample blog post you can use as a starting point. To begin a new goal season (e.g., 2222H1), do the following:
- Copy this file to the blog.rust-lang.org repository as a new post.
- Search and replace `YYYYHN` with `2222H1` and delete this section.
- Look for other "TBD" sections; you'll want to replace those eventually.
As of today, we are officially accepting proposals for Rust Project Goals targeting YYYYHN (the (TBD) half of YYYY). If you'd like to participate in the process, or just to follow along, please check out the YYYYHN goal page. It includes listings of the goals currently under consideration, more details about the goals program, and instructions for how to submit a goal.
What is the project goals program and how does it work?
Every six months, the Rust project commits to a set of goals for the upcoming half-year. The process involves:
- the owner of the goal program (currently me) posts a call for proposals (this post);
- would-be goal owners open PRs against the rust-project-goals repository;
- the goal-program owner gathers feedback on these goals and chooses some of them to be included in the RFC proposing the final slate of goals.
To get an idea what the final slate of goals looks like, check out the RFC from the previous round of goals, RFC (TBD). The RFC describes a set of goals, designates a few of them as flagship goals, and summarizes the work expected from each team. The RFC is approved by (at least) the leads of each team, effectively committing their team to provide the support that is described.
Should I submit a goal?
Opening a goal is an indication that you (or your company, etc.) are willing to put up the resources needed to make it happen, at least if you get the indicated support from the teams. These resources are typically development time and effort, but they could be funding (in that case, we'd want to identify someone to take up the goal). If you pass that bar, then by all means, yes, open a goal.
Note though that controversial goals are likely to not be accepted. If you have an idea that you think people won't like, then you should find ways to lower the ask of the teams. For example, maybe the goal should be to perform experiments to help make the case for the idea, rather than jumping straight to implementation.
Can I still do X, even if I don't submit a goal for it?
Yes. Goals are not mandatory for work to proceed. They are a tracking mechanism to help stay on course.
Conclusion
The Rust Project Goals program is driving progress, increasing transparency, and energizing the community. As we enter the second round, we invite you to contribute your ideas and help shape Rust's future. Whether you're proposing a goal or following along, your engagement is vital to Rust's continued growth and success. Join us in making Rust even better in 2025!
Sample: Text for the main README
NOTE: This is a sample section you can use as a starting point.
- Copy and paste the markdown below into the main README.
- Replace `YYYYHN` with `2222h1` or whatever.

(Note that the links on this page are relative to the main README, not its current location.)
Next goal period (YYYYHN)
The next goal period will be YYYYHN, running from MM 1 to MM 30. We are currently in the process of assembling goals. Click here to see the current list. If you'd like to propose a goal, instructions can be found here.
Sample RFC
NOTE: This is a sample RFC you can use as a starting point. To begin a new goal season (e.g., 2222H1), do the following:
- Copy this file to `src/2222H1/README.md`.
- Search and replace `YYYYHN` with `2222H1` and delete this section.
- Look for other "TBD" sections; you'll want to replace those eventually.
- Customize anything else that seems relevant.
Summary
We are in the process of assembling the goal slate.
This is a draft for the eventual RFC proposing the YYYYHN goals.
Motivation
The YYYYHN goal slate consists of 0 project goals, of which we have selected (TBD) as flagship goals. Flagship goals represent the goals expected to have the broadest overall impact.
How the goal process works
Project goals are proposed bottom-up by an owner, somebody who is willing to commit resources (time, money, leadership) to seeing the work get done. The owner identifies the problem they want to address and sketches the solution of how they want to do so. They also identify the support they will need from the Rust teams (typically things like review bandwidth or feedback on RFCs). Teams then read the goals and provide feedback. If the goal is approved, teams are committing to support the owner in their work.
Project goals can vary in scope from an internal refactoring that affects only one team to a larger cross-cutting initiative. No matter its scope, accepting a goal should never be interpreted as a promise that the team will make any future decision (e.g., accepting an RFC that has yet to be written). Rather, it is a promise that the team are aligned on the contents of the goal thus far (including the design axioms and other notes) and will prioritize giving feedback and support as needed.
Of the proposed goals, a small subset are selected by the roadmap owner as flagship goals. Flagship goals are chosen for their high impact (many Rust users will be impacted) and their shovel-ready nature (the org is well-aligned around a concrete plan). Flagship goals are the ones that will feature most prominently in our public messaging and which should be prioritized by Rust teams where needed.
Rust’s mission
Our goals are selected to further Rust's mission of empowering everyone to build reliable and efficient software. Rust targets programs that prioritize
- reliability and robustness;
- performance, memory usage, and resource consumption; and
- long-term maintenance and extensibility.
We consider "any two out of the three" as the right heuristic for projects where Rust is a strong contender or possibly the best option.
Axioms for selecting goals
We believe that...
- Rust must deliver on its promise of peak performance and high reliability. Rust’s maximum advantage is in applications that require peak performance or low-level systems capabilities. We must continue to innovate and support those areas above all.
- Rust's goals require high productivity and ergonomics. Being attentive to ergonomics broadens Rust impact by making it more appealing for projects that value reliability and maintenance but which don't have strict performance requirements.
- Slow and steady wins the race. For this first round of goals, we want a small set that can be completed without undue stress. As the Rust open source org continues to grow, the set of goals can grow in size.
Guide-level explanation
Flagship goals
The flagship goals proposed for this roadmap are as follows:
(TBD)
Why these particular flagship goals?
(TBD--typically one paragraph per goal)
Project goals
The slate of additional project goals are as follows. These goals all have identified owners who will drive the work forward as well as a viable work plan. The goals include asks from the listed Rust teams, which are cataloged in the reference-level explanation section below.
Some goals here do not yet have an owner (look for the badge). Teams have reserved some capacity to pursue these goals but until an appropriate owner is found they are only considered provisionally accepted. If you are interested in serving as the owner for one of these orphaned goals, reach out to the mentor listed in the goal to discuss.
| Goal | Owner | Team |
| --- | --- | --- |
Reference-level explanation
The following table highlights the asks from each affected team. The "owner" in the column is the person expecting to do the design/implementation work that the team will be approving.
Definitions
Definitions for terms used above:
- Author RFC and Implementation means actually writing the code, document, whatever.
- Design meeting means holding a synchronous meeting to review a proposal and provide feedback (no decision expected).
- RFC decisions means reviewing an RFC and deciding whether to accept.
- Org decisions means reaching a decision on an organizational or policy matter.
- Secondary review of an RFC means that the team is "tangentially" involved in the RFC and should be expected to briefly review.
- Stabilizations means reviewing a stabilization report and deciding whether to stabilize.
- Standard reviews refers to reviews for PRs against the repository; these PRs are not expected to be unduly large or complicated.
- Other kinds of decisions:
- Lang team experiments are used to add nightly features that do not yet have an RFC. They are limited to trusted contributors and are used to resolve design details such that an RFC can be written.
- Compiler Major Change Proposal (MCP) is used to propose a 'larger than average' change and get feedback from the compiler team.
- Library API Change Proposal (ACP) describes a change to the standard library.
Goals
NOTE: This is a sample starting point for the `goals.md` page in a milestone directory.

- Search and replace `YYYYHN` with `2222H1` and delete this section.
This page lists the 0 project goals proposed for YYYYHN.
Just because a goal is listed here does not mean it has been accepted. The owner of the goal process makes the final decisions on which goals to include and prepares an RFC to ask approval from the teams.
Flagship goals
Flagship goals represent the goals expected to have the broadest overall impact.
| Goal | Owner | Team |
| --- | --- | --- |
Other goals
These are the other proposed goals.
Orphaned goals. Some goals here are marked with the badge for their owner. These goals are called "orphaned goals". Teams have reserved some capacity to pursue these goals but until an appropriate owner is found they are only considered provisionally accepted. If you are interested in serving as the owner for one of these orphaned goals, reach out to the mentor listed in the goal to discuss.
| Goal | Owner | Team |
| --- | --- | --- |
Overall setup
The rust-project-goals repository is set up as follows
- an mdbook for generating the main content
- a Rust binary in `src` that serves as
  - a runnable utility for doing various admin functions on the CLI (e.g., generating a draft RFC)
  - an mdbook preprocessor for generating content like the list of goals
  - a utility invoked in CI that can query github and produce a JSON with the goal status
- pages on the Rust website that fetch JSON data from the rust-project-goals repo to generate content
  - the JSON data is generated by the Rust binary
- tracking issues for each active project goal:
  - tagged with `C-tracking-issue`
  - and added to the appropriate milestone
- triagebot modifications to link Zulip and the repo tracking issues
  - the command `@triagebot ping-goals N D` will ping all active goal owners to ask them to add updates (see the sketch below)
    - N is a threshold number of days; if people have posted an update within the last N days, we won't bother them. Usually I do this as the current date + 7, so that people who posted during the current month or the last week of the previous month don't get any pings.
    - D is a word like `Sep-22` that indicates the day
  - the bot monitors for comments on github and forwards them to Zulip
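For the threshold rule just described, the decision boils down to something like the following sketch (not triagebot's actual code):

```rust
/// Decide whether an owner should be pinged, given how many days ago their
/// most recent update was posted and the threshold N passed to `ping-goals`.
/// Sketch of the rule described above, not triagebot's real implementation.
fn should_ping(days_since_last_update: Option<u32>, threshold_days: u32) -> bool {
    match days_since_last_update {
        // An update within the last N days: don't bother them.
        Some(days) => days > threshold_days,
        // No update at all yet: always ping.
        None => true,
    }
}
```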
Mdbook Plugin
The mdbook is controlled by the `mdbook-goals` plugin in this repo.
This plugin makes various edits to the source:
- Linking usernames like `@foo` to their github page and replacing them with their display name.
- Linking GH references like rust-lang/rust#123.
- Collating goals, creating tables, etc.
The plugin can also be used from the command line.
Expected book structure
The plugin is designed for the book to have a directory per phase of the goal program, e.g., `src/2024h2`, `src/2025h1`, etc.
Within this directory there should be:
- A `README.md` file that will contain the draft slate RFC.
- One file per goal. Each goal file must follow the TEMPLATE structure and in particular must have
  - a metadata table in its first section
  - a Summary section
  - an "Ownership and team asks" section containing the subgoal table
- One file per "phase" of the program, e.g., `proposed.md`, etc. (These are not mandatory.)
Plugin replacement text
The plugin will replace the following placeholder texts.
Each placeholder is enclosed within an HTML comment `<!-- -->`.
Goal count
The placeholder `<!-- #GOALS -->` will be replaced with the total number of goals under consideration (this count excludes goals with the status `Not accepted`).
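As an illustration of that substitution (a sketch only; the real preprocessor operates on the mdbook chapter structure rather than raw strings), the replacement amounts to:

```rust
/// Replace the goal-count placeholder in a chapter's markdown with the number
/// of goals under consideration. Sketch only, not the actual mdbook-goals code.
fn replace_goal_count(chapter_text: &str, goals_under_consideration: usize) -> String {
    chapter_text.replace("<!-- #GOALS -->", &goals_under_consideration.to_string())
}

fn main() {
    let page = "This page lists the <!-- #GOALS --> project goals proposed for 2025h1.";
    assert_eq!(
        replace_goal_count(page, 2),
        "This page lists the 2 project goals proposed for 2025h1."
    );
}
```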
Goal listing
The placeholder `<!-- GOALS '$Status' -->` will insert a goal table listing goals of the given status `$Status`, e.g., `<!-- GOALS 'Flagship' -->`. You can also list multiple status items, e.g., `<!-- GOALS 'Accepted,Orphaned' -->`.
Commands
The `mdbook-goals` plugin can also be run with `cargo run --` and it has a number of useful commands:
cargo run -- --help
Creating tracking issues
Usage:
> cargo run -- issues
The `issues` command is used to create tracking issues at the start of a project goal session. When you first run it, it will simply tell you what actions it plans to take.
To actually commit and create the issues, supply the `--commit` flag:
> cargo run -- issues --commit
This will also edit the goal documents to include a link to each created tracking issue. You should commit those edits.
You can later re-run the command and it will not repeat actions it has already taken.
Summarize updates for the monthly blog post
Usage:
> cargo run -- updates --help
The `updates` command generates the starting point for a monthly blog post. The output is based on the handlebars templates found in the `templates` directory. The command you probably want most often is something like this:
> cargo run -- updates YYYYhN --vscode
which will open the blogpost in a tab in VSCode. This makes it easy to copy-and-paste over to the main Rust blog.
Blog post starting point
The blog post starting point is based on the handlebars template in `templates/updates.hbs`.
Configuring the LLM
The `updates` command makes use of an LLM hosted on AWS Bedrock to summarize people's comments. You will need to run `aws configure` and log in with some default credentials. You can skip the LLM by providing the `--quick` command-line option, but then you have to generate your own text, which can be pretty tedious.