Rust RFCs - RFC Book - Active RFC List

The “RFC” (request for comments) process is intended to provide a consistent and controlled path for changes to Rust (such as new features) so that all stakeholders can be confident about the direction of the project.

Many changes, including bug fixes and documentation improvements, can be implemented and reviewed via the normal GitHub pull request workflow.

Some changes, though, are “substantial”, and we ask that these be put through a bit of a design process and produce a consensus among the Rust community and the sub-teams.

When you need to follow this process

You need to follow this process if you intend to make “substantial” changes to Rust, Cargo, Crates.io, or the RFC process itself. What constitutes a “substantial” change is evolving based on community norms and varies depending on what part of the ecosystem you are proposing to change, but may include the following.

  • Any semantic or syntactic change to the language that is not a bugfix.
  • Removing language features, including those that are feature-gated.
  • Changes to the interface between the compiler and libraries, including lang items and intrinsics.
  • Additions to std.

Some changes do not require an RFC:

  • Rephrasing, reorganizing, refactoring, or otherwise “changing shape does not change meaning”.
  • Additions that strictly improve objective, numerical quality criteria (warning removal, speedup, better platform coverage, more parallelism, trap more errors, etc.)
  • Additions only likely to be noticed by other developers-of-rust, invisible to users-of-rust.

If you submit a pull request to implement a new feature without going through the RFC process, it may be closed with a polite request to submit an RFC first.

Sub-team specific guidelines

For more details on when an RFC is required in a given area, please see the sub-team specific guidelines later in this document: “RFC policy - the compiler”, “RFC policy - language design”, and “RFC guidelines - libraries sub-team”.

Before creating an RFC

A hastily-proposed RFC can hurt its chances of acceptance. Low quality proposals, proposals for previously-rejected features, or those that don’t fit into the near-term roadmap, may be quickly rejected, which can be demotivating for the unprepared contributor. Laying some groundwork ahead of the RFC can make the process smoother.

Although there is no single way to prepare for submitting an RFC, it is generally a good idea to pursue feedback from other project developers beforehand, to ascertain that the RFC may be desirable; having a consistent impact on the project requires concerted effort toward consensus-building.

The most common preparations for writing and submitting an RFC include talking the idea over on our official Zulip server, discussing the topic on our developer discussion forum, and occasionally posting “pre-RFCs” on the developer forum. You may file issues on this repo for discussion, but these are not actively looked at by the teams.

As a rule of thumb, receiving encouraging feedback from long-standing project developers, and particularly members of the relevant sub-team, is a good indication that the RFC is worth pursuing.

What the process is

In short, to get a major feature added to Rust, one must first get the RFC merged into the RFC repository as a markdown file. At that point the RFC is “active” and may be implemented with the goal of eventual inclusion into Rust.

  • Fork the RFC repository
  • Copy 0000-template.md to text/0000-my-feature.md (where “my-feature” is descriptive). Don’t assign an RFC number yet; this is going to be the PR number, and we’ll rename the file accordingly if the RFC is accepted.
  • Fill in the RFC. Put care into the details: RFCs that do not present convincing motivation, demonstrate lack of understanding of the design’s impact, or are disingenuous about the drawbacks or alternatives tend to be poorly-received.
  • Submit a pull request. As a pull request the RFC will receive design feedback from the larger community, and the author should be prepared to revise it in response.
  • Now that your RFC has an open pull request, use the issue number of the PR to rename the file: update your 0000- prefix to that number. Also update the “RFC PR” link at the top of the file.
  • Each pull request will be labeled with the most relevant sub-team, which will lead to its being triaged by that team in a future meeting and assigned to a member of the subteam.
  • Build consensus and integrate feedback. RFCs that have broad support are much more likely to make progress than those that don’t receive any comments. Feel free to reach out to the RFC assignee in particular to get help identifying stakeholders and obstacles.
  • The sub-team will discuss the RFC pull request, as much as possible in the comment thread of the pull request itself. Offline discussion will be summarized on the pull request comment thread.
  • RFCs rarely go through this process unchanged, especially as alternatives and drawbacks are shown. You can make edits, big and small, to the RFC to clarify or change the design, but make changes as new commits to the pull request, and leave a comment on the pull request explaining your changes. Specifically, do not squash or rebase commits after they are visible on the pull request.
  • At some point, a member of the subteam will propose a “motion for final comment period” (FCP), along with a disposition for the RFC (merge, close, or postpone).
    • This step is taken when enough of the tradeoffs have been discussed that the subteam is in a position to make a decision. That does not require consensus amongst all participants in the RFC thread (which is usually impossible). However, the argument supporting the disposition on the RFC needs to have already been clearly articulated, and there should not be a strong consensus against that position outside of the subteam. Subteam members use their best judgment in taking this step, and the FCP itself ensures there is ample time and notification for stakeholders to push back if it is made prematurely.
    • For RFCs with lengthy discussion, the motion to FCP is usually preceded by a summary comment trying to lay out the current state of the discussion and major tradeoffs/points of disagreement.
    • Before actually entering FCP, all members of the subteam must sign off; this is often the point at which many subteam members first review the RFC in full depth.
  • The FCP lasts ten calendar days, so that it is open for at least five business days. It is also advertised widely, e.g. in This Week in Rust. This way all stakeholders have a chance to lodge any final objections before a decision is reached.
  • In most cases, the FCP period is quiet, and the RFC is either merged or closed. However, sometimes substantial new arguments or ideas are raised, the FCP is canceled, and the RFC goes back into development mode.

The RFC life-cycle

Once an RFC becomes “active” then authors may implement it and submit the feature as a pull request to the Rust repo. Being “active” is not a rubber stamp, and in particular still does not mean the feature will ultimately be merged; it does mean that in principle all the major stakeholders have agreed to the feature and are amenable to merging it.

Furthermore, the fact that a given RFC has been accepted and is “active” implies nothing about what priority is assigned to its implementation, nor does it imply anything about whether a Rust developer has been assigned the task of implementing the feature. While it is not necessary that the author of the RFC also write the implementation, it is by far the most effective way to see an RFC through to completion: authors should not expect that other project developers will take on responsibility for implementing their accepted feature.

Modifications to “active” RFCs can be done in follow-up pull requests. We strive to write each RFC in a manner that it will reflect the final design of the feature; but the nature of the process means that we cannot expect every merged RFC to actually reflect what the end result will be at the time of the next major release.

In general, once accepted, RFCs should not be substantially changed. Only very minor changes should be submitted as amendments. More substantial changes should be new RFCs, with a note added to the original RFC. Exactly what counts as a “very minor change” is up to the sub-team to decide; check Sub-team specific guidelines for more details.

Reviewing RFCs

While the RFC pull request is up, the sub-team may schedule meetings with the author and/or relevant stakeholders to discuss the issues in greater detail, and in some cases the topic may be discussed at a sub-team meeting. In either case a summary from the meeting will be posted back to the RFC pull request.

A sub-team makes final decisions about RFCs after the benefits and drawbacks are well understood. These decisions can be made at any time, but the sub-team will regularly issue decisions. When a decision is made, the RFC pull request will either be merged or closed. In either case, if the reasoning is not clear from the discussion in thread, the sub-team will add a comment describing the rationale for the decision.

Implementing an RFC

Some accepted RFCs represent vital features that need to be implemented right away. Other accepted RFCs can represent features that can wait until some arbitrary developer feels like doing the work. Every accepted RFC has an associated issue tracking its implementation in the Rust repository; thus that associated issue can be assigned a priority via the triage process that the team uses for all issues in the Rust repository.

The author of an RFC is not obligated to implement it. Of course, the RFC author (like any other developer) is welcome to post an implementation for review after the RFC has been accepted.

If you are interested in working on the implementation for an “active” RFC, but cannot determine if someone else is already working on it, feel free to ask (e.g. by leaving a comment on the associated issue).

RFC Postponement

Some RFC pull requests are tagged with the “postponed” label when they are closed (as part of the rejection process). An RFC closed with “postponed” is marked as such because we want neither to think about evaluating the proposal nor about implementing the described feature until some time in the future, and we believe that we can afford to wait until then to do so. Historically, “postponed” was used to postpone features until after 1.0. Postponed pull requests may be re-opened when the time is right. We don’t have any formal process for that; ask members of the relevant sub-team.

Usually an RFC pull request marked as “postponed” has already passed an informal first round of evaluation, namely the round of “do we think we would ever possibly consider making this change, as outlined in the RFC pull request, or some semi-obvious variation of it.” (When the answer to the latter question is “no”, then the appropriate response is to close the RFC, not postpone it.)

Help this is all too informal!

The process is intended to be as lightweight as reasonable for the present circumstances. As usual, we are trying to let the process be driven by consensus and community norms, not impose more structure than necessary.

License

This repository is currently in the process of being licensed under either of:

  • Apache License, Version 2.0, (LICENSE-APACHE or https://www.apache.org/licenses/LICENSE-2.0)
  • MIT license (LICENSE-MIT or https://opensource.org/licenses/MIT)

at your option. Some parts of the repository are already licensed according to those terms. For more see RFC 2044 and its tracking issue.

Contributions

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

RFC policy - the compiler

Compiler RFCs will be managed by the compiler sub-team, and tagged T-compiler. The compiler sub-team will do an initial triage of new PRs within a week of submission. The result of triage will either be that the PR is assigned to a member of the sub-team for shepherding, the PR is closed because the sub-team believe it should be done without an RFC, or closed because the sub-team feel it should clearly not be done and further discussion is not necessary. We’ll follow the standard procedure for shepherding, final comment period, etc.

Most compiler decisions that go beyond the scope of a simple PR are made using MCPs (Major Change Proposals), not RFCs. It is therefore likely that you should file an MCP instead of an RFC for your problem.

Changes which need an RFC

  • Significant user-facing changes to the compiler with a complex design space, especially if they involve other teams as well (for example, path sanitization).
  • Any other change which causes significant backwards-incompatible changes to stable behaviour of the compiler, language, or libraries.

Changes which don’t need an RFC

  • Bug fixes, improved error messages, etc.
  • Minor refactoring/tidying up
  • Large internal refactorings or redesigns of the compiler (needs an MCP)
  • Implementing language features which have an accepted RFC.
  • New lints (these fall under the lang team). Lints are best first tried out in clippy and then uplifted later.
  • Changing the API presented to syntax extensions or other compiler plugins in non-trivial ways
  • Adding, removing, or changing a stable compiler flag (needs an FCP somewhere, like on an MCP or just on a PR)
  • Adding unstable API for tools (note that all compiler API is currently unstable)
  • Adding, removing, or changing an unstable compiler flag (if the compiler flag is widely used there should be at least some discussion on discuss, or an RFC in some cases)

If in doubt it is probably best to just announce the change you want to make to the compiler subteam on Zulip, and see if anyone feels it needs an RFC.

RFC policy - language design

Pretty much every change to the language needs an RFC. Note that new lints (or major changes to an existing lint) are considered changes to the language.

Language RFCs are managed by the language sub-team, and tagged T-lang. The language sub-team will do an initial triage of new PRs within a week of submission. The result of triage will either be that the PR is assigned to a member of the sub-team for shepherding, the PR is closed as postponed because the subteam believe it might be a good idea, but is not currently aligned with Rust’s priorities, or the PR is closed because the sub-team feel it should clearly not be done and further discussion is not necessary. In the latter two cases, the sub-team will give a detailed explanation. We’ll follow the standard procedure for shepherding, final comment period, etc.

Amendments

Sometimes in the implementation of an RFC, changes are required. In general these don’t require an RFC as long as they are very minor and in the spirit of the accepted RFC (essentially bug fixes). In this case implementers should submit an RFC PR which amends the accepted RFC with the new details. Although the RFC repository is not intended as a reference manual, it is preferred that RFCs do reflect what was actually implemented. Amendment RFCs will go through the same process as regular RFCs, but should be less controversial and thus should move more quickly.

When a change is more dramatic, it is better to create a new RFC. The RFC should be standalone and reference the original, rather than modifying the existing RFC. You should add a comment to the original RFC referencing the new RFC as part of the PR.

Obviously there is some scope for judgment here. As a guideline, if a change affects more than one part of the RFC (i.e., is a non-local change), affects the applicability of the RFC to its motivating use cases, or there are multiple possible new solutions, then the feature is probably not ‘minor’ and should get a new RFC.

RFC guidelines - libraries sub-team

Motivation

  • RFCs are heavyweight:

    • RFCs generally take at minimum 2 weeks from posting to land. In practice it can be more on the order of months for particularly controversial changes.
    • RFCs are a lot of effort to write; especially for non-native speakers or for members of the community whose strengths are more technical than literary.
    • RFCs may involve pre-RFCs and several rewrites to accommodate feedback.
    • RFCs require a dedicated shepherd to herd the community and author towards consensus.
    • RFCs require review from a majority of the subteam, as well as an official vote.
    • RFCs can’t be downgraded based on their complexity. Full process always applies. Easy RFCs may certainly land faster, though.
    • RFCs can be very abstract and hard to grok the consequences of (no implementation).
  • PRs are low overhead but potentially expensive nonetheless:

    • Easy PRs can get insta-merged by any rust-lang contributor.
    • Harder PRs can be easily escalated. You can ping subject-matter experts for second opinions. Ping the whole team!
    • Easier to grok the full consequences. Lots of tests and Crater to save the day.
    • PRs can be accepted optimistically with bors, buildbot, and the trains to guard us from major mistakes making it into stable. The size of the nightly community at this point in time can still mean major community breakage regardless of trains, however.
    • HOWEVER: Big PRs can be a lot of work to make only to have that work rejected for details that could have been hashed out first.
  • RFCs are only meaningful if a significant and diverse portion of the community actively participates in them. The official teams are not sufficiently diverse to establish meaningful community consensus by agreeing amongst themselves.

  • If there are tons of RFCs – especially trivial ones – people are less likely to engage with them. Official team members are super busy. Domain experts and industry professionals are super busy and have no responsibility to engage in RFCs. Since these are exactly the most important people to get involved in the RFC process, it is important that we be maximally friendly towards their needs.

Is an RFC required?

The overarching philosophy is: do whatever is easiest. If an RFC would be less work than an implementation, that’s a good sign that an RFC is necessary. That said, if you anticipate controversy, you might want to short-circuit straight to an RFC. For instance, new APIs almost certainly merit an RFC, especially as std has become more conservative in favour of the much more agile cargoverse.

  • Submit a PR if the change is a:
    • Bugfix
    • Docfix
    • Obvious API hole patch, such as adding an API from one type to a symmetric type, e.g. Vec<T> -> Box<[T]> clearly motivates adding String -> Box<str> (see the sketch after this list)
    • Minor tweak to an unstable API (renaming, generalizing)
    • Implementing an “obvious” trait like Clone/Debug/etc
  • Submit an RFC if the change is a:
    • New API
    • Semantic Change to a stable API
    • Generalization of a stable API (e.g. how we added Pattern or Borrow)
    • Deprecation of a stable API
    • Nontrivial trait impl (because all trait impls are insta-stable)
  • Do the easier thing if uncertain. (choosing a path is not final)
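
As a sketch of the “obvious API hole” bullet above, using today’s std names (shown only to illustrate the symmetry argument, not as a record of how these APIs actually landed):

fn demo(v: Vec<u32>, s: String) {
    // Vec<T> can already be converted into Box<[T]>...
    let _slice: Box<[u32]> = v.into_boxed_slice();
    // ...so by symmetry, String (a wrapper over Vec<u8>) clearly
    // motivates the analogous String -> Box<str> conversion.
    let _str: Box<str> = s.into_boxed_str();
}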

Non-RFC process

  • A (non-RFC) PR is likely to be closed if clearly not acceptable:

    • Disproportionate breaking change (small inference breakage may be acceptable)
    • Unsound
    • Doesn’t fit our general design philosophy around the problem
    • Better as a crate
    • Too marginal for std
    • Significant implementation problems
  • A PR may also be closed because an RFC is appropriate.

  • A (non-RFC) PR may be merged as unstable. In this case, the feature should have a fresh feature gate and an associated tracking issue for stabilisation. Note that trait impls and docs are insta-stable and thus have no tracking issue. This may imply requiring a higher level of scrutiny for such changes.
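
A sketch of what “merged as unstable” looks like inside std (the #[unstable] attribute is the standard library’s real stability mechanism, but the feature name and issue number here are invented for illustration):

// Gated behind a fresh feature gate with an associated tracking issue.
#[unstable(feature = "example_api", issue = "12345")]
pub fn example_api() { /* ... */ }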

However, an accepted RFC is not a rubber-stamp for merging an implementation PR, nor must an implementation PR perfectly match the RFC text. Implementation details may merit deviations, though obviously they should be justified. The RFC may be amended if deviations are substantial, but amendments are not generally necessary: RFCs should favour immutability, and the RFC + Issue + PR should together form a total explanation of the current implementation.

  • Once something has been merged as unstable, a shepherd should be assigned to promote and obtain feedback on the design.

  • Every time a release cycle ends, the libs team assesses the current unstable APIs and selects some number of them for potential stabilization during the next cycle. These are announced for FCP at the beginning of the cycle, and (possibly) stabilized just before the beta is cut.

  • After the final comment period, an API should ideally take one of two paths:

    • Stabilize if the change is desired, and consensus is reached
    • Deprecate if the change is undesired, and consensus is reached
    • Extend the FCP if the change cannot meet consensus
      • If consensus still can’t be reached, consider requiring a new RFC or just deprecating as “too controversial for std”.
  • If any problems are found with a newly stabilized API during its beta period, strongly favour reverting stability in order to prevent stabilizing a bad API. Due to the speed of the trains, this is not a serious delay (~2-3 months if it’s not a major problem).

Summary

This is an RFC to make all struct fields private by default. This includes both tuple structs and structural structs.

Motivation

Reasons for default private visibility

  • Visibility is often how soundness is achieved for many types in Rust. These types are normally wrapping unsafe behavior of an FFI type or some other Rust-specific behavior under the hood (such as the standard Vec type). Requiring these types to opt in to being sound is unfortunate.

  • Forcing tuple struct fields to have non-overridable public visibility greatly reduces the utility of such types. Tuple structs cannot be used to create abstraction barriers as they can always be easily destructed.

  • Private-by-default is more consistent with the rest of the Rust language. All other aspects of privacy are private-by-default except for enum variants. Enum variants, however, are a special case in that they are inserted into the parent namespace, and hence naturally inherit privacy.

  • Public fields of a struct must be considered part of the API of the type. This means that, by default, the exact definition of a struct is part of its API. If the priv keyword is required, structs must opt out of this exposure; by requiring the pub keyword instead, structs must opt in to exposing more surface area in their API.

Reasons for inherited visibility (today’s design)

  • Public definitions like pub struct Point { x: int, y: int } are concise and easy to read.
  • Private definitions certainly want private fields (to hide implementation details).

Detailed design

Currently, rustc has two policies for dealing with the privacy of struct fields:

  • Tuple structs have public fields by default (including “newtype structs”)
  • Fields of structural structs (struct Foo { ... }) inherit the same privacy as the enclosing struct.

This RFC is a proposal to unify the privacy of struct fields with the rest of the language by making them private by default. This means that both tuple variants and structural variants of structs would have private fields by default. For example, the program below is accepted today, but would be rejected with this RFC.

mod inner {
    pub struct Foo(u64);
    pub struct Bar { field: u64 }
}

fn main() {
    inner::Foo(10);           // accepted today; error under this RFC (private field)
    inner::Bar { field: 10 }; // accepted today; error under this RFC (private field)
}

Refinements to structural structs

Public fields are quite a useful feature of the language, so syntax is required to opt out of the private-by-default semantics. Structural structs already allow visibility qualifiers on fields, and the pub qualifier would make the field public instead of private.

Additionally, the priv visibility will no longer be allowed to modify struct fields. Similarly to how a priv fn is a compiler error, a priv field will become a compiler error.
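
For example (a hypothetical definition under the proposed rules):

pub struct Config {
    pub id: u64, // explicitly public: part of the type's API
    secret: u64, // private by default under this RFC
    // priv old: u64, // error: `priv` is no longer allowed on fields
}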

Refinements on tuple structs

As with their structural cousins, it’s useful to have tuple structs with public fields. This RFC will modify the tuple struct grammar to:

tuple_struct := 'struct' ident '(' fields ')' ';'
fields := field | field ',' fields
field := type | visibility type

For example, these definitions will be added to the language:

// a "newtype wrapper" struct with a private field
struct Foo(u64);

// a "newtype wrapper" struct with a public field
struct Bar(pub u64);

// a tuple struct with many fields, only the first and last of which are public
struct Baz(pub u64, u32, f32, pub int);

Public fields on tuple structs will maintain the semantics that they currently have today. The structs can be constructed, destructed, and participate in pattern matches.

Private fields on tuple structs will prevent the following behaviors:

  • Private fields cannot be bound in patterns (both in irrefutable and refutable contexts, i.e. let and match statements).
  • Private fields cannot be specified outside of the defining module when constructing a tuple struct.

These semantics are intended to closely mirror the behavior of private fields for structural structs.
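
A hypothetical example of these semantics:

mod inner {
    pub struct Pair(pub u64, u64); // second field is private

    pub fn make() -> Pair { Pair(1, 2) }
}

fn main() {
    let p = inner::make();
    let inner::Pair(first, _) = p; // error: cannot bind the private field
    let q = inner::Pair(3, 4);     // error: cannot supply the private field
}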

Statistics gathered

A brief survey was performed over the entire mozilla/rust repository to gather these statistics. While not representative of all projects, this repository should give a good indication of what most structs look like in the real world. The repository has both libraries that care about privacy (libstd) and libraries that don’t care much about privacy (librustc).

These numbers tally up all structs from all locations, and only take into account structural structs, not tuple structs.

|                       | Inherited privacy | Private-by-default |
| --------------------- | ----------------- | ------------------ |
| Private fields        | 1418              | 1852               |
| Public fields         | 2036              | 1602               |
| All-private structs   | 551 (52.23%)      | 671 (63.60%)       |
| All-public structs    | 468 (44.36%)      | 352 (33.36%)       |
| Mixed privacy structs | 36 (3.41%)        | 32 (3.03%)         |

The numbers clearly show that the predominant pattern is to have all-private structs, and that there are many public fields today which can be private (and perhaps should be!). Additionally, there are on the order of 1418 instances of the word priv today, when in theory there should be around 1852. With this RFC, there would need to be 1602 instances of the word pub. A very large portion of structs requiring pub fields are FFI structs defined in the libc module.

Impact on enums

This RFC does not impact enum variants in any way. All enum variants will continue to inherit privacy from the outer enum type. This includes both the fields of tuple variants as well as fields of struct variants in enums.

Alternatives

The main alternative to this design is what is currently implemented today, where fields inherit the privacy of the outer structure. The pros and cons of this strategy are discussed above.

Unresolved questions

As the above statistics show, almost all structures are either all public or all private. This RFC provides an easy method to make struct fields all private, but it explicitly does not provide a method to make struct fields all public. The statistics show that pub will be written less often than priv is today, and it’s always possible to add a method to specify a struct as all-public in the future in a backwards-compatible fashion.

That being said, it’s an open question whether syntax for an “all public struct” is necessary at this time.

Summary

The “RFC” (request for comments) process is intended to provide a consistent and controlled path for new features to enter the language and standard libraries, so that all stakeholders can be confident about the direction the language is evolving in.

Motivation

The freewheeling way that we add new features to Rust has been good for early development, but for Rust to become a mature platform we need to develop some more self-discipline when it comes to changing the system. This is a proposal for a more principled RFC process to make it a more integral part of the overall development process, and one that is followed consistently to introduce features to Rust.

Detailed design

Many changes, including bug fixes and documentation improvements, can be implemented and reviewed via the normal GitHub pull request workflow.

Some changes, though, are “substantial”, and we ask that these be put through a bit of a design process and produce a consensus among the Rust community and the core team.

When you need to follow this process

You need to follow this process if you intend to make “substantial” changes to the Rust distribution. What constitutes a “substantial” change is evolving based on community norms, but may include the following.

  • Any semantic or syntactic change to the language that is not a bugfix.
  • Removing language features, including those that are feature-gated.
  • Changes to the interface between the compiler and libraries, including lang items and intrinsics.
  • Additions to std

Some changes do not require an RFC:

  • Rephrasing, reorganizing, refactoring, or otherwise “changing shape does not change meaning”.
  • Additions that strictly improve objective, numerical quality criteria (warning removal, speedup, better platform coverage, more parallelism, trap more errors, etc.)
  • Additions only likely to be noticed by other developers-of-rust, invisible to users-of-rust.

If you submit a pull request to implement a new feature without going through the RFC process, it may be closed with a polite request to submit an RFC first.

What the process is

In short, to get a major feature added to Rust, one must first get the RFC merged into the RFC repo as a markdown file. At that point the RFC is ‘active’ and may be implemented with the goal of eventual inclusion into Rust.

  • Fork the RFC repo https://github.com/rust-lang/rfcs
  • Copy 0000-template.md to text/0000-my-feature.md (where ‘my-feature’ is descriptive; don’t assign an RFC number yet).
  • Fill in the RFC
  • Submit a pull request. The pull request is the time to get review of the design from the larger community.
  • Build consensus and integrate feedback. RFCs that have broad support are much more likely to make progress than those that don’t receive any comments.

Eventually, somebody on the core team will either accept the RFC by merging the pull request, at which point the RFC is ‘active’, or reject it by closing the pull request.

Whoever merges the RFC should do the following:

  • Assign an id, using the PR number of the RFC pull request. (If the RFC has multiple pull requests associated with it, choose one PR number, preferably the minimal one.)
  • Add the file in the text/ directory.
  • Create a corresponding issue on the Rust repo.
  • Fill in the remaining metadata in the RFC header, including links for the original pull request(s) and the newly created Rust issue.
  • Add an entry in the Active RFC List of the root README.md.
  • Commit everything.

Once an RFC becomes active then authors may implement it and submit the feature as a pull request to the Rust repo. Being ‘active’ is not a rubber stamp, and in particular still does not mean the feature will ultimately be merged; it does mean that in principle all the major stakeholders have agreed to the feature and are amenable to merging it.

Modifications to active RFCs can be done in follow-up PRs. An RFC that makes it through the entire process to implementation is considered ‘complete’ and is removed from the Active RFC List; an RFC that fails after becoming active is ‘inactive’ and moves to the ‘inactive’ folder.

Alternatives

Retain the current informal RFC process. The newly proposed RFC process is designed to improve over the informal process in the following ways:

  • Discourage unactionable or vague RFCs
  • Ensure that all serious RFCs are considered equally
  • Give confidence to those with a stake in Rust’s development that they understand why new features are being merged

As an alternative, we could adopt an even stricter RFC process than the one proposed here. If desired, we should likely look to Python’s PEP process for inspiration.

Unresolved questions

  1. Does this RFC strike a favorable balance between formality and agility?
  2. Does this RFC successfully address the aforementioned issues with the current informal RFC process?
  3. Should we retain rejected RFCs in the archive?

Summary

Rust currently has an attribute usage lint but it does not work particularly well. This RFC proposes a new implementation strategy that should make it significantly more useful.

Motivation

The current implementation has two major issues:

  • There are very limited warnings for valid attributes that end up in the wrong place. Something like this will be silently ignored:
#[deriving(Clone)]; // Shouldn't have put a ; here
struct Foo;

#[ignore(attribute-usage)] // Should have used #[allow(attribute-usage)] instead!
mod bar {
    //...
}
  • ItemDecorators can now be defined outside of the compiler, and there’s no way to tag them and associated attributes as valid. Something like this requires an #[allow(attribute-usage)]:
#[feature(phase)];
#[phase(syntax, link)]
extern crate some_orm;

#[ormify]
pub struct Foo {
    #[column(foo_)]
    #[primary_key]
    foo: int
}

Detailed design

The current implementation is implemented as a simple fold over the AST, comparing attributes against a whitelist. Crate-level attributes use a separate whitelist, but no other distinctions are made.

This RFC would change the implementation to actually track which attributes are used during the compilation process. syntax::ast::Attribute_ would be modified to add an ID field:

pub struct AttrId(uint);

pub struct Attribute_ {
    id: AttrId,
    style: AttrStyle,
    value: @MetaItem,
    is_sugared_doc: bool,
}

syntax::ast::parse::ParseSess will generate new AttrIds on demand. I believe that attributes will only be created during parsing and expansion, and the ParseSess is accessible in both.

The AttrIds will be used to create a side table of used attributes. This will most likely be a thread local to make it easily accessible during all stages of compilation by calling a function in syntax::attr:

fn mark_used(attr: &Attribute) { }
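
A minimal sketch of how such a side table might look (written in present-day Rust with a simplified Attribute type; the names are illustrative, not rustc’s actual internals):

use std::cell::RefCell;
use std::collections::HashSet;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
pub struct AttrId(pub usize);

pub struct Attribute {
    pub id: AttrId,
    // style, value, is_sugared_doc elided
}

thread_local! {
    // Thread-local set of attributes marked as used during compilation.
    static USED_ATTRS: RefCell<HashSet<AttrId>> = RefCell::new(HashSet::new());
}

// Called by any compilation stage that consumes an attribute.
pub fn mark_used(attr: &Attribute) {
    USED_ATTRS.with(|set| {
        set.borrow_mut().insert(attr.id);
    });
}

// Queried by the attribute-usage lint at the end of compilation.
pub fn is_used(attr: &Attribute) -> bool {
    USED_ATTRS.with(|set| set.borrow().contains(&attr.id))
}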

The attribute-usage lint would run at the end of compilation and warn on all attributes whose ID does not appear in the side table.

One interesting edge case is attributes like doc that are used, but not in the normal compilation process. There could either be a separate fold pass to mark all doc attributes as used or doc could simply be whitelisted in the attribute-usage lint.

Attributes in code that has been eliminated with #[cfg()] will not be linted, but I feel that this is consistent with the way #[cfg()] works in general (e.g. the code won’t be type-checked either).

Alternatives

An alternative would be to rewrite rustc::middle::lint to robustly check that attributes are used where they’re supposed to be. This will be fairly complex and be prone to failure if/when more nodes are added to the AST. This also doesn’t solve motivation #2, which would require externally loaded lint support.

Unresolved questions

  • This implementation doesn’t allow for a distinction between “unused” and “unknown” attributes. The #[phase(syntax)] crate loading infrastructure could be extended to pull a list of attributes from crates to use in the lint pass, but I’m not sure if the extra complexity is worth it.
  • The side table could be threaded through all of the compilation stages that need to use it instead of being a thread local. This would probably require significantly more work than the thread local approach, however. The thread local approach should not negatively impact any future parallelization work as each thread can keep its own side table, which can be merged into one for the lint pass.

Note: this RFC was never implemented and has been retired. The design may still be useful in the future, but before implementing we would prefer to revisit it so as to be sure it is up to date.

Summary

The way our intrinsics work forces them to be wrapped in order to behave like normal functions. As a result, rustc is forced to inline a great number of tiny intrinsic wrappers, which is bad for both compile-time performance and runtime performance without optimizations. This proposal changes the way intrinsics are surfaced in the language so that they behave the same as normal Rust functions by removing the “rust-intrinsic” foreign ABI and reusing the “Rust” ABI.

Motivation

A number of commonly-used intrinsics, including transmute, forget, init, uninit, and move_val_init, are accessed through wrappers whose only purpose is to present the intrinsics as normal functions. As a result, rustc is forced to inline a great number of tiny intrinsic wrappers, which is bad for both compile-time performance and runtime performance without optimizations.

Intrinsics have a differently-named ABI from Rust functions (“rust-intrinsic” vs. “Rust”) though the actual ABI implementation is identical. As a result one can’t take the value of an intrinsic as a function:

// error: the type of transmute is `extern "rust-intrinsic" fn ...`
let transmute: fn(int) -> uint = intrinsics::transmute;

This incongruity means that we can’t just expose the intrinsics directly as part of the public API.

Detailed design

extern "Rust" fn is already equivalent to fn, so if intrinsics have the “Rust” ABI then the problem is solved.

Under this scheme intrinsics will be declared as extern "Rust" functions and identified as intrinsics with the #[lang = "..."] attribute:

extern "Rust" {
    #[lang = "transmute"]
    fn transmute<T, U>(T) -> U;
}

The compiler will type check and translate intrinsics the same as today. Additionally, when trans sees a “Rust” extern tagged as an intrinsic it will not emit a function declaration to LLVM bitcode.

Because intrinsics will be lang items, they can no longer be redeclared an arbitrary number of times. This will require a small amount of existing library code to be refactored, and all intrinsics to be exposed through public abstractions.

Currently, “Rust” foreign functions may not be generic; this change will require a special case that allows intrinsics to be generic.

Alternatives

  1. Instead of making intrinsics lang items we could create a slightly different mechanism, like an #[intrinsic] attribute, that would continue letting intrinsics be redeclared.

  2. While using lang items to identify intrinsics, intrinsic lang items could be allowed to be redeclared.

  3. We could also make “rust-intrinsic” coerce or otherwise be the same as “Rust” externs and normal Rust functions.

Unresolved questions

None.

Summary

Allow attributes on more places inside functions, such as statements, blocks and expressions.

Motivation

One sometimes wishes to annotate things inside functions with, for example, lint #[allow]s, conditional compilation #[cfg]s, and even extra semantic (or otherwise) annotations for external tools.

For the lints, one can currently only activate lints at the level of the function, which is possibly larger than one needs, and so may allow other “bad” things to sneak through accidentally. E.g.

#[allow(uppercase_variable)]
let L = List::new(); // lowercase l looks like 1 or capital I

For the conditional compilation, the work-around is duplicating the whole containing function with a #[cfg], or breaking the conditional code into its own function. This does mean that any variables need to be explicitly passed as arguments.

The sort of things one could do with other arbitrary annotations are

#[allowed_unsafe_actions(ffi)]
#[audited="2014-04-22"]
unsafe { ... }

and then have an external tool that checks that that unsafe block’s only unsafe actions are FFI, or a tool that lists blocks that have been changed since the last audit or haven’t been audited ever.

The minimum useful functionality would be supporting attributes on blocks and let statements, since these are flexible enough to allow for relatively precise attribute handling.

Detailed design

Normal attribute syntax on let statements, blocks and expressions.

fn foo() {
    #[attr1]
    let x = 1;

    #[attr2]
    {
        // code
    }

    #[attr3]
    unsafe {
        // code
    }
    #[attr4] foo();

    let x = #[attr5] 1;

    qux(3 + #[attr6] 2);

    foo(x, #[attr7] y, z);
}

Attributes bind tighter than any operator; that is, #[attr] x op y is always parsed as (#[attr] x) op y.

cfg

It is definitely an error to place a #[cfg] attribute on a non-statement expression; that is, attr1–attr4 can possibly be #[cfg(foo)], but attr5–attr7 cannot, since it makes little sense to strip code down to let x = ;.

However, like #ifdef in C/C++, widespread use of #[cfg] may be an antipattern that makes code harder to read. This RFC is just adding the ability for attributes to be placed in specific places, it is not mandating that #[cfg] actually be stripped in those places (although it should be an error if it is ignored).

Inner attributes

Inner attributes can be placed at the top of blocks (and other structure incorporating a block) and apply to that block.

{
    #![attr11]

    foo()
}

match bar {
    #![attr12]

    _ => {}
}

// are the same as

#[attr11]
{
    foo()
}

#[attr12]
match bar {
    _ => {}
}

if

Attributes would be disallowed on if for now, because the interaction with if/else chains is funky, and can be simulated in other ways.

#[cfg(not(foo))]
if cond1 {
} else #[cfg(not(bar))] if cond2 {
} else #[cfg(not(baz))] {
}

There are two possible interpretations of such a piece of code, depending on whether one regards the attributes as attaching to the whole if ... else chain (“exterior”) or just to the branch on which they are placed (“interior”).

  • --cfg foo: could be either removing the whole chain (exterior) or equivalent to if cond2 {} else {} (interior).
  • --cfg bar: could be either if cond1 {} (e) or if cond1 {} else {} (i)
  • --cfg baz: equivalent to if cond1 {} else if cond2 {} (no subtlety).
  • --cfg foo --cfg bar: could be removing the whole chain (e) or the two if branches (leaving only the else branch) (i).

(This applies to any attribute that has some sense of scoping, not just #[cfg], e.g. #[allow] and #[warn] for lints.)

As such, to avoid confusion, attributes would not be supported on if. Alternatives include using blocks:

#[attr] if cond { ... } else ...
// becomes, for an exterior attribute,
#[attr] {
    if cond { ... } else ...
}
// and, for an interior attribute,
if cond {
    #[attr] { ... }
} else ...

And, if the attributes are meant to be associated with the actual branching (e.g. a hypothetical #[cold] attribute that indicates a branch is unlikely), one can annotate match arms:

match cond {
    #[attr] true => { ... }
    #[attr] false => { ... }
}

Drawbacks

This starts mixing attributes with nearly arbitrary code, possibly dramatically restricting syntactic changes related to them. For example, there was some consideration of using @ for attributes; this change may make that impossible (especially if @ gets reused for something else, e.g. Python is using it for matrix multiplication). It may also make it impossible to use # for other things.

As stated above, allowing #[cfg]s everywhere can make code harder to reason about, but (also stated), this RFC is not for making such #[cfg]s be obeyed, it just opens the language syntax to possibly allow it.

Alternatives

These instances could possibly be approximated with macros and helper functions, but only to a low degree (e.g. how would one annotate a general unsafe block?).

Only allowing attributes on “statement expressions”, that is, expressions at the top level of a block, is slightly limiting; but we can expand to support other contexts backwards compatibly in the future.

The if/else issue may be able to be resolved by introducing explicit “interior” and “exterior” attributes on if: by having #[attr] if cond { ... be an exterior attribute (applying to the whole if/else chain) and if cond #[attr] { ... be an interior attribute (applying to only the current if branch). There is no difference between interior and exterior for an else { branch, and so else #[attr] { is sufficient.

Unresolved questions

Are the complications of allowing attributes on arbitrary expressions worth the benefits?

Note: The Share trait described in this RFC was later renamed to Sync.

Summary

The high-level idea is to add language features that simultaneously achieve three goals:

  1. move Send and Share out of the language entirely and into the standard library, providing mechanisms for end users to easily implement and use similar “marker” traits of their own devising;
  2. make “normal” Rust types sendable and sharable by default, without the need for explicit opt-in; and,
  3. continue to require “unsafe” Rust types (those that manipulate unsafe pointers or implement special abstractions) to “opt-in” to sendability and sharability with an unsafe declaration.

These goals are achieved by two changes:

  1. Unsafe traits: An unsafe trait is a trait that is unsafe to implement, because it represents some kind of trusted assertion. Note that unsafe traits are perfectly safe to use. Send and Share are examples of unsafe traits: implementing these traits is effectively an assertion that your type is safe for threading.

  2. Default and negative impls: A default impl is one that applies to all types, except for those types that explicitly opt out. For example, there would be a default impl for Send, indicating that all types are Send “by default”.

    To counteract a default impl, one uses a negative impl that explicitly opts out for a given type T and any type that contains T. For example, this RFC proposes that unsafe pointers *T will opt out of Send and Share. This implies that unsafe pointers cannot be sent or shared between threads by default. It also implies that any structs which contain an unsafe pointer cannot be sent. In all examples encountered thus far, the set of negative impls is fixed and can easily be declared along with the trait itself.

    Safe wrappers like Arc, Atomic, or Mutex can opt to implement Send and Share explicitly. This will then make them be considered sendable (or sharable) even though they contain unsafe pointers etc.

Based on these two mechanisms, we can remove the notion of Send and Share as builtin concepts. Instead, these would become unsafe traits with default impls (defined purely in the library). The library would explicitly opt out of Send/Share for certain types, like unsafe pointers (*T) or interior mutability (Unsafe<T>). Any type, therefore, which contains an unsafe pointer would be confined (by default) to a single thread. Safe wrappers around those types, like Arc, Atomic, or Mutex, can then opt back in by explicitly implementing Send (these impls would have to be designed as unsafe).

Motivation

Since proposing opt-in builtin traits, I have become increasingly concerned about the notion of having Send and Share be strictly opt-in. There are two main reasons for my concern:

  1. Rust is very close to being a language where computations can be parallelized by default. Making Send, and especially Share, opt-in makes that harder to achieve.
  2. The model followed by Send/Share cannot easily be extended to other traits in the future nor can it be extended by end-users with their own similar traits. It is worrisome that I have come across several use cases already which might require such extension (described below).

To elaborate on those two points: With respect to parallelization: for the most part, Rust types are threadsafe “by default”. To make something non-threadsafe, you must employ unsynchronized interior mutability (e.g., Cell, RefCell) or unsynchronized shared ownership (Rc). In both cases, there are also synchronized variants available (Mutex, Arc, etc). This implies that we can make APIs to enable intra-task parallelism and they will work ubiquitously, so long as people avoid Cell and Rc when not needed. Explicit opt-in threatens that future, however, because fewer types will implement Share, even if they are in fact threadsafe.
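
A small illustration of that default, in today’s terms (Share was later renamed Sync):

use std::rc::Rc;
use std::sync::Arc;

fn requires_send<T: Send>(_: T) {}

fn main() {
    requires_send(Arc::new(1));    // ok: synchronized shared ownership
    // requires_send(Rc::new(1)); // error: Rc is unsynchronized, not Send
}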

With respect to extensibility, it is particularly worrisome that if a library forgets to implement Send or Share, downstream clients are stuck. They cannot, for example, use a newtype wrapper, because it would be illegal to implement Send on the newtype. This implies that all libraries must be vigilant about implementing Send and Share (even more so than with other pervasive traits like Eq or Ord). The current plan is to address this via lints and perhaps some convenient deriving syntax, which may be adequate for Send and Share. But if we wish to add new “classification” traits in the future, these new traits won’t have been around from the start, and hence won’t be implemented by all existing code.

Another concern of mine is that end users cannot define classification traits of their own. For example, one might like to define a trait for “tainted” data, and then test to ensure that tainted data doesn’t pass through some generic routine. There is no particular way to do this today.

More examples of classification traits that have come up recently in various discussions:

  • Snapshot (nee Freeze), which defines logical immutability rather than physical immutability. Rc<int>, for example, would be considered Snapshot. Snapshot could be useful because Snapshot+Clone indicates a type whose value can be safely “preserved” by cloning it.
  • NoManaged, a type which does not contain managed data. This might be useful for integrating garbage collection with custom allocators which do not wish to serve as potential roots.
  • NoDrop, a type which does not contain an explicit destructor. This can be used to avoid nasty GC quandries.

All three of these (Snapshot, NoManaged, NoDrop) can be easily defined using traits with default impls.
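
For instance, NoManaged could be written with exactly the mechanisms proposed below (using this RFC’s proposed syntax; Gc<T> stands in for a hypothetical managed pointer type):

// All types are NoManaged by default...
unsafe trait NoManaged { }
unsafe impl NoManaged for .. { }

// ...except managed pointers and anything containing them.
impl<T> !NoManaged for Gc<T> { }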

A final, somewhat weaker, motivator is aesthetics. Ownership has allowed us to move threading almost entirely into libraries. The one exception is that the Send and Share types remain built-in. Opt-in traits makes them less built-in, but still requires custom logic in the “impl matching” code as well as special safety checks when Send or Share are implemented.

After the changes I propose, the only traits which would be specifically understood by the compiler are Copy and Sized. I consider this acceptable, since those two traits are intimately tied to the core Rust type system, unlike Send and Share.

Detailed design

Unsafe traits

Certain traits like Send and Share are critical to memory safety. Nonetheless, it is not feasible to check the thread-safety of all types that implement Send and Share. Therefore, we introduce a notion of an unsafe trait – this is a trait that is unsafe to implement, because implementing it carries semantic guarantees that, if compromised, threaten memory safety in a deep way.

An unsafe trait is declared like so:

unsafe trait Foo { ... }

To implement an unsafe trait, one must mark the impl as unsafe:

unsafe impl Foo for Bar { ... }

Designating an impl as unsafe does not automatically mean that the body of the methods is an unsafe block. Each method in the trait must also be declared as unsafe if it is to be considered unsafe.
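
For example (a hypothetical trait):

unsafe trait Foo {
    fn safe_method(&self);          // safe to call, like any trait method
    unsafe fn unsafe_method(&self); // unsafe to call; marked individually
}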

Unsafe traits are only unsafe to implement. It is always safe to reference an unsafe trait. For example, the following function is safe:

fn foo<T:Send>(x: T) { ... }

It is also safe to opt out of an unsafe trait (as discussed in the next section).

Default and negative impls

We add a notion of a default impl, written:

impl Trait for .. { }

Default impls are subject to various limitations:

  1. The default impl must appear in the same module as Trait (or a submodule).
  2. Trait must not define any methods.

We further add the notion of a negative impl, written:

impl !Trait for Foo { }

Negative impls are only permitted if Trait has a default impl. Negative impls are subject to the usual orphan rules, but they are permitted to overlap. This makes sense because negative impls are not providing an implementation and hence we are not forced to select between them. For similar reasons, negative impls never need to be marked unsafe, even if they reference an unsafe trait.

Intuitively, to check whether a trait Foo that contains a default impl is implemented for some type T, we first check for explicit (positive) impls that apply to T. If any are found, then T implements Foo. Otherwise, we check for negative impls. If any are found, then T does not implement Foo. If neither positive nor negative impls were found, we proceed to check the component types of T (i.e., the types of a struct’s fields) to determine whether all of them implement Foo. If so, then Foo is considered implemented by T.

One non-obvious part of the procedure is that, as we recursively examine the component types of T, we add to our list of assumptions that T implements Foo. This allows recursive types like

struct List<T> { data: T, next: Option<List<T>> }

to be checked successfully. Otherwise, we would recurse infinitely. (This procedure is directly analogous to what the existing TypeContents code does.)
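
A toy model of this procedure (all names invented for illustration; real trait matching is considerably more involved):

use std::collections::HashSet;

#[derive(Clone, PartialEq, Eq, Hash)]
enum Ty {
    RawPtr,                  // *T: covered by a negative impl
    Mutex(Box<Ty>),          // covered by an explicit positive impl
    Struct(String, Vec<Ty>), // no impl either way: check componentwise
}

fn implements_send(ty: &Ty, assumed: &mut HashSet<Ty>) -> bool {
    // The assumption set breaks cycles for recursive types like List<T>.
    if assumed.contains(ty) {
        return true;
    }
    match ty {
        Ty::RawPtr => false,  // negative impl: !Send for *T
        Ty::Mutex(_) => true, // positive impl (the real one requires T: Send)
        Ty::Struct(_, fields) => {
            assumed.insert(ty.clone());
            fields.iter().all(|f| implements_send(f, assumed))
        }
    }
}

fn main() {
    let list = Ty::Struct("List".to_string(),
                          vec![Ty::Mutex(Box::new(Ty::RawPtr))]);
    assert!(implements_send(&list, &mut HashSet::new()));
}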

Note that there exist types that expand to an infinite tree of types. Such types cannot be successfully checked with a recursive impl; they will simply overflow the builtin depth checking. However, such types also break code generation under monomorphization (we cannot create a finite set of LLVM types that correspond to them) and are in general not supported. Here is an example of such a type:

struct Foo<A> {
    data: Option<Foo<Vec<A>>>
}

The difference between Foo and List above is that Foo<A> references Foo<Vec<A>>, which will then in turn reference Foo<Vec<Vec<A>>> and so on.

Modeling Send and Share using default traits

The Send and Share traits will be modeled entirely in the library as follows. First, we declare the two traits as follows:

unsafe trait Send { }
unsafe impl Send for .. { }

unsafe trait Share { }
unsafe impl Share for .. { }

Both traits are declared as unsafe because declaring that a type is Send and Share has ramifications for memory safety (and data-race freedom) that the compiler cannot, itself, check.

Next, we will add opt out impls of Send and Share for the various unsafe types:

impl<T> !Send for *T { }
impl<T> !Share for *T { }

impl<T> !Send for *mut T { }
impl<T> !Share for *mut T { }

impl<T> !Share for Unsafe<T> { }

Note that it is not necessary to write unsafe to opt out of an unsafe trait, as that is the default state.

Finally, we will add opt in impls of Send and Share for the various safe wrapper types as needed. Here I give one example, which is Mutex. Mutex is interesting because it has the property that it converts a type T from being Sendable to something Sharable:

unsafe impl<T:Send> Send for Mutex<T> { }
unsafe impl<T:Send> Share for Mutex<T> { }

The Copy and Sized traits

The final two builtin traits are Copy and Sized. This RFC does not propose any changes to those two traits but rather relies on the specification from the original opt-in RFC.

Controlling copy vs move with the Copy trait

The Copy trait is “opt-in” for user-declared structs and enums. A struct or enum type is considered to implement the Copy trait only if it explicitly implements the Copy trait. This means that structs and enums would move by default unless their type is explicitly declared to be Copy. So, for example, the following code would be in error:

struct Point { x: int, y: int }
...
let p = Point { x: 1, y: 2 };
let q = p;  // moves p
print(p.x); // ERROR

To allow that example, one would have to impl Copy for Point:

struct Point { x: int, y: int }
impl Copy for Point { }
...
let p = Point { x: 1, y: 2 };
let q = p;  // copies p, because Point is Copy
print(p.x); // OK

Effectively, there is a three step ladder for types:

  1. If you do nothing, your type is linear, meaning that it moves from place to place and can never be copied in any way. (We need a better name for that.)
  2. If you implement Clone, your type is cloneable, meaning that it moves from place to place, but it can be explicitly cloned. This is suitable for cases where copying is expensive.
  3. If you implement Copy, your type is copyable, meaning that it is just copied by default without the need for an explicit clone. This is suitable for small bits of data like ints or points.

What is nice about this change is that when a type is defined, the user makes an explicit choice between these three options.

Determining whether a type is Sized

Per the DST specification, the array types [T] and object types like Trait are unsized, as are any structs that embed one of those types. The Sized trait can never be explicitly implemented and membership in the trait is always automatically determined.

Matching and coherence for the builtin types Copy and Sized

In general, determining whether a type implements a builtin trait can follow the existing trait matching algorithm, but it will have to be somewhat specialized. The problem is that we are somewhat limited in the kinds of impls that we can write, so some of the implementations we would want must be “hard-coded”.

Specifically we are limited around tuples, fixed-length array types, proc types, closure types, and trait types:

  • Fixed-length arrays: A fixed-length array [T, ..n] is Copy if T is Copy. It is always Sized as T is required to be Sized.
  • Tuples: A tuple (T_0, ..., T_n) is Copy/Sized if, for all i, T_i is Copy/Sized.
  • Trait objects (including procs and closures): A trait object type Trait:K (assuming DST here ;) is never Copy nor Sized.

We cannot currently express the above conditions using impls. We may at some point in the future grow the ability to express some of them. For now, though, these “impls” will be hardcoded into the algorithm as if they were written in libstd.
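
For illustration, here is roughly what these hardcoded “impls” would say if the language could express them; the variadic tuple syntax below is hypothetical:

// Hypothetical, not valid syntax: a fixed-length array is Copy if its
// element type is Copy (and always Sized, since T: Sized is required).
impl<T: Copy> Copy for [T, ..n] { }

// Hypothetical variadic impl: a tuple is Copy if every component is.
impl<T_0: Copy, ..., T_n: Copy> Copy for (T_0, ..., T_n) { }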

Per the usual coherence rules, since we will have the above impls in libstd, and we will have impls for types like tuples and fixed-length arrays baked in, the only impls that end users are permitted to write are impls for struct and enum types that they define themselves. Although this rule is in the general spirit of the coherence checks, it will have to be written specially.

Design discussion

Why unsafe traits

Without unsafe traits, it would be possible to create data races without using the unsafe keyword:

struct MyStruct { foo: Cell<int> }
impl Share for MyStruct { }
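
To see the danger, consider what safe code could do if such an impl were allowed; a minimal sketch (the function is hypothetical):

fn bump(s: &MyStruct) {
    // An unsynchronized read-modify-write: if two threads call this on
    // the same MyStruct concurrently, the updates race.
    s.foo.set(s.foo.get() + 1);
}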

Balancing abstraction, safety, and convenience.

In general, the existence of default traits is anti-abstraction, in the sense that it exposes implementation details a library might prefer to hide. Specifically, adding new private fields can cause your types to become non-sendable or non-sharable, which may break downstream clients without your knowing. This is a known challenge with parallelism: knowing whether it is safe to parallelize relies on implementation details we have traditionally tried to keep secret from clients (often it is said that parallelism is “anti-modular” or “anti-compositional” for this reason).

I think this risk must be weighed against the limitations of requiring total opt in. Requiring total opt in not only means that some types will accidentally fail to implement send or share when they could, but it also means that libraries which wish to employ marker traits cannot be composed with other libraries that are not aware of those marker traits. In effect, opt-in is anti-modular in its own way.

To be more specific, imagine that library A wishes to define an Untainted trait, and it specifically opts out of Untainted for some base set of types. It then wishes to have routines that only operate on Untainted data. Now imagine that there is some other library B that defines a nifty replacement for Vector, NiftyVector. Finally, some library C wishes to use a NiftyVector<uint>, which should not be considered tainted, because it doesn’t reference any tainted strings. However, NiftyVector<uint> does not implement Untainted (nor can it, without either library A or library B knowing about one another). Similar problems arise for any trait, of course, due to our coherence rules, but often they can be overcome with new types. Not so with Send and Share.

Other use cases

Part of the design involves making space for other use cases. I’d like to briefly sketch how some of those use cases can be implemented. This is not included in the Detailed design section of the RFC because these traits generally concern other features and would be added under RFCs of their own.

Isolating snapshot types. It is useful to be able to identify types which, when cloned, result in a logical snapshot. That is, a value which can never be mutated. Note that there may in fact be mutation under the covers, but this mutation is not visible to the user. An example of such a type is Rc<T> – although the ref count on the Rc may change, the user has no direct access and so Rc<T> is still logically snapshottable. However, not all Rc instances are snapshottable – in particular, something like Rc<Cell<int>> is not.

trait Snapshot { }
impl Snapshot for .. { }

// In general, anything that can reach interior mutability is not
// snapshotable.
impl<T> !Snapshot for Unsafe<T> { }

// But it's ok for Rc<T>.
impl<T:Snapshot> Snapshot for Rc<T> { }

Note that these definitions could all occur in a library. That is, the Rc type itself doesn’t need to know about the Snapshot trait.

Preventing access to managed data. As part of the GC design, we expect it will be useful to write specialized allocators or smart pointers that explicitly do not support tracing, so as to avoid any kind of GC overhead. The general idea is that there should be a bound, let’s call it NoManaged, that indicates that a type cannot reach managed data and hence does not need to be part of the GC’s root set. This trait could be implemented as follows:

unsafe trait NoManaged { }
unsafe impl NoManaged for .. { }
impl<T> !NoManaged for Gc<T> { }

Preventing access to destructors. It is generally recognized that allowing destructors to escape into managed data – frequently referred to as finalizers – is a bad idea. Therefore, we would generally like to ensure that anything placed into a managed box does not implement the Drop trait. Instead, we would prefer to regulate the use of drop through a guardian-like API, which basically means that destructors are not asynchronously executed by the GC, as they would be in Java, but rather enqueued for the mutator thread to run synchronously at its leisure. In order to handle this, though, we presumably need some sort of guardian wrapper types that can take a value which has a destructor and allow it to be embedded within managed data. We can summarize this in a trait GcSafe as follows:

unsafe trait GcSafe { }
unsafe impl GcSafe for .. { }

// By default, anything which has drop trait is not GcSafe.
impl<T:Drop> !GcSafe for T { }

// But guardians are, even if `T` has drop.
impl<T> GcSafe for Guardian<T> { }

Why are Copy and Sized different?

The Copy and Sized traits remain builtin to the compiler. This makes sense because they are intimately tied to analyses the compiler performs. For example, the running of destructors and the tracking of moves require knowing which types are Copy. Similarly, the allocation of stack frames needs to know whether types are fully Sized. In contrast, sendability and sharability have been fully exported to libraries at this point.

In addition, opting in to Copy makes sense for several reasons:

  • Experience has shown that “data-like structs”, for which Copy is most appropriate, are a very small percentage of the total.
  • Changing a public API from being copyable to being only movable has an outsized impact on users of the API. It is common, however, that as APIs evolve they will come to require owned data (like a Vec), even if they do not initially, and hence will change from being copyable to only movable. Opting in to Copy is a way of saying that you never foresee this coming to pass.
  • Often it is useful to create linear “tokens” that do not themselves have data but represent permissions. This can be done today using markers but it is awkward. It becomes much more natural under this proposal.

Drawbacks

API stability. The main drawback of this approach over the existing opt-in approach seems to be that a type may be “accidentally” sendable or sharable. I discuss this above under the heading of “balancing abstraction, safety, and convenience”. One point I would like to add here, as it specifically pertains to API stability, is that a library may, if it chooses, opt out of Send and Share pre-emptively, in order to “reserve the right” to add non-sendable things in the future.
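
Such a pre-emptive opt-out might look as follows, where MyType stands in for the library’s own type:

pub struct MyType { /* private fields */ }

// Reserve the right to add non-sendable/non-sharable fields later.
impl !Send for MyType { }
impl !Share for MyType { }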

Alternatives

  • The existing opt-in design is of course an alternative.

  • We could also simply add the notion of unsafe traits and not default impls and then allow types to unsafely implement Send or Share, bypassing the normal safety guidelines. This gives an escape valve for a downstream client to assert that something is sendable which was not declared as sendable. However, such a solution is deeply unsatisfactory, because it rests on the downstream client making an assertion about the implementation of the library it uses. If that library should be updated, the client’s assumptions could be invalidated, but no compilation errors will result (the impl was already declared as unsafe, after all).

Phasing

Many of the mechanisms described in this RFC are not needed immediately. Therefore, we would like to implement a minimal “forwards compatible” set of changes now and then leave the remaining work for after the 1.0 release. The builtin rules that the compiler currently implements for send and share are quite close to what is proposed in this RFC. The major change is that unsafe pointers and the UnsafeCell type, which are currently considered sendable, will no longer be.

Therefore, to be forwards compatible in the short term, we can use the same hybrid of builtin and explicit impls for Send and Share that we use for Copy, with the rule that unsafe pointers and UnsafeCell are not considered sendable. We must also implement the unsafe trait and unsafe impl concept.

What this means in practice is that using *const T, *mut T, and UnsafeCell will make a type T non-sendable and non-sharable, and T must then explicitly implement Send or Share.
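
For example, under this phasing plan, a type containing an unsafe pointer would have to assert its own thread-safety; a sketch (the type is hypothetical):

struct Wrapper { ptr: *mut uint }

// The author unsafely asserts that this use of *mut is thread-safe.
unsafe impl Send for Wrapper { }
unsafe impl Share for Wrapper { }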

Unresolved questions

  • The terminology of “unsafe trait” seems somewhat misleading, since it seems to suggest that “using” the trait is unsafe, rather than implementing it. One suggestion for an alternate keyword was trusted trait, which might dovetail with the use of trusted to specify a trusted block of code. If we did use trusted trait, it seems that all impls would also have to be trusted impl.
  • Perhaps we should declare a trait as a “default trait” directly, rather than using the impl Trait for .. syntax. I don’t know precisely what syntax to use, though.
  • Currently, there are special rules relating to object types and the builtin traits. If the “builtin” traits are no longer builtin, we will have to generalize object types to be simply a set of trait references. This is already planned but merits a second RFC. Note that no changes here are required for the 1.0, since the phasing plan dictates that builtin traits remain special until after 1.0.

Summary

This RFC is a proposal to remove the usage of the keyword priv from the Rust language.

Motivation

Removing priv entirely from the language significantly simplifies the privacy semantics and makes them easier to explain to newcomers. The one remaining case, private enum variants, can be rewritten as such:

// pub enum Foo {
//     Bar,
//     priv Baz,
// }

pub enum Foo {
    Bar,
    Baz(BazInner)
}

pub struct BazInner(());

// pub enum Foo2 {
//     priv Bar2,
//     priv Baz2,
// }

pub struct Foo2 {
    variant: FooVariant
}

enum FooVariant {
    Bar2,
    Baz2,
}

Private enum variants are a rarely used feature of the language, and are generally not regarded as a strong enough feature to justify the priv keyword entirely.

Detailed design

There remains only one use case of the priv visibility qualifier in the Rust language, which is to make enum variants private. For example, it is possible today to write a type such as:

pub enum Foo {
    Bar,
    priv Baz
}

In this example, the variant Bar is public, while the variant Baz is private. This RFC would remove this ability to have private enum variants.

In addition to disallowing the priv keyword on enum variants, this RFC would also forbid visibility qualifiers in front of enum variants entirely, as they no longer serve any purpose.

Status of the identifier priv

This RFC would demote the identifier priv from being a keyword to being a reserved keyword (in case we find a use for it in the future).

Alternatives

  • Allow private enum variants, as-is today.
  • Add a new keyword for enum which means “my variants are all private” with controls to make variants public.

Unresolved questions

  • Is the assertion that private enum variants are rarely used true? Are there legitimate use cases for keeping the priv keyword?

Summary

Check all types for well-formedness with respect to the bounds of type variables.

Allow bounds on formal type variables in structs and enums. Check that these bounds are satisfied wherever the struct or enum is used with actual type parameters.

Motivation

Makes type checking saner. Catches errors earlier in the development process. Matches behaviour with built-in bounds (I think).

Currently, formal type variables in traits and functions may have bounds, and these bounds are checked against the actual type parameters whenever the item is used. Where these type variables are used in types, those types should be checked for well-formedness with respect to the type definitions. E.g.,

trait U {}
trait T<X: U> {}
trait S<Y> {
    fn m(x: ~T<Y>) {}  // Should be flagged as an error
}

Formal type variables in structs and enums may not have bounds. It is possible to use these type variables in the types of fields, and these types cannot be checked for well-formedness until the struct is instantiated, where each field must be checked.

struct St<X> {
    f: ~T<X>, // Cannot be checked
}

Likewise, impls of structs are not checked. E.g.,

impl<X> St<X> {  // Cannot be checked
    ...
}

Here, no struct can exist where X is replaced by something that does not implement U, so in the impl, X can be assumed to have the bound U. But the impl does not indicate this. Note that this is sound, but it does not indicate programmer intent very well.

Detailed design

Whenever a type is used it must be checked for well-formedness. For polymorphic types we currently check only that the type exists. I would like to also check that any actual type parameters are valid. That is, given a type T<U> where T is declared as T<X: B>, we currently only check that T does in fact exist somewhere (I think we also check that the correct number of type parameters are supplied, in this case one). I would also like to check that U satisfies the bound B.
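
As a small illustration of the proposed check (the names here are hypothetical):

trait B { }
struct T<X: B> { x: X }

fn f(t: T<int>) { }  // proposed: an error unless int implements B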

Work on built-in bounds is (I think) in the process of adding this behaviour for those bounds. I would like to apply this to user-specified bounds too.

I think no fewer programs can be expressed. That is, any errors we catch with this new check would have been caught later in the existing scheme, where exactly would depend on where the type was used. The only exception would be if the formal type variable was not used.

We would allow bounds on type variables in structs and enums. Wherever a concrete struct or enum type appears, check the actual type parameters against the bounds on the formals (the type well-formedness check).

From the above examples:

trait U {}
trait T<X: U> {}
trait S1<Y> {
    fn m(x: ~T<Y>) {}  //~ ERROR
}
trait S2<Y: U> {
    fn m(x: ~T<Y>) {}
}

struct St<X: U> {
    f: ~T<X>,
}

impl<X: U> St<X> {
    ...
}

Alternatives

Keep the status quo.

We could add bounds on structs, etc., but not check them in impls. This is safe since the implementation is more general than the struct. It would mean we allow impls to be unnecessarily general.

Unresolved questions

Do we allow and check bounds in type aliases? We currently do not. We should probably continue not to since these type variables (and indeed the type aliases) are substituted away early in the type checking process. So if we think of type aliases as almost macro-like, then not checking makes sense. OTOH, it is still a little bit inconsistent.

Summary

Split the current libstd into component libraries, rebuild libstd as a facade in front of these component libraries.

Motivation

Rust as a language is ideal for usage in constrained contexts such as embedding in applications, running on bare metal hardware, and building kernels. The standard library, however, is not quite as portable as the language itself yet. The standard library should be as usable as it can be in as many contexts as possible, without compromising its usability in any context.

This RFC is meant to expand the usability of the standard library into these domains where it does not currently operate easily.

Detailed design

In summary, the following libraries would make up part of the standard distribution. The libraries listed after each colon are that library’s dependencies.

  • libmini
  • liblibc
  • liballoc: libmini liblibc
  • libcollections: libmini liballoc
  • libtext: libmini liballoc libcollections
  • librustrt: libmini liballoc liblibc
  • libsync: libmini liballoc liblibc librustrt
  • libstd: everything above

libmini

Note: The name libmini warrants bikeshedding. Please consider it a placeholder for the name of this library.

This library is meant to be the core component of all rust programs in existence. This library has very few external dependencies, and is entirely self contained.

Current modules in std which would make up libmini would include the list below. This list was put together by actually stripping down libstd to these modules, so it is known that it is possible for libmini to compile with these modules.

  • atomics
  • bool
  • cast
  • char
  • clone
  • cmp
  • container
  • default
  • finally
  • fmt
  • intrinsics
  • io, stripped down to its core
  • iter
  • kinds
  • mem
  • num (and related modules), no float support
  • ops
  • option
  • ptr
  • raw
  • result
  • slice, but without any ~[T] methods
  • tuple
  • ty
  • unit

This list may be a bit surprising, and its makeup is discussed below. Note that this makeup is selected specifically to eliminate the need for the dreaded “one-off extension trait”. This pattern, while possible, is currently viewed as subpar due to reduced documentation benefit and a sharded implementation across many locations.

Strings

In a post-DST world, the string type will actually be a library-defined type, Str (or similarly named). Strings will no longer be a language feature or a language-defined type. This implies that any methods on strings must be in the same crate that defined the Str type, or done through extension traits.

In the spirit of reducing extension traits, the Str type and module were left out of libmini. It’s impossible for libmini to support all methods of Str, so it was entirely removed.

This decision does have ramifications on the implementation of libmini.

  • String literals are an open question. In theory, making a string literal would require the Str lang item to be present, which is not the case in libmini. That being said, libmini would certainly create many literal strings (for error messages and such). This may be adequately circumvented by having literal strings create a value of type &'static [u8] if the string lang item is not present. While difficult to work with, this may get us 90% of the way there.

  • The fmt module must be tweaked for the removal of strings. The only major user-facing detail is that the pad function on Formatter would take a byte slice and a character length, and would not handle the precision (which truncates the byte slice to a number of characters). This may be overcome by adding an extension trait for Formatter that provides a real pad function taking strings, or by removing the function altogether in favor of str.fmt(formatter).

  • The IoError type suffers from the removal of strings. Currently, this type contains three fields: an enum, a static description string, and an optionally allocated detail string. Removal of strings would imply the IoError type would be just the enum itself. This may be an acceptable compromise to make, defining the IoError type upstream and providing easy constructors from the enum to the struct. Additionally, the OtherIoError enum variant would be extended with an i32 payload representing the error code (if it came from the OS).

  • The ascii module is omitted, but it would likely be defined in the crate that defines Str.

Formatting

While not often thought of as “ultra-core” functionality, this module may be necessary because printing information about types is a fundamental problem that normally requires no dependencies.

Inclusion of this module is the reason why I/O is included in this library as well (or at least a few of its traits), but the module can otherwise be included with little to no overhead required in terms of dependencies.

Neither the print! nor the format! macro would be a part of this library, but the write! macro would be present.

I/O

The primary reason for defining the io module in the libmini crate would be to implement the fmt module. The ramification of removing strings was previously discussed for IoError, but there are further modifications that would be required for the io module to exist in libmini:

  • The Buffer, Listener, Seek, and Acceptor traits would all be defined upstream instead of in libmini. Very little in libstd uses these traits, and nothing in libmini requires them. They are of questionable utility when considering their applicability to all rust code in existence.

  • Some extension methods on the Reader and Writer traits would need to be removed. Methods such as push_exact, read_exact, read_to_end, write_line, etc., all require owned vectors or similar unimplemented runtime requirements. These can likely be moved to extension traits upstream, defined for all readers and writers. Note that this does not apply to the integral reading and writing methods. These are occasionally overridden for performance, but if the other extension methods are removed, that would strongly suggest to me that these should be removed as well. Regardless, the remaining methods could live in essentially any location.

Slices

The only method lost on mutable slices would currently be the sorting method. This can be circumvented by implementing a sorting algorithm that doesn’t require allocating a temporary buffer. If intensive use of a sorting algorithm is required, Rust can provide a libsort crate with a variety of sorting algorithms apart from the default sorting algorithm.

FromStr

This trait and module are left out because strings are left out. All types in libmini can have their implementation of FromStr in the crate which implements strings.

Floats

This current design excludes floats entirely from libmini (implementations of traits and such). This is another questionable decision, but the current implementation of floats heavily leans on functions defined in libm, so it is unacceptable for these functions to exist in libmini.

Either libstd or a libfloat crate will define floating point traits and such.

Failure

It is unacceptable for Option to reside outside of libmini, but it is also unacceptable for unwrap to live outside of the Option type. Consequently, this means that it must be possible for libmini to fail.

While it is impossible for libmini to define failure, it should simply be able to declare failure. This is not possible today, but it could become possible through an extension to the language: “weak lang items”.

Implementation-wise, the failure lang item would have a predefined symbol at which it is defined, and libraries which declare but do not define failure are required to only exist in the rlib format. This implies that libmini can only be built as an rlib. Note that today’s linkage rules do not allow for this (because building a dylib with rlib dependencies is not possible), but the rules could be tweaked to allow for this use case.

tl;dr: The implementation of libmini can use failure, but it does not define failure. All usage of libmini would require an implementation of failure somewhere.

liblibc

This library will exist to provide bindings to libc. This will be a highly platform-specific library, containing an entirely separate API depending on which platform it’s being built for.

This crate will be used to provide bindings to the C language in all forms, and would itself essentially be a giant metadata blob. It conceptually represents the inclusion of all C header files.

Note that the funny name of the library is to allow extern crate libc; to be the form of declaration rather than extern crate c;, which is considered too short for its own good.

Note that this crate can only exist in rlib or dylib form.

liballoc

Note: This name liballoc is questionable, please consider it a placeholder.

This library would define the allocator traits as well as bind to libc malloc/free (or jemalloc if we decide to include it again). This crate would depend on liblibc and libmini.

Pointers such as ~ and Rc would move into this crate using the default allocator. The current Gc pointers would move to libgc if possible, or otherwise librustrt for now (they’re feature gated currently, not super pressing).

Primarily, this library assumes that an allocation failure should trigger a failure. This makes the library not suitable for use in a kernel, but it is suitable essentially everywhere else.

With today’s libstd, this crate would likely mostly be made up by the global_heap module. Its purpose is to define the allocation lang items required by the compiler.

Note that this crate can only exist in rlib form.

libcollections

This crate would not depend on libstd, it would only depend on liballoc and libmini. These two foundational crates should provide all that is necessary to provide a robust set of containers (what you would expect today). Each container would likely have an allocator parameter, and the default would be the default allocator provided by liballoc.

When using the containers from libcollections, it is implicitly assumed that all allocation succeeds, and this will be reflected in the API of each collection.
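
A sketch of what this might look like, assuming default type parameters and a DefaultAllocator type provided by liballoc (both names are placeholders):

// Hypothetical: each container carries an allocator parameter which
// defaults to liballoc's default allocator.
pub struct Vec<T, A = DefaultAllocator> {
    // ...
}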

The contents of this crate would be the entirety of libcollections as it is today, as well as the vec module from the standard library. This would also implement any relevant traits necessary for ~[T].

Note that this crate can only exist in rlib form.

libtext

This crate would define all functionality in rust related to strings. This would contain the definition of the Str type, as well as implementations of the relevant traits from libmini for the string type.

The crucial assumption of this crate is that allocation does not fail, and the rest of the string functionality could be built on top of this. Note that this crate will depend on libcollections for the Vec type as the underlying building block for string buffers and the string type.

This crate would be composed of the str, ascii, and unicode modules which live in libstd today, but would allow for the extension of other text-related functionality.

librustrt

This library would be the crate where the rt module is almost entirely implemented. It will assume that allocation succeeds, and it will assume a libc implementation to run on.

The current libstd modules which would be implemented as part of this crate would be:

  • rt
  • task
  • local_data

Note that comm is not on this list. This crate will additionally define failure (as unwinding for each task). This crate can exist in both rlib and dylib form.

libsync

This library will largely remain what it is today, with the exception that the comm implementation would move into this crate. The purpose of doing so would be to consolidate all concurrency-related primitives in this crate, leaving none out.

This crate would depend on the runtime for task management (scheduling and descheduling).

The libstd facade

A new standard library would be created that would primarily be a facade which would expose the underlying crates as a stable API. This library would depend on all of the above libraries, and would predominantly be a grouping of pub use statements.

This library would also be the library to contain the prelude which would include types from the previous crates. All remaining functionality of the standard library would be filled in as part of this crate.
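
Concretely, the facade might read roughly like this (the crate and item names are illustrative only):

// libstd as a facade: link the component crates and reexport them.
extern crate collections;
extern crate text;
extern crate sync;

pub use collections::{Vec, HashMap};
pub use text::Str;
pub use sync::Arc;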

Note that all rust programs will by default link to libstd, and hence will transitively link to all of the upstream crates mentioned above. Many more APIs will be exposed through libstd directly, however, such as HashMap, Arc, etc.

The exact details of the makeup of this crate will change over time, but it can be considered as “the current libstd plus more”, and this crate will be the source of the “batteries included” aspect of the rust standard library. The API (reexported paths) of the standard library would not change over time. Once a path is reexported and a release is made, that path will be forced to remain constant over time.

One of the primary reasons for this facade is to provide freedom to restructure the underlying crates. Once a facade is established, it is the only stable API. The actual structure and makeup of all the above crates will be fluid until an acceptable design is settled on. Note that this fluidity does not apply to libstd, only to the structure of the underlying crates.

Updates to rustdoc

With today’s incarnation of rustdoc, the documentation for this libstd facade would not be as high quality as it is today. The facade would just provide hyperlinks back to the original crates, which would have reduced quantities of documentation in terms of navigation, implemented traits, etc. Additionally, these reexports are meant to be implementation details, not facets of the api. For this reason, rustdoc would have to change in how it renders documentation for libstd.

First, rustdoc would consider a cross-crate reexport as inlining of the documentation (similar to how it inlines reexports of private types). This would allow all documentation in libstd to remain in the same location (even the same urls!). This would likely require extensive changes to rustdoc for when entire module trees are reexported.

Secondly, rustdoc will have to be modified to collect implementors of reexported traits all in one location. When libstd reexports trait X, rustdoc will have to search libstd and all its dependencies for implementors of X, listing them out explicitly.

These changes to rustdoc should place it in a much more presentable space, but it is an open question to what degree these modifications will suffice and how much further rustdoc will have to change.

Remaining crates

There are many more crates in the standard distribution of rust, all of which currently depend on libstd. These crates would continue to depend on libstd as most rust libraries would.

A new effort would likely arise to reduce dependence on the standard library by cutting down to the core dependencies (if necessary). For example, the libnative crate currently depends on libstd, but it in theory doesn’t need to depend on much other than librustrt and liblibc. By cutting out dependencies, new use cases will likely arise for these crates.

Crates outside of the standard distribution of rust will likely want to link to the above crates as well (and specifically not libstd). For example, crates which only depend on libmini are likely candidates for being used in kernels, whereas crates only depending on liballoc are good candidates for being embedded into other languages. Having a clear delineation for the usability of a crate in various environments seems beneficial.

Alternatives

  • There are many alternatives to the above sharding of libstd and its dependent crates. The one that is most rigid is likely libmini, but the contents of all other crates are fairly fluid and able to shift around. To this degree, there are quite a few alternatives in how the remaining crates are organized. The ordering proposed is simply one of many.

  • Compilation profiles. Instead of using crate dependencies to encode where a crate can be used, crates could instead be composed of cfg(foo) attributes. In theory, there would be one libstd crate (in terms of source code), and this crate could be compiled with flags such as --cfg libc, --cfg malloc, etc. This route may have the problem of “multiple standard libraries” in that code compatible with the “libc libstd” is not necessarily compatible with the “no libc libstd”. Asserting that a crate is compatible with multiple profiles would involve requiring multiple compilations.

  • Removing libstd entirely. If the standard library is simply a facade, the compiler could theoretically only inject a select number of crates into the prelude, or possibly even omit the prelude altogether. This works towards eliminating the question of “does this belong in libstd”, but it would possibly be difficult to juggle the large number of crates to choose from where one could otherwise just look at libstd.

Unresolved questions

  • Compile times. It’s possible that having so many upstream crates for each rust crate will increase compile times through reading metadata and invoking the system linker. Would sharding crates still be worth it? Could possible problems that arise be overcome? Would extra monomorphization in all these crates end up causing more binary bloat?

  • Binary bloat. Another possible side effect of having many upstream crates would be increasing binary bloat of each rust program. Our current linkage model means that if you use anything from a crate that you get everything in that crate (in terms of object code). It is unknown to what degree this will become a concern, and to what degree it can be overcome.

  • Should floats be left out of libmini? This is largely a question of how much runtime support is required for floating point operations. Ideally functionality such as formatting a float would live in libmini, whereas trigonometric functions would live in an external crate with a dependence on libm.

  • Is it acceptable for strings to be left out of libmini? Many common operations on strings don’t require allocation. This is currently done out of necessity of having to define the Str type elsewhere, but this may be seen as too limiting for the scope of libmini.

  • Does liblibc belong so low in the dependency tree? In the proposed design, only the libmini crate doesn’t depend on liblibc. Crates such as libtext and libcollections, however, arguably have no dependence on libc itself, they simply require some form of allocator. Answering this question would involve figuring out how to break liballoc’s dependency on liblibc, but it’s an open question as to whether this is worth it or not.

  • Reexporting macros. Currently the standard library defines a number of useful macros which are used throughout the implementation of libstd. There is no way to reexport a macro, so multiple implementations of the same macro would be required for the core libraries to all use the same macro. Is there a better solution to this situation? How much of an impact does this have?

Summary

Add a regexp crate to the Rust distribution in addition to a small regexp_macros crate that provides a syntax extension for compiling regular expressions during the compilation of a Rust program.

The implementation that supports this RFC is ready to receive feedback: https://github.com/BurntSushi/regexp

Documentation for the crate can be seen here: http://burntsushi.net/rustdoc/regexp/index.html

regex-dna benchmark (vs. Go, Python): https://github.com/BurntSushi/regexp/tree/master/benchmark/regex-dna

Other benchmarks (vs. Go): https://github.com/BurntSushi/regexp/tree/master/benchmark

(Perhaps the links should be removed if the RFC is accepted, since I can’t guarantee they will always exist.)

Motivation

Regular expressions provide a succinct method of matching patterns against search text and are frequently used. For example, many programming languages include some kind of support for regular expressions in their standard libraries.

The outcome of this RFC is to include a regular expression library in the Rust distribution and resolve issue #3591.

Detailed design

(Note: This is describing an existing design that has been implemented. I have no idea how much of this is appropriate for an RFC.)

The first choice that most regular expression libraries make is whether or not to include backreferences in the supported syntax, as this heavily influences the implementation and the performance characteristics of matching text.

In this RFC, I am proposing a library that closely models Russ Cox’s RE2 (either its C++ or Go variants). This means that features like backreferences or generalized zero-width assertions are not supported. In return, we get O(mn) worst case performance (with m being the size of the search text and n being the number of instructions in the compiled expression).

My implementation currently simulates an NFA using something resembling the Pike VM. Future work could possibly include adding a DFA. (N.B. RE2/C++ includes both an NFA and a DFA, but RE2/Go only implements an NFA.)

The primary reason why I chose RE2 was that it seemed to be a popular choice in issue #3591, and its worst case performance characteristics seemed appealing. I was also drawn to the limited set of syntax supported by RE2 in comparison to other regexp flavors.

With that out of the way, there are other things that inform the design of a regexp library.

Unicode

Given the already existing support for Unicode in Rust, this is a no-brainer. Unicode literals should be allowed in expressions and Unicode character classes should be included (e.g., general categories and scripts).

Case folding is also important for case insensitive matching. Currently, this is implemented by converting characters to their uppercase forms and then comparing them. Future work includes applying at least a simple fold, since folding one Unicode character can produce multiple characters.

Normalization is another thing to consider, but like most other regexp libraries, the one I’m proposing here does not do any normalization. (It seems the recommended practice is to do normalization before matching if it’s needed.)

A nice implementation strategy to support Unicode is to implement a VM that matches characters instead of bytes. Indeed, my implementation does this. However, the public API of a regular expression library should expose byte indices corresponding to match locations (which ought to be guaranteed to be UTF8 codepoint boundaries by construction of the VM). My reason for this is that byte indices result in a lower cost abstraction. If character indices are desired, then a mapping can be maintained by the client at their discretion.

Additionally, this makes it consistent with the std::str API, which also exposes byte indices.
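
For example, a sketch assuming a find method that returns the location of the match:

let re = Regexp::new(r"héllo").unwrap();
// start and end are byte indices into the search text, guaranteed to
// fall on UTF8 codepoint boundaries.
let (start, end) = re.find("say héllo").unwrap();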

Word boundaries, word characters and Unicode

At least Python and D define word characters, word boundaries and space characters with Unicode character classes. My implementation does the same by augmenting the standard Perl character classes \d, \s and \w with corresponding Unicode categories.

Leftmost-first

As of now, my implementation finds the leftmost-first match. This is consistent with PCRE style regular expressions.

I’ve pretty much ignored POSIX, but I think it’s very possible to add leftmost-longest semantics to the existing VM. (RE2 supports this as a parameter, but I believe still does not fully comply with POSIX with respect to picking the correct submatches.)

Public API

There are three main questions that can be asked when searching text:

  1. Does the string match this expression?
  2. If so, where?
  3. Where are its submatches?

In principle, an API could provide a function to only answer (3). The answers to (1) and (2) would immediately follow. However, keeping track of submatches is expensive, so it is useful to implement an optimization that doesn’t keep track of them if it doesn’t have to. For example, submatches do not need to be tracked to answer questions (1) and (2).

The rabbit hole continues: answering (1) can be more efficient than answering (2) because you don’t have to keep track of any capture groups ((2) requires tracking the position of the full match). More importantly, (1) enables early exit from the VM. As soon as a match is found, the VM can quit instead of continuing to search for greedy expressions.

Therefore, it’s worth it to segregate these operations. The performance difference can get even bigger if a DFA were implemented (which can answer (1) and (2) quickly and even help with (3)). Moreover, most other regular expression libraries provide separate facilities for answering these questions separately.
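
In a sketch, with method names assumed to follow the linked documentation, the three questions map onto three separate methods:

let re = Regexp::new(r"(\w+)@(\w+)").unwrap();
let text = "user@example";
let matched = re.is_match(text);  // (1): tracks nothing extra
let location = re.find(text);     // (2): tracks only the full match
let groups = re.captures(text);   // (3): tracks all capture groups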

Some libraries (like Python’s re and RE2/C++) distinguish between matching an expression against an entire string and matching an expression against part of the string. My implementation favors simplicity: matching the entirety of a string requires using the ^ and/or $ anchors. In all cases, an implicit .*? is added to the beginning and end of each expression evaluated (this is optimized out in the presence of anchors).
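
A small illustration:

let anywhere = Regexp::new("abc").unwrap();   // matches within "xabcy"
let entire = Regexp::new("^abc$").unwrap();   // matches only "abc" itself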

Finally, most regexp libraries provide facilities for splitting and replacing text, usually making capture group names available with some sort of $var syntax. My implementation provides this too. (These are a perfect fit for Rust’s iterators.)

This basically makes up the entirety of the public API, in addition to perhaps a quote function that escapes a string so that it may be used as a literal in an expression.

The regexp! macro

With syntax extensions, it’s possible to write a regexp! macro that compiles an expression when a Rust program is compiled. This includes translating the matching algorithm to Rust code specific to the expression given. This “ahead of time” compiling results in a performance increase. Namely, it elides all heap allocation.

I’ve called these “native” regexps, whereas expressions compiled at runtime are “dynamic” regexps. The public API need not impose this distinction on users, other than requiring the use of a syntax extension to construct a native regexp. For example:

let re = regexp!("a*");

After construction, re is indistinguishable from an expression created dynamically:

let re = Regexp::new("a*").unwrap();

In particular, both have the same type. This is accomplished with a representation resembling:

enum MaybeNative {
    Dynamic(~[Inst]),
    Native(fn(MatchKind, &str, uint, uint) -> ~[Option<uint>]),
}

This syntax extension requires a second crate, regexp_macros, where the regexp! macro is defined. Technically, this could be provided in the regexp crate, but this would introduce a runtime dependency on libsyntax for any use of the regexp crate.

@alexcrichton remarks that this state of affairs is a wart that will be corrected in the future.

Untrusted input

Given worst case O(mn) time complexity, I don’t think it’s worth worrying about unsafe search text.

Untrusted regular expressions are another matter. For example, it’s very easy to exhaust a system’s resources with nested counted repetitions. For example, ((a{100}){100}){100} tries to create 100^3 = 1,000,000 instructions. My current implementation does nothing to mitigate this, but I think a simple hard limit on the number of instructions allowed would work fine. (Should it be configurable?)

Name

The name of the crate being proposed is regexp and the type describing a compiled regular expression is Regexp. I think an equally good name would be regex (and Regex). Either name seems to be frequently used, e.g., “regexes” or “regexps” in colloquial use. I chose regexp over regex because it matches the name used for the corresponding package in Go’s standard library.

Other possible names are regexpr (and Regexpr) or something with underscores: reg_exp (and RegExp). However, I perceive these to be more ugly and less commonly used than either regexp or regex.

Finally, we could use re (like Python), but I think the name could be ambiguous since it’s so short. regexp (or regex) unequivocally identifies the crate as providing regular expressions.

For consistency’s sake, I propose that the syntax extension provided be named the same as the crate. So in this case, regexp!.

Summary

My implementation is pretty much a port of most of RE2. The syntax should be identical or almost identical. I think matching an existing (and popular) library has benefits, since it will make it easier for people to pick it up and start using it. There will also be (hopefully) fewer surprises. There is also plenty of room for performance improvement by implementing a DFA.

Alternatives

I think the single biggest alternative is to provide a backtracking implementation that supports backreferences and generalized zero-width assertions. I don’t think my implementation precludes this possibility. For example, a backtracking approach could be implemented and used only when features like backreferences are invoked in the expression. However, this gives up the blanket guarantee of worst case O(mn) time. I don’t think I have the wisdom required to voice a strong opinion on whether this is a worthwhile endeavor.

Another alternative is using a binding to an existing regexp library. I think this was discussed in issue #3591 and it seems like people favor a native Rust implementation if it’s to be included in the Rust distribution. (Does the regexp! macro require it? If so, that’s a huge advantage.) Also, a native implementation makes it maximally portable.

Finally, it is always possible to persist without a regexp library.

Unresolved questions

The public API design is fairly simple and straightforward with no surprises. I think most of the unresolved stuff is how the backend is implemented, which should be changeable without changing the public API (sans adding features to the syntax).

I can’t remember where I read it, but someone had mentioned defining a trait that declared the API of a regexp engine. That way, anyone could write their own backend and use the regexp interface. My initial thoughts are YAGNI—since requiring different backends seems like a super specialized case—but I’m just hazarding a guess here. (If we go this route, then we might want to expose the regexp parser and AST and possibly the compiler and instruction set to make writing your own backend easier. That sounds restrictive with respect to making performance improvements in the future.)

I personally think there’s great value in keeping the standard regexp implementation small, simple and fast. People who have more specialized needs can always pick one of the existing C or C++ libraries.

For now, we could mark the API as #[unstable] or #[experimental].

Future work

I think most of the future work for this crate is to increase the performance, either by implementing different matching algorithms (e.g., a DFA) or by improving the code generator that produces native regexps with regexp!.

If and when a DFA is implemented, care must be taken when creating a code generator, as the size of the code required can grow rapidly.

Other future work (that is probably more important) includes more Unicode support, specifically for simple case folding.

Summary

Clean up the trait, method, and operator semantics so that they are well-defined and cover more use cases. A high-level summary of the changes is as follows:

  1. Generalize explicit self types beyond &self and &mut self etc, so that self-type declarations like self: Rc<Self> become possible.
  2. Expand coherence rules to operate recursively and distinguish orphans more carefully.
  3. Revise vtable resolution algorithm to be gradual.
  4. Revise method resolution algorithm in terms of vtable resolution.

This RFC excludes discussion of associated types and multidimensional type classes, which will be the subject of a follow-up RFC.

Motivation

The current trait system is ill-specified and inadequate. Its implementation dates from a rather different language. It should be put onto a surer footing.

Use cases

Poor interaction with overloadable deref and index

Addressed by: New method resolution algorithm.

The deref operator * is a flexible one. Imagine a pointer p of type ~T. This same * operator can be used for three distinct purposes, depending on context.

  1. Create an immutable reference to the referent: &*p.
  2. Create a mutable reference to the referent: &mut *p.
  3. Copy/move the contents of the referent: consume(*p).

Not all of these operations are supported by all types. In fact, because most smart pointers represent aliasable data, they will only support the creation of immutable references (e.g., Rc, Gc). Other smart pointers (e.g., the RefMut type returned by RefCell) support mutable or immutable references, but not moves. Finally, a type that owns its data (like, indeed, ~T) might support #3.

To reflect this, we use distinct traits for the various operators. (In fact, we don’t currently have a trait for copying/moving the contents; this could be a distinct RFC (ed., I’m still thinking this over myself, there are non-trivial interactions).)
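
For reference, the deref traits of this era, which the method call algorithm must choose between, look roughly like this:

pub trait Deref<Result> {
    fn deref<'a>(&'a self) -> &'a Result;
}

pub trait DerefMut<Result>: Deref<Result> {
    fn deref_mut<'a>(&'a mut self) -> &'a mut Result;
}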

Unfortunately, the method call algorithm can’t really reliably choose mutable vs immutable deref. The challenge is that the proper choice will sometimes not be apparent until quite late in the process. For example, imagine the expression p.foo(): if foo() is defined with &self, we want an immutable deref, otherwise we want a mutable deref.

Note that in this RFC I do not completely address this issue. In particular, in an expression like (*p).foo(), where the dereference is explicit and not automatically inserted, the sense of the dereference is not inferred. For the time being, the sense can be manually specified by making the receiver type fully explicit: (&mut *p).foo() vs (&*p).foo(). I expect in a follow-up RFC to possibly address this problem, as well as the question of how to handle copies and moves of the referent (use #3 in my list above).

Lack of backtracking

Addressed by: New method resolution algorithm.

Issue #XYZ. When multiple traits define methods with the same name, it is ambiguous which trait is being used:

trait Foo { fn method(&self); }
trait Bar { fn method(&self); }

In general, so long as a given type only implements Foo or Bar, these ambiguities don’t present a problem (and ultimately Universal Function Call Syntax or UFCS will present an explicit resolution). However, this is not guaranteed. Sometimes we see “blanket” impls like the following:

impl<A:Base> Foo for A { }

This impl basically says “any type T that implements Base automatically implements Foo”. Now, we expect an ambiguity error if we have a type T that implements both Base and Bar. But in fact, we’ll get an ambiguity error even if a type only implements Bar. The reason for this is that the current method resolution doesn’t “recurse” and check additional dependencies when deciding if an impl is applicable. So it will decide, in this case, that the type T could implement Foo and then record for later that T must implement Base. This will lead to weird errors.

Overly conservative coherence

Addressed by: Expanded coherence rules.

The job of coherence is to ensure that, for any given set of type parameters, a given trait is implemented at most once (it may of course not be implemented at all). Currently, however, coherence is more conservative than it needs to be. This is partly because it doesn’t take into account the very property that it itself is enforcing.

The problems arise due to the “blanket impls” I discussed in the previous section. Consider the following two traits and a blanket impl:

trait Base { }
trait Derived { }
impl<A:Base> Derived for A { }

Here we have two traits Base and Derived, and a blanket impl which implements the Derived trait for any type A that also implements Base.

This implies that if you implement Base for a type S, then S automatically implements Derived:

struct S;
impl Base for S { } // Implement Base => Implements Derived

On a related note, it’d be an error to implement both Base and Derived for the same type T:

// Illegal
struct T;
impl Base for T { }
impl Derived for T { }

This is illegal because now there are two implementations of Derived for T. There is the direct one, but also an indirect one. We do not assign either higher precedence; we just report it as an error.

So far, all is in agreement with the current rules. However, problems arise if we imagine a type U that only implements Derived:

struct U;
impl Derived for U { } // Should be OK, currently not.

In this scenario, there is only one implementation of Derived. But the current coherence rules still report it as an error.

Here is a concrete example where a rule like this would be useful. We currently have the Copy trait (aka Pod), which states that a type can be memcopied. We also have the Clone trait, which is a more heavyweight version for types where copying requires allocation. It’d be nice if all types that could be copied could also be cloned – it’d also be nice if we knew for sure that copying a value had the same semantics as cloning it, in that case. We can guarantee both using a blanket impl like the following:

impl<T:Copy> Clone for T {
    fn clone(&self) -> T {
        *self
    }
}

Unfortunately, writing such an impl today would imply that no other types could implement Clone. Obviously a non-starter.

There is one not especially interesting ramification of this. Permitting this rule means that adding impls to a type could cause coherence errors. For example, if I had a type which implements Copy, and I add an explicit implementation of Clone, I’d get an error due to the blanket impl. This could be seen as undesirable (perhaps we’d like to preserve that property that one can always add impls without causing errors).

But of course we already don’t have the property that one can always add impls, since method calls could become ambiguous. And if we were to add “negative bounds”, which might be nice, we’d lose that property. And the popularity and usefulness of blanket impls cannot be denied. Therefore, I think this property (“always being able to add impls”) is not especially useful or important.

Hokey implementation

Addressed by: Gradual vtable resolution algorithm

In an effort to improve inference, the current implementation has a rather ad-hoc two-pass scheme. When performing a method call, it will immediately attempt “early” trait resolution and – if that fails – defer checking until later. This helps with some particular scenarios, such as a trait like:

trait Map<E> {
    fn map(&self, op: |&E| -> E) -> Self;
}

Given some higher-order function like:

fn some_mapping<E,V:Map<E>>(v: &V, op: |&E| -> E) { ... }

If we were then to see a call like:

some_mapping(vec, |elem| ...)

the early resolution would be helpful in connecting the type of elem with the type of vec. The reason to use two phases is that often we don’t need to resolve each trait bound to a specific impl, and if we wait till the end then we will have more type information available.

In my proposed solution, we eliminate the phase distinction. Instead, we simply track pending constraints. We are free to attempt to resolve pending constraints whenever desired. In particular, whenever we find we need more type information to proceed with some type-overloaded operation, rather than reporting an error we can try and resolve pending constraints. If that helps give more information, we can carry on. Once we reach the end of the function, we must then resolve all pending constraints that have not yet been resolved for some other reason.

Note that there is some interaction with the distinction between input and output type parameters discussed in the previous example. Specifically, we must never infer the value of the Self type parameter based on the impls in scope. This is because it would cause crate concatenation to potentially lead to compilation errors in the form of inference failure.

Properties

There are important properties I would like to guarantee:

  • Coherence or No Overlapping Instances: Given a trait and values for all of its type parameters, there should always be at most one applicable impl. This should remain true even when unknown, additional crates are loaded.
  • Crate concatenation: It should always be possible to take two crates and combine them without causing compilation errors.

Here are some properties I do not intend to guarantee:

  • Crate divisibility: It is not always possible to divide a crate into two crates. Specifically, this may incur coherence violations due to the orphan rules.
  • Decidability: Haskell has various sets of rules aimed at ensuring that the compiler can decide whether a given trait is implemented for a given type. All of these rules wind up preventing useful implementations and thus can be turned off with the undecidable-instances flag. I don’t think decidability is especially important. The compiler can simply keep a recursion counter and report an error if that level of recursion is exceeded. This counter can be adjusted by the user on a crate-by-crate basis if some bizarre impl pattern happens to require a deeper depth to be resolved.

Detailed design

In general, I won’t give a complete algorithmic specification. Instead, I refer readers to the prototype implementation. I would like to write out a declarative and non-algorithmic specification for the rules too, but that is work in progress and beyond the scope of this RFC. Instead, I’ll try to explain in “plain English”.

Method self-type syntax

Currently methods must be declared using the explicit-self shorthands:

fn foo(self, ...)
fn foo(&self, ...)
fn foo(&mut self, ...)
fn foo(~self, ...)

Under this proposal we would keep these shorthands but also permit any function in a trait to be used as a method, so long as the type of the first parameter is either Self or something derefable to Self:

fn foo(self: Gc<Self>, ...)
fn foo(self: Rc<Self>, ...)
fn foo(self: Self, ...)      // equivalent to `fn foo(self, ...)`
fn foo(self: &Self, ...)     // equivalent to `fn foo(&self, ...)`

It would not be required that the first parameter be named self, though it seems like it would be useful to permit it. It’s also possible we can simply make self not be a keyword (that would be my personal preference, if we can achieve it).

Coherence

The coherence rules fall into two categories: the orphan restriction and the overlapping implementations restriction.

Orphan check: Every implementation must meet one of the following conditions:

  1. The trait being implemented (if any) must be defined in the current crate.

  2. The Self type parameter must meet the following grammar, where C is a struct or enum defined within the current crate:

    T = C
      | [T]
      | [T, ..n]
      | &T
      | &mut T
      | ~T
      | (..., T, ...)
      | X<..., T, ...> where X is not bivariant with respect to T
    

Overlapping instances: No two implementations of the same trait can be defined for the same type (note that it is only the Self type that matters). For the purposes of this check, we will also recursively check bounds. This check is ultimately defined in terms of the RESOLVE algorithm discussed in the implementation section below: it must be able to conclude that the requirements of one impl are incompatible with the other.

Here is a simple example that is OK:

trait Show { ... }
impl Show for int { ... }
impl Show for uint { ... }

The first impl implements Show for int and the second implements Show for uint. This is ok because the type int cannot be unified with uint.

The following example is NOT OK:

trait Iterator<E> { ... }
impl Iterator<char> for ~str  { ... }
impl Iterator<u8> for ~str { ... }

Even though E is bound to two distinct types, E is an output type parameter, and hence we get a coherence violation because the input type parameters are the same in each case.

Here is a more complex example that is also OK:

trait Clone { ... }
impl<A:Copy> Clone for A { ... }
impl<B:Clone> Clone for ~B { ... }

These two impls are compatible because the resolution algorithm is able to see that the type ~B will never implement Copy, no matter what B is. (Note that our ability to do this check relies on the orphan checks: without those, we’d never know if some other crate might add an implementation of Copy for ~B.)

Since trait resolution is not fully decidable, it is possible to concoct scenarios in which coherence can neither confirm nor deny the possibility that two impls are overlapping. One way for this to happen is when there are two traits which the user knows are mutually exclusive; mutual exclusion is not currently expressible in the type system [7] however, and hence the coherence check will report errors. For example:

trait Even { } // Naturally can't be Even and Odd at once!
trait Odd { }
impl<T:Even> Foo for T { }
impl<T:Odd> Foo for T { }

Another possible scenario is infinite recursion between impls. For example, in the following scenario, the coherence checker would be unable to decide whether the two impls overlap:

impl<A:Foo> Bar for A { ... }
impl<A:Bar> Foo for A { ... }

In such cases, the recursion bound is exceeded and an error is conservatively reported. (Note that recursion is not always so easily detected.)

Method resolution

Let us assume the method call is r.m(...) and the type of the receiver r is R. We will resolve the call in two phases. The first phase checks for inherent methods [4] and the second phase for trait methods. Both phases work in a similar way, however. We will just describe how trait method search works and then express the inherent method search in terms of traits.

The core method search looks like this:

METHOD-SEARCH(R, m):
    let TRAITS = the set consisting of any in-scope trait T where:
        1. T has a method m and
        2. R implements T<...> for any values of T's type parameters

    if TRAITS is an empty set:
        if RECURSION DEPTH EXCEEDED:
            return UNDECIDABLE
        if R implements Deref<U> for some U:
            return METHOD-SEARCH(U, m)
        return NO-MATCH

    if TRAITS is the singleton set {T}:
        RECONCILE(R, T, m)

    return AMBIGUITY(TRAITS)

Basically, we will continuously auto-dereference the receiver type, searching for some type that implements a trait that offers the method m. This gives precedence to implementations that require fewer autodereferences. (There exists the possibility of a cycle in the Deref chain, so we will only autoderef so many times before reporting an error.)

Receiver reconciliation

Once we find a trait that is implemented for the (adjusted) receiver type R and which offers the method m, we must reconcile the receiver with the self type declared in m. Let me explain by example.

Consider a trait Mob (anyone who ever hacked on the MUD source code will surely remember Mobs!):

trait Mob {
    fn hit_points(&self) -> int;
    fn take_damage(&mut self, damage: int) -> int;
    fn move_to_room(self: Gc<Self>, room: &Room);
}

Let’s say we have a type Monster, and Monster implements Mob:

struct Monster { ... }
impl Mob for Monster { ... }

And now we see a call to hit_points() like so:

fn attack(victim: &mut Monster) {
    let hp = victim.hit_points();
    ...
}

Our method search algorithm above will proceed by searching for an implementation of Mob for the type &mut Monster. It won’t find any. It will auto-deref &mut Monster to yield the type Monster and search again. Now we find a match. Thus far, then, we have a single autoderef *victim, yielding the type Monster – but the method hit_points() actually expects a reference (&Monster) to be given to it, not a by-value Monster.

This is where self-type reconciliation steps in. The reconciliation process works by unwinding the adjustments and adding auto-refs:

RECONCILE(R, T, m):
    let E = the expected self type of m in trait T;

    // Case 1.
    if R <: E:
      we're done.

    // Case 2.
    if &R <: E:
      add an autoref adjustment, we're done.

    // Case 3.
    if &mut R <: E:
      adjust R for mutable borrow (if not possible, error).
      add a mut autoref adjustment, we're done.

    // Case 4.
    unwind one adjustment to yield R' (if not possible, error).
    return RECONCILE(R', T, m)

In this case, the expected self type E would be &Monster. We would first check for case 1: is Monster <: &Monster? It is not. We would then proceed to case 2. Is &Monster <: &Monster? It is, and hence we add an autoref. The final result then is that victim.hit_points() becomes transformed to the equivalent of (using UFCS notation) Mob::hit_points(&*victim).

To understand case 3, let’s look at a call to take_damage:

fn attack(victim: &mut Monster) {
    let hp = victim.hit_points(); // ...this is what we saw before
    let damage = hp / 10;         // 1/10 of current HP in damage
    victim.take_damage(damage);
    ...
}

As before, we would auto-deref once to find the type Monster. This time, though, the expected self type is &mut Monster. This means that both cases 1 and 2 fail and we wind up at case 3, the test for which succeeds. Now we get to this statement: “adjust R for mutable borrow”.

At issue here is the overloading of the deref operator that was discussed earlier. In this case, the end result we want is Mob::take_damage(&mut *victim, damage), which means that * is being used for a mutable borrow, which is indicated by the DerefMut trait. However, while doing the autoderef loop, we always searched for impls of the Deref trait, since we did not yet know which trait we wanted. [2] We need to patch this up. So this loop will check whether the type &mut Monster implements DerefMut, in addition to just Deref (it does).

This check for case 3 could fail if, e.g., victim had a type like Gc<Monster> or Rc<Monster>. You’d get a nice error message like “the type Rc does not support mutable borrows, and the method take_damage() requires a mutable receiver”.

We still have not seen an example of cases 1 or 4. Let’s use a slightly modified example:

fn flee_if_possible(victim: Gc<Monster>, room: &mut Room) {
  match room.find_random_exit() {
    None => { }
    Some(exit) => {
      victim.move_to_room(exit);
    }
  }
}

As before, we’ll start out with a type of Monster, but this time the method move_to_room() has a receiver type of Gc<Monster>. This doesn’t match cases 1, 2, or 3, so we proceed to case 4 and unwind by one adjustment. Since the most recent adjustment was to deref from Gc<Monster> to Monster, we are left with a type of Gc<Monster>. We now search again. This time, we match case 1. So the final result is Mob::move_to_room(victim, room). This last case is sort of interesting because we had to use the autoderef to find the method, but once resolution is complete we do not wind up dereferencing victim at all.

Finally, let’s see an error involving case 4. Imagine we modified the type of victim in our previous example to be &Monster and not Gc<Monster>:

fn flee_if_possible(victim: &Monster, room: &mut Room) {
  match room.find_random_exit() {
    None => { }
    Some(exit) => {
      victim.move_to_room(exit);
    }
  }
}

In this case, we would again unwind an adjustment, going from Monster to &Monster, but at that point we’d be stuck. There are no more adjustments to unwind and we never found a type Gc<Monster>. Therefore, we report an error like “the method move_to_room() expects a Gc<Monster> but was invoked with an &Monster”.

Inherent methods

Inherent methods can be “desugared” into traits by assuming a trait per struct or enum. Each impl like impl Foo is effectively an implementation of that trait, and all those traits are assumed to be imported and in scope.
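
As an illustrative sketch (the trait name here is invented; the compiler would never actually expose it), an inherent impl like:

struct Foo;

impl Foo {
    fn bar(&self) { ... }
}

would behave as if it were:

trait FooMethods {   // hypothetical, compiler-generated per-type trait
    fn bar(&self);
}

impl FooMethods for Foo {
    fn bar(&self) { ... }
}

with FooMethods implicitly in scope wherever Foo is accessible.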

Differences from today

Today’s algorithm isn’t really formally defined, but it works very differently from this one. For one thing, it is based purely on subtyping checks, and does not rely on the generic trait matching. This is a crucial limitation that prevents cases like those described in lack of backtracking from working. It also results in a lot of code duplication and a general mess.

Interaction with vtables and type inference

One of the goals of this proposal is to remove the hokey distinction between early and late resolution. The way that this will work now is that, as we execute, we’ll accumulate a list of pending trait obligations. Each obligation is the combination of a trait and set of types. It is called an obligation because, for the method to be correctly typed, we must eventually find an implementation of that trait for those types. Due to type inference, though, it may not be possible to do this right away, since some of the types may not yet be fully known.

The semantics of trait resolution mean that, at any point in time, the type checker is free to stop what it’s doing and try to resolve these pending obligations, so long as none of the input type parameters are unresolved (see below). If it is able to definitely match an impl, this may in turn affect some type variables which are output type parameters. The basic idea then is to always defer doing resolution until we either (a) encounter a point where we need more type information to proceed or (b) have finished checking the function. At those times, we can go ahead and try to do resolution. If, after type checking the function in its entirety, there are still obligations that cannot be definitely resolved, that’s an error.

Ensuring crate concatenation

To ensure crate concatenability, we must consider only the Self type parameter when deciding whether a trait has been implemented (more generally, we must know the precise set of input type parameters; I will cover an expanded set of rules for this in a subsequent RFC).

To see why this matters, imagine a scenario like this one:

trait Produce<R> {
    fn produce(&self) -> R;
}

Now imagine I have two crates, C and D. Crate C defines a type Vector and an implementation of Produce for it:

struct Vector;
impl Produce<int> for Vector { ... }

Now imagine crate C has some code like:

fn foo() {
    let mut v = None;
    loop {
        if v.is_some() {
            let x = v.get().produce(); // (*)
            ...
        } else {
            v = Some(Vector);
        }
    }
}

At the point (*) of the call to produce() we do not yet know the type of the receiver. But the inferencer might conclude that, since it can only see one impl of Produce for Vector, v must have type Vector and hence x must have the type int.

However, then we might find another crate D that adds a new impl:

struct Other;
struct Real;
impl Produce<Real> for Other { ... }

This definition passes the orphan check because at least one of the types (Real, in this case) in the impl is local to the current crate. But what does this mean for our previous inference result? In general, it looks risky to decide types based on the impls we can see, since there could always be more impls we can’t actually see.

It seems clear that this aggressive inference breaks the crate concatenation property. If we combined crates C and D into one crate, then inference would fail where it worked before.

If x were never used in any way that forces it to be an int, then it’s even plausible that the type Real would have been valid in some sense. So the inferencer is influencing program execution to some extent.

Implementation details

The “resolve” algorithm

The basis for the coherence check, method lookup, and vtable lookup algorithms is the same function, called RESOLVE. The basic idea is that it takes a set of obligations and tries to resolve them. The result is four sets:

  • CONFIRMED: Obligations for which we were able to definitely select a specific impl.
  • NO-IMPL: Obligations which we know can NEVER be satisfied, because there is no specific impl. The only reason that we can ever say this for certain is due to the orphan check.
  • DEFERRED: Obligations that we could not definitely link to an impl, perhaps because of insufficient type information.
  • UNDECIDABLE: Obligations that were not decidable due to excessive recursion.

In general, if we ever encounter a NO-IMPL or UNDECIDABLE, it’s probably an error. DEFERRED obligations are ok until we reach the end of the function. For details, please refer to the prototype.

Alternatives and downsides

Autoderef and ambiguity

The addition of a Deref trait makes autoderef complicated, because we may encounter situations where the smart pointer and its reference both implement a trait, and we cannot know what the user wanted.

The current rule just decides in favor of the smart pointer; this is somewhat unfortunate because it is likely to not be what the user wanted. It also means that adding methods to smart pointer types is a potentially breaking change. This is particularly problematic because we may want the smart pointer to implement a trait that requires the method in question!
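
Here is a contrived sketch of the problem (the trait and impls are invented for the example):

trait Get { fn get(&self) -> int; }

impl Get for Monster { ... }
impl<T> Get for Gc<T> { ... }

fn demo(m: Gc<Monster>) {
    m.get(); // today this resolves to the `Gc<T>` impl, not `Monster`'s
}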

An interesting thought would be to change this rule and say that we always autoderef first and only resolve the method against the innermost reference. Note that UFCS provides an explicit “opt-out” if this is not what was desired. This should also have the (beneficial, in my mind) effect of quelling the over-eager use of Deref for types that are not smart pointers.

This idea appeals to me but I think belongs in a separate RFC. It needs to be evaluated.

Footnotes

Note 1: when combining with DST, the in keyword goes first, and then any other qualifiers. For example, in unsized RHS or in type RHS etc. (The precise qualifier in use will depend on the DST proposal.)

Note 2: Note that the DerefMut<T> trait extends Deref<T>, so if a type supports mutable derefs, it must also support immutable derefs.

Note 3: The restriction that inputs must precede outputs is not strictly necessary. I added it to keep options open concerning associated types and so forth. See the Alternatives section, specifically the section on associated types.

Note 4: The prioritization of inherent methods could be reconsidered after DST has been implemented. It is currently needed to make impls like impl Trait for ~Trait work.

Note 5: The set of in-scope traits is currently defined as those that are imported by name. PR #37 proposes possible changes to this rule.

Note 6: In the section on autoderef and ambiguity, I discuss alternate rules that might allow us to lift the requirement that the receiver be named self.

Note 7: I am considering introducing mechanisms in a subsequent RFC that could be used to express mutual exclusion of traits.

Summary

Allow attributes on match arms.

Motivation

One sometimes wishes to annotate the arms of match statements with attributes, for example with conditional compilation #[cfg]s or with branch weights (the latter is the most important use).

For the conditional compilation, the work-around is duplicating the whole containing function with a #[cfg]. A case study is sfackler’s bindings to OpenSSL, where many distributions remove SSLv2 support, and so that portion of Rust bindings needs to be conditionally disabled. The obvious way to support the various different SSL versions is an enum

pub enum SslMethod {
    #[cfg(sslv2)]
    /// Only support the SSLv2 protocol
    Sslv2,
    /// Only support the SSLv3 protocol
    Sslv3,
    /// Only support the TLSv1 protocol
    Tlsv1,
    /// Support the SSLv2, SSLv3 and TLSv1 protocols
    Sslv23,
}

However, all matches can only mention Sslv2 when the cfg is active, i.e. the following is invalid:

fn name(method: SslMethod) -> &'static str {
    match method {
        Sslv2 => "SSLv2",
        Sslv3 => "SSLv3",
        _ => "..."
    }
}

A valid method would be to have two definitions: #[cfg(sslv2)] fn name(...) and #[cfg(not(sslv2))] fn name(...). The former has the Sslv2 arm, the latter does not. Clearly, this explodes exponentially for each additional cfg’d variant in an enum.
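
Spelled out, that workaround looks something like this:

#[cfg(sslv2)]
fn name(method: SslMethod) -> &'static str {
    match method {
        Sslv2 => "SSLv2",
        Sslv3 => "SSLv3",
        _ => "..."
    }
}

#[cfg(not(sslv2))]
fn name(method: SslMethod) -> &'static str {
    match method {
        Sslv3 => "SSLv3",
        _ => "..."
    }
}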

Branch weights would allow the careful micro-optimiser to inform the compiler that, for example, a certain match arm is rarely taken:

match foo {
    Common => {}
    #[cold]
    Rare => {}
}

Detailed design

Normal attribute syntax, applied to a whole match arm.

match x {
    #[attr]
    Thing => {}

    #[attr]
    Foo | Bar => {}

    #[attr]
    _ => {}
}

Alternatives

There aren’t really any general alternatives; one could probably hack around matching on conditional enum variants with some macros and helper functions to share as much code as possible; but in general this won’t work.

Unresolved questions

Nothing particularly.

Summary

Asserts are too expensive for release builds and mess up inlining. There must be a way to turn them off. I propose macros debug_assert! and assert!. For test cases, assert! should be used.

Motivation

Asserts are too expensive in release builds.

Detailed design

There should be two macros, debug_assert!(EXPR) and assert!(EXPR). In debug builds (without --cfg ndebug), debug_assert!() is the same as assert!(). In release builds (with --cfg ndebug), debug_assert!() compiles away to nothing. The definition of assert!() is if !EXPR { fail!("assertion failed ({}, {}): {}", file!(), line!(), stringify!(EXPR)) }.
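
One possible definition of debug_assert! under this scheme (a sketch; a real macro would forward format arguments too) is:

macro_rules! debug_assert(
    ($e:expr) => (
        // expands to nothing useful when --cfg ndebug is set
        if cfg!(not(ndebug)) { assert!($e) }
    )
)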

Alternatives

Other designs that have been considered are using debug_assert! in test cases and not providing assert!, but this doesn’t work with separate compilation.

The impact of not doing this is that assert! will be expensive, prompting people to write their own local debug_assert! macros, duplicating functionality that should have been in the standard library.

Unresolved questions

None.

Summary

The tilde (~) operator and type construction do not support allocators and therefore should be removed in favor of the box keyword and a language item for the type.

Motivation

  • There will be a unique pointer type in the standard library, Box<T,A> where A is an allocator. The ~T type syntax does not allow for custom allocators. Therefore, in order to keep ~T around while still supporting allocators, we would need to make it an alias for Box<T,Heap>. In the spirit of having one way to do things, it seems better to remove ~ entirely as a type notation.

  • ~EXPR and box EXPR are duplicate functionality; the former does not support allocators. Again in the spirit of having one and only one way to do things, I would like to remove ~EXPR.

  • Some people think ~ is confusing, as it is less self-documenting than Box.

  • ~ can encourage people to blindly add sigils attempting to get their code to compile instead of consulting the library documentation.

Drawbacks

~T may be seen as convenient sugar for a common pattern in some situations.

Detailed design

The ~EXPR production is removed from the language, and all such uses are converted into box.
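
For example:

let x = ~5;      // before: `~EXPR`
let x = box 5;   // after: `box EXPR`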

Add a lang item, box. That lang item will be defined in liballoc (NB: not libmetal/libmini, for bare-metal programming) as follows:

#[lang="box"]
pub struct Box<T,A=Heap>(*T);

All parts of the compiler treat instances of Box<T> identically to the way it treats ~T today.

The destructuring form for Box<T> will be box PAT, as follows:

let box(x) = box(10);
println!("{}", x); // prints 10

Alternatives

The other possible design here is to keep ~T as sugar. The impact of doing this would be that a common pattern would be terser, but I would like to not do this for the reasons stated in “Motivation” above.

Unresolved questions

The allocator design is not yet fully worked out.

It may be possible that unforeseen interactions will appear between the struct nature of Box<T> and the built-in nature of ~T when merged.

Summary

StrBuf should be renamed to String.

Motivation

Since StrBuf is so common, it would benefit from a more traditional name.

Drawbacks

It may be that StrBuf is a better name because it mirrors Java StringBuilder or C# StringBuffer. It may also be that String is confusing because of its similarity to &str.

Detailed design

Rename StrBuf to String.

Alternatives

The impact of not doing this would be that StrBuf would remain StrBuf.

Unresolved questions

None.

Summary

The rules about the places mod foo; can be used are tightened to only permit its use in a crate root and in mod.rs files, to ensure a more sane correspondence between module structure and file system hierarchy. Most notably, this prevents a common newbie error where a module is loaded multiple times, leading to surprising incompatibility between them. This proposal does not take away one’s ability to shoot oneself in the foot should one really desire to; it just removes almost all of the rope, leaving only mixed metaphors.

Motivation

It is a common newbie mistake to write things like this:

lib.rs:

mod foo;
pub mod bar;

foo.rs:

mod baz;

pub fn foo(_baz: baz::Baz) { }

bar.rs:

mod baz;
use foo::foo;

pub fn bar(baz: baz::Baz) {
    foo(baz)
}

baz.rs:

pub struct Baz;

This fails to compile because foo::foo() wants a foo::baz::Baz, while bar::bar() is giving it a bar::baz::Baz.

Such a situation, importing one file multiple times, is exceedingly rarely what the user actually wanted to do, but the present design allows it to occur without warning the user. The alterations contained herein ensure that there is no situation where such double loading can occur without deliberate intent via #[path = "….rs"].

Drawbacks

None known.

Detailed design

When a mod foo; statement is used, the compiler attempts to find a suitable file. At present, it just blindly looks for foo.rs or foo/mod.rs (relative to the file under parsing).

The new behaviour will only permit mod foo; if at least one of the following conditions holds:

  • The file under parsing is the crate root, or

  • The file under parsing is a mod.rs, or

  • #[path] is specified, e.g. #[path = "foo.rs"] mod foo;.

In layman’s terms, the file under parsing must “own” the directory, so to speak.
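
For example, a layout satisfying the new rules might look like this (a directory sketch; file names are illustrative):

lib.rs        // crate root: may declare `mod foo;` and `pub mod bar;`
foo/mod.rs    // owns foo/: may declare `mod baz;`, loading foo/baz.rs
foo/baz.rs
bar.rs        // not a crate root or mod.rs: `mod baz;` here is now an error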

Alternatives

The rationale is covered in the summary. This is the simplest repair to the current lack of structure; all alternatives would be more complex and invasive.

One non-invasive alternative is a lint which would detect double loads. This is less desirable than the solution discussed in this RFC as it doesn’t fix the underlying problem which can, fortunately, be fairly easily fixed.

Unresolved questions

None.

Summary

Temporaries live for the enclosing block when found in a let-binding, but today this only holds when the reference to the temporary is taken directly. This logic should be generalized so that the cleanup scope of any temporary whose lifetime ends up in the let-binding is extended in the same way.

For example, the following doesn’t work now, but should:

use std::os;

fn main() {
	let x = os::args().slice_from(1);
	println!("{}", x);
}

Motivation

Temporary lifetimes are a bit confusing right now. Sometimes you can keep references to them, and sometimes you get the dreaded “borrowed value does not live long enough” error. Sometimes one operation works but an equivalent operation errors, e.g. autoref of ~[T] to &[T] works but calling .as_slice() doesn’t. In general it feels as though the compiler is simply being overly restrictive when it decides the temporary doesn’t live long enough.

Drawbacks

I can’t think of any drawbacks.

Detailed design

When a reference to a temporary is passed to a function (either as a regular argument or as the self argument of a method), and the function returns a value with the same lifetime as the temporary reference, the lifetime of the temporary should be extended the same way it would if the function was not invoked.

For example, ~[T].as_slice() takes &'a self and returns &'a [T]. Calling as_slice() on a temporary of type ~[T] will implicitly take a reference &'a ~[T] and return a value &'a [T]. This return value should be considered to extend the lifetime of the ~[T] temporary just as taking an explicit reference (and skipping the method call) would.

Alternatives

Don’t do this. We live with the surprising borrowck errors and the ugly workarounds that look like

let x = os::args();
let x = x.slice_from(1);

Unresolved questions

None that I know of.

Summary

Rename *T to *const T, retain all other semantics of unsafe pointers.

Motivation

Currently the T* type in C is equivalent to *mut T in Rust, and the const T* type in C is equivalent to the *T type in Rust. Notably, the two most similar types, T* and *T, have different meanings in Rust and C, frequently causing confusion and often incorrect declarations of C functions.

If the compiler is ever to take advantage of the guarantees of declaring an FFI function as taking T* or const T* (in C), then it is crucial that the FFI declarations in Rust are faithful to the declaration in C.

The current mismatch between Rust’s unsafe pointer types and C’s pointer types is proving too error prone to realistically enable these optimizations at a future date. By renaming Rust’s unsafe pointers to closely match their C brethren, the likelihood of erroneously transcribing a signature is diminished.

Detailed design

This section will assume that the current unsafe pointer design is forgotten completely, and will explain the unsafe pointer design from scratch.

There are two unsafe pointers in Rust, *mut T and *const T. These two types are primarily useful when interacting with foreign functions through an FFI. The *mut T type is equivalent to the T* type in C, and the *const T type is equivalent to the const T* type in C.

The type &mut T will automatically coerce to *mut T in the normal locations that coercion occurs today. It will also be possible to explicitly cast with an as expression. Additionally, the &T type will automatically coerce to *const T. Note that &mut T will not automatically coerce to *const T.

The two unsafe pointer types will be freely castable among one another via as expressions, but no coercion will occur between the two. Additionally, values of type uint can be cast to unsafe pointers.
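
A sketch of these rules in action (function names are illustrative):

fn observe(p: *const int) { /* ... */ }
fn mutate(p: *mut int) { /* ... */ }

fn demo(r: &int, m: &mut int) {
    observe(r);                   // ok: `&T` coerces to `*const T`
    // observe(m);                // error: `&mut T` never coerces to `*const T`
    mutate(m);                    // ok: `&mut T` coerces to `*mut T` (and is consumed)

    let p = 0x1000u as *mut int;  // `uint` values cast to unsafe pointers
    let q = p as *const int;      // the two pointer types cast freely via `as`
}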

When is a coercion valid?

When coercing from &'a T to *const T, Rust will guarantee that the memory will remain valid for the lifetime 'a and the memory will be immutable up to memory stored in Unsafe<U>. It is the responsibility of the code working with the *const T to ensure that the pointer is only dereferenced within the lifetime 'a.

When coercing from &'a mut T to *mut T, Rust will guarantee that the memory will stay valid during 'a and that the memory will not be accessed during 'a. Additionally, Rust will consume the &'a mut T during the coercion. It is the responsibility of the code working with the *mut T to guarantee that the unsafe pointer is only dereferenced in the lifetime 'a, and that the memory is “valid again” after 'a.

Note: Rust will consume the &mut T in both implicit and explicit coercions.

The term “valid again” is used to represent that some types in Rust require internal invariants, such as Box<T> never being NULL. This is often a per-type invariant, so it is the responsibility of the unsafe code to uphold these invariants.

When is a safe cast valid?

Unsafe code can convert an unsafe pointer to a safe pointer via dereferencing inside of an unsafe block. This section will discuss when this action is valid.

When converting *mut T to &'a mut T, it must be guaranteed that the memory is initialized to start out with and that nobody will access the memory during 'a except for the converted pointer.

When converting *const T to &'a T, it must be guaranteed that the memory is initialized to start out with and that nobody will write to the pointer during 'a except for memory within Unsafe<U>.

Drawbacks

Today’s unsafe pointers design is consistent with the borrowed pointers types in Rust, using the mut qualifier for a mutable pointer, and no qualifier for an “immutable” pointer. Renaming the pointers would be divergence from this consistency, and would also introduce a keyword that is not used elsewhere in the language, const.

Alternatives

  • The current *mut T type could be removed entirely, leaving only one unsafe pointer type, *T. This will not allow FFI calls to take advantage of the const T* optimizations on the caller side of the function. Additionally, this may not accurately express to the programmer what an FFI API is intending to do. Note, however, that other variants of unsafe pointer types could likely be added in the future in a backwards-compatible way.

  • More effort could be invested in auto-generating bindings, and hand-generating bindings could be greatly discouraged. This would maintain consistency with Rust pointer types, and it would usually allow APIs to be transcribed accurately by automating the process. It is unknown how realistic this solution is, as it is not yet implemented. There may still be confusion as well that *T is not equivalent to C’s T*.

Unresolved questions

  • How much can the compiler help out when coercing &mut T to *mut T? As previously stated, the source pointer &mut T is consumed during the coercion (it’s already a linear type), but this can lead to some unexpected results:

    extern {
        fn bar(a: *mut int, b: *mut int);
    }
    
    fn foo(a: &mut int) {
        unsafe {
            bar(&mut *a, &mut *a);
        }
    }
    

    This code is invalid because it is creating two copies of the same mutable pointer, and the external function is unaware that the two pointers alias. The rule that the programmer has violated is that the pointer *mut T is only dereferenced during the lifetime of the &'a mut T pointer. For example, here are the lifetimes spelled out:

    fn foo(a: &mut int) {
        unsafe {
            bar(&mut *a, &mut *a);
    //          |-----|  |-----|
    //             |        |
    //             |       Lifetime of second argument
    //            Lifetime of first argument
        }
    }
    

    Here it can be seen that it is impossible for the C code to safely dereference the pointers passed in because lifetimes don’t extend into the function call itself. The compiler could, in this case, extend the lifetime of a coerced pointer to follow the otherwise applied temporary rules for expressions.

    In the example above, the compiler’s temporary lifetime rules would cause the first coercion to last for the entire lifetime of the call to bar, thereby disallowing the second reborrow because it has an overlapping lifetime with the first.

    It is currently an open question how necessary this sort of treatment will be, and this lifetime treatment will likely require a new RFC.

  • Will all pointer types in C need to have their own keyword in Rust for representation in the FFI?

  • To what degree will the compiler emit metadata about FFI function calls in order to take advantage of optimizations on the caller side of a function call? Do the theoretical wins justify the scope of this redesign? There is currently no concrete data measuring what benefits could be gained from informing optimization passes about const vs non-const pointers.

Summary

Add ASCII byte literals and ASCII byte string literals to the language, similar to the existing (Unicode) character and string literals. Before the RFC process was in place, this was discussed in #4334.

Motivation

Programs dealing with text usually should use Unicode, represented in Rust by the str and char types. In some cases however, a program may be dealing with bytes that cannot be interpreted as Unicode as a whole, but still contain ASCII-compatible bits.

For example, the HTTP protocol was originally defined as Latin-1, but in practice different pieces of the same request or response can use different encodings. The PDF file format is mostly ASCII, but can contain UTF-16 strings and raw binary data.

There is a precedent at least in Python, which has both Unicode and byte strings.

Drawbacks

The language becomes slightly more complex, although that complexity should be limited to the parser.

Detailed design

Using terminology from the Reference Manual:

Extend the syntax of expressions and patterns to add byte literals of type u8 and byte string literals of type &'static [u8] (or [u8], post-DST). They are identical to the existing character and string literals, except that:

  • They are prefixed with a b (for “binary”), to distinguish them. This is similar to the r prefix for raw strings.
  • Unescaped code points in the body must be in the ASCII range: U+0000 to U+007F.
  • '\x5c' 'u' hex_digit 4 and '\x5c' 'U' hex_digit 8 escapes are not allowed.
  • '\x5c' 'x' hex_digit 2 escapes represent a single byte rather than a code point. (They are the only way to express a non-ASCII byte.)

Examples: b'A' == 65u8, b'\t' == 9u8, b'\xFF' == 0xFFu8, b"A\t\xFF" == [65u8, 9, 0xFF]

Assuming a buffer of type &[u8]:

match buffer[i] {
    b'a' .. b'z' => { /* ... */ }
    c => { /* ... */ }
}

Alternatives

Status quo: patterns must use numeric literals for ASCII values, or (for a single byte, not a byte string) cast to char

match buffer[i] {
    c @ 0x61 .. 0x7A => { /* ... */ }
    c => { /* ... */ }
}
match buffer[i] as char {
    // `c` is of the wrong type!
    c @ 'a' .. 'z' => { /* ... */ }
    c => { /* ... */ }
}

Another option is to change the syntax so that macros such as bytes!() can be used in patterns, and add a byte!() macro:

match buffer[i] {
    c @ byte!('a') .. byte!('z') => { /* ... */ }
    c => { /* ... */ }
}

This RFC was written to align the syntax with Python, but there could be many variations such as using a different prefix (maybe a for ASCII), or using a suffix instead (maybe u8, as in integer literals).

The code points from syntax could be encoded as UTF-8 rather than being mapped to bytes of the same value, but assuming UTF-8 is not always appropriate when working with bytes.

See also previous discussion in #4334.

Unresolved questions

Should there be “raw byte string” literals? E.g. pdf_file.write(rb"<< /Title (FizzBuzz \(Part one\)) >>")

Should control characters (U+0000 to U+001F) be disallowed in syntax? This should be consistent across all kinds of literals.

Should the bytes!() macro be removed in favor of this?

Summary

Allow block expressions in statics, as long as they only contain items and a trailing const expression.

Example:

static FOO: uint = { 100 };
static BAR: fn() -> int = {
    fn hidden() -> int {
        42
    }
    hidden
};

Motivation

This change allows defining items as part of a const expression, and evaluating to a value using them. This is mainly useful for macros, as it allows hiding complex machinery behind something that expands to a value, but also enables using unsafe {} blocks in a static initializer.

Real life examples include the regex! macro, which currently expands to a block containing a function definition and a value, and would be usable in a static with this.

Another example would be to expose a static reference to a fixed memory address by dereferencing a raw pointer in a const expr, which is useful in embedded and kernel code, but requires an unsafe block to do.
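
A sketch of that embedded use case (the address, name, and type are made up):

static DEVICE_REGISTER: &'static u32 = unsafe { &*(0x4000_0000u as *const u32) };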

The outcome of this is that one additional expression type becomes valid as a const expression, with semantics that are a strict subset of its equivalent in a function.

Drawbacks

Block expressions in a function are usually just used to run arbitrary code before evaluating to a value. Allowing them in statics without allowing code execution might be confusing.

Detailed design

A branch implementing this feature can be found at https://github.com/Kimundi/rust/tree/const_block.

It mainly involves the following changes:

  • const check now allows block expressions in statics:
    • All statements that are not item declarations lead to a compile error.
  • trans and const eval are made aware of block expressions:
    • A trailing expression gets evaluated as a constant.
    • A missing trailing expression is treated as a unit value.
  • trans is made to recurse into static expressions to generate possible items.

Things like privacy/reachability of definitions inside a static block are already handled more generally at other places, as the situation is very similar to a regular function.

The branch also includes tests that show how this feature works in practice.

Alternatives

Because this feature is a straightforward extension of the valid const expressions, it has a very minimal impact on the language, with most alternative ways of enabling the same benefits being more complex.

For example, an expression AST node that can include items but is only usable from procedural macros could be added.

Not having this feature would not prevent anything interesting from getting implemented, but it would lead to less nice looking solutions.

For example, a comparison between static-supporting regex! with and without this feature:

// With this feature, you can just initialize a static:
static R: Regex = regex!("[0-9]");

// Without it, the static needs to be generated by the
// macro itself, alongside all generated items:
regex! {
    static R = "[0-9]";
}

Unresolved questions

None so far.

Summary

Leave structs with unspecified layout by default like enums, for optimisation purposes. Use something like #[repr(C)] to expose C compatible layout.

Motivation

The members of a struct are always laid out in memory in the order in which they were specified, e.g.

struct A {
    x: u8,
    y: u64,
    z: i8,
    w: i64,
}

will put the u8 first in memory, then the u64, the i8 and lastly the i64. Due to the alignment requirements of various types, padding is often required to ensure the members start at an appropriately aligned byte. Hence the above struct is not 1 + 8 + 1 + 8 == 18 bytes, but rather 1 + 7 + 8 + 1 + 7 + 8 == 32 bytes, since it is laid out like

#[packed] // no automatically inserted padding
struct AFull {
    x: u8,
    _padding1: [u8, .. 7],
    y: u64,
    z: i8,
    _padding2: [u8, .. 7],
    w: i64
}

If the fields were reordered to

struct B {
    y: u64,
    w: i64,

    x: u8,
    z: i8
}

then the struct is (strictly) only 18 bytes (but the alignment requirements of u64 forces it to take up 24).
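
On a typical 64-bit target, the sizes claimed above can be checked with std::mem::size_of (a quick sketch using the A and B definitions from this section):

use std::mem::size_of;

fn main() {
    assert_eq!(size_of::<A>(), 32); // declaration order, with padding
    assert_eq!(size_of::<B>(), 24); // 18 bytes of data, rounded up to u64's alignment of 8
}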

Having an undefined layout does allow for possible security improvements, like randomising struct fields, but this can trivially be done with a syntax extension that can be attached to a struct to reorder the fields in the AST itself. That said, there may be benefits from being able to randomise all structs in a program automatically/for testing, effectively fuzzing code (especially unsafe code).

Notably, Rust’s enums already have undefined layout, and provide the #[repr] attribute to control layout more precisely (specifically, selecting the size of the discriminant).

Drawbacks

Forgetting to add #[repr(C)] for a struct intended for FFI use can cause surprising bugs and crashes. There is already a lint for FFI use of enums without a #[repr(...)] attribute, so this can be extended to include structs.

Having an unspecified (or otherwise non-C-compatible) layout by default makes interfacing with C slightly harder. A particularly bad case is passing to C a struct from an upstream library that doesn’t have a repr(C) attribute. This situation seems relatively similar to one where an upstream library type is missing an implementation of a core trait e.g. Hash if one wishes to use it as a hashmap key.

It is slightly better if structs had a specified-but-C-incompatible layout, and one has control over the C interface, because then one can manually arrange the fields in the C definition to match the Rust order.

That said, this scenario requires:

  • Needing to pass a Rust struct into C/FFI code, where that FFI code actually needs to use things from the struct, rather than just pass it through, e.g., back into a Rust callback.
  • The Rust struct is defined upstream & out of your control, and not intended for use with C code.
  • The C/FFI code is designed by someone other than that vendor, or otherwise not designed for use with the Rust struct (or else it is a bug in the vendor’s library that the Rust struct can’t be sanely passed to C).

Detailed design

A struct declaration like

struct Foo {
    x: T,
    y: U,
    ...
}

has no fixed layout, that is, a compiler can choose whichever order of fields it prefers.

A fixed layout can be selected with the #[repr] attribute

#[repr(C)]
struct Foo {
    x: T,
    y: U,
    ...
}

This will force a struct to be laid out like the equivalent definition in C.

There would be a lint for the use of non-repr(C) structs in related FFI definitions, for example:

struct UnspecifiedLayout {
   // ...
}

#[repr(C)]
struct CLayout {
   // ...
}


extern {
    fn foo(x: UnspecifiedLayout); // warning: use of non-FFI-safe struct in extern declaration

    fn bar(x: CLayout); // no warning
}

extern "C" fn foo(x: UnspecifiedLayout) { } // warning: use of non-FFI-safe struct in function with C abi.

Alternatives

  • Have non-C layouts opt-in, via #[repr(smallest)] and #[repr(random)] (or similar).
  • Have layout defined, but not declaration order (like Java(?)), for example, from largest field to smallest, so u8 fields get placed last, and [u8, .. 1000000] fields get placed first. The #[repr] attributes would still allow for selecting declaration-order layout.

Unresolved questions

  • How does this interact with binary compatibility of dynamic libraries?
  • How does this interact with DST, where some fields have to be at the end of a struct? (Just always lay-out unsized fields last? (i.e. after monomorphisation if a field was originally marked Sized? then it needs to be last).)

Summary

Allow macro expansion in patterns, i.e.

match x {
    my_macro!() => 1,
    _ => 2,
}

Motivation

This is consistent with allowing macros in expressions etc. It’s also a year-old open issue.

I have implemented this feature already and I’m using it to condense some ubiquitous patterns in the HTML parser I’m writing. This makes the code more concise and easier to cross-reference with the spec.
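
As an illustrative sketch (the macro and helper names are invented, not taken from the parser), a pattern macro might condense a recurring range pattern like so:

// the binder is passed in, so the expanded pattern is a single pattern
macro_rules! small_digit ( ($c:ident) => ($c @ '0' .. '3') )

match ch {
    small_digit!(c) => use_digit(c),
    _ => { }
}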

Drawbacks / alternatives

A macro invocation in this position:

match x {
    my_macro!()

could potentially expand to any of three different syntactic elements:

  • A pattern, i.e. Foo(x)
  • The left side of a match arm, i.e. Foo(x) | Bar(x) if x > 5
  • An entire match arm, i.e. Foo(x) | Bar(x) if x > 5 => 1

This RFC proposes only the first of these, but the others would be more useful in some cases. Supporting multiple of the above would be significantly more complex.

Another alternative is to use a macro for the entire match expression, e.g.

my_match!(x {
    my_new_syntax => 1,
    _ => 2,
})

This doesn’t involve any language changes, but requires writing a complicated procedural macro. (My sustained attempts to do things like this with MBE macros have all failed.) Perhaps I could alleviate some of the pain with a library for writing match-like macros, or better use of the existing parser in libsyntax.

The my_match! approach is also not very composable.

Another small drawback: rustdoc can’t document the name of a function argument which is produced by a pattern macro.

Unresolved questions

None, as far as I know.

Summary

Generalize the #[macro_registrar] feature so it can register other kinds of compiler plugins.

Motivation

I want to implement loadable lints and use them for project-specific static analysis passes in Servo. Landing this first will allow more evolution of the plugin system without breaking source compatibility for existing users.

Detailed design

To register a procedural macro in current Rust:

use syntax::ast::Name;
use syntax::parse::token;
use syntax::ext::base::{SyntaxExtension, BasicMacroExpander, NormalTT};

#[macro_registrar]
pub fn macro_registrar(register: |Name, SyntaxExtension|) {
    register(token::intern("named_entities"),
        NormalTT(box BasicMacroExpander {
            expander: named_entities::expand,
            span: None
        },
        None));
}

I propose an interface like

use syntax::parse::token;
use syntax::ext::base::{BasicMacroExpander, NormalTT};

use rustc::plugin::Registry;

#[plugin_registrar]
pub fn plugin_registrar(reg: &mut Registry) {
    reg.register_macro(token::intern("named_entities"),
        NormalTT(box BasicMacroExpander {
            expander: named_entities::expand,
            span: None
        },
        None));
}

Then the struct Registry could provide additional methods such as register_lint as those features are implemented.

It could also provide convenience methods:

use rustc::plugin::Registry;

#[plugin_registrar]
pub fn plugin_registrar(reg: &mut Registry) {
    reg.register_simple_macro("named_entities", named_entities::expand);
}

phase(syntax) becomes phase(plugin), with the former as a deprecated synonym that warns. This is to avoid silent breakage of the very common #[phase(syntax)] extern crate log.
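
For example:

// Today (continues to compile, but warns):
#[phase(syntax)]
extern crate log;

// After this RFC:
#[phase(plugin)]
extern crate log;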

We only need one phase of loading plugin crates, even though the plugins we load may be used at different points (or not at all).

Drawbacks

Breaking change for existing procedural macros.

More moving parts.

Registry is provided by librustc, because it will have methods for registering lints and other librustc things. This means that syntax extensions must link librustc, when before they only needed libsyntax (but could link librustc anyway if desired). This was discussed on the RFC PR and the Rust PR and on IRC.

#![feature(macro_registrar)] becomes unknown, contradicting a comment in feature_gate.rs:

This list can never shrink, it may only be expanded (in order to prevent old programs from failing to compile)

Since when do we ensure that old programs will compile? ;) The #[macro_registrar] attribute wouldn’t work anyway.

Alternatives

We could add #[lint_registrar] etc. alongside #[macro_registrar]. This seems like it will produce more duplicated effort all around. It doesn’t provide convenience methods, and it won’t support API evolution as well.

We could support the old #[macro_registrar] by injecting an adapter shim. This is significant extra work to support a feature with no stability guarantee.

Unresolved questions

Naming bikeshed.

What set of convenience methods should we provide?

Summary

Bounds on trait objects should be separated with +.

Motivation

With DST there is an ambiguity between the following two forms:

trait X {
    fn f(foo: b);
}

and

trait X {
    fn f(Trait: Share);
}

See Rust issue #12778 for details.

Also, since kinds are now just built-in traits, it makes sense to treat a bounded trait object as just a combination of traits. This could be extended in the future to allow objects consisting of arbitrary trait combinations.

Detailed design

Instead of : in trait bounds for first-class traits (e.g. &Trait:Share + Send), we use + (e.g. &Trait + Share + Send).

+ will not be permitted in as without parentheses. This will be done via a special restriction in the type grammar: the special TYPE production following as will be the same as the regular TYPE production, with the exception that it does not accept + as a binary operator.

Drawbacks

  • It may be that + is ugly.

  • Adding a restriction complicates the type grammar more than I would prefer, but the community backlash against the previous proposal was overwhelming.

Alternatives

The impact of not doing this is that the inconsistencies and ambiguities above remain.

Unresolved questions

Where does the 'static bound fit into all this?

Summary

Allow users to load custom lints into rustc, similar to loadable syntax extensions.

Motivation

There are many possibilities for user-defined static checking:

  • Enforcing correct usage of Servo’s JS-managed pointers
  • lilyball’s use case: checking that rust-lua functions which call longjmp never have destructors on stack variables
  • Enforcing a company or project style guide
  • Detecting common misuses of a library, e.g. expensive or non-idiomatic constructs
  • In cryptographic code, annotating which variables contain secrets and then forbidding their use in variable-time operations or memory addressing

Existing project-specific static checkers include:

  • A Clang plugin that detects misuse of GLib and GObject
  • A GCC plugin (written in Python!) that detects misuse of the CPython extension API
  • Sparse, which checks Linux kernel code for issues such as mixing up userspace and kernel pointers (often exploitable for privilege escalation)

We should make it easy to build such tools and integrate them with an existing Rust project.

Detailed design

In rustc::lint (which today is rustc::middle::lint):

pub struct Lint {
    /// An identifier for the lint, written with underscores,
    /// e.g. "unused_imports".
    pub name: &'static str,

    /// Default level for the lint.
    pub default_level: Level,

    /// Description of the lint or the issue it detects,
    /// e.g. "imports that are never used"
    pub desc: &'static str,
}

#[macro_export]
macro_rules! declare_lint ( ($name:ident, $level:ident, $desc:expr) => (
    static $name: &'static ::rustc::lint::Lint
        = &::rustc::lint::Lint {
            name: stringify!($name),
            default_level: ::rustc::lint::$level,
            desc: $desc,
        };
))

pub type LintArray = &'static [&'static Lint];

#[macro_export]
macro_rules! lint_array ( ($( $lint:expr ),*) => (
    {
        static array: LintArray = &[ $( $lint ),* ];
        array
    }
))

pub trait LintPass {
    fn get_lints(&self) -> LintArray;

    fn check_item(&mut self, cx: &Context, it: &ast::Item) { }
    fn check_expr(&mut self, cx: &Context, e: &ast::Expr) { }
    ...
}

pub type LintPassObject = Box<LintPass: 'static>;

To define a lint:

#![crate_id="lipogram"]
#![crate_type="dylib"]
#![feature(phase, plugin_registrar)]

extern crate syntax;

// Load rustc as a plugin to get macros
#[phase(plugin, link)]
extern crate rustc;

use syntax::ast;
use syntax::parse::token;
use rustc::lint::{Context, LintPass, LintPassObject, LintArray};
use rustc::plugin::Registry;

declare_lint!(letter_e, Warn, "forbid use of the letter 'e'")

struct Lipogram;

impl LintPass for Lipogram {
    fn get_lints(&self) -> LintArray {
        lint_array!(letter_e)
    }

    fn check_item(&mut self, cx: &Context, it: &ast::Item) {
        let name = token::get_ident(it.ident);
        if name.get().contains_char('e') || name.get().contains_char('E') {
            cx.span_lint(letter_e, it.span, "item name contains the letter 'e'");
        }
    }
}

#[plugin_registrar]
pub fn plugin_registrar(reg: &mut Registry) {
    reg.register_lint_pass(box Lipogram as LintPassObject);
}

A pass which defines multiple lints will have e.g. lint_array!(deprecated, experimental, unstable).

To use a lint when compiling another crate:

#![feature(phase)]

#[phase(plugin)]
extern crate lipogram;

fn hello() { }

fn main() { hello() }

And you will get

test.rs:6:1: 6:15 warning: item name contains the letter 'e', #[warn(letter_e)] on by default
test.rs:6 fn hello() { }
          ^~~~~~~~~~~~~~

Internally, lints are identified by the address of a static Lint. This has a number of benefits:

  • The linker takes care of assigning unique IDs, even with dynamically loaded plugins.
  • A typo writing a lint ID is usually a compiler error, unlike with string IDs.
  • The ability to output a given lint is controlled by the usual visibility mechanism. Lints defined within rustc use the same infrastructure and will simply export their Lints if other parts of the compiler need to output those lints.
  • IDs are small and easy to hash.
  • It’s easy to go from an ID to name, description, etc.

User-defined lints are controlled through the usual mechanism of attributes and the -A -W -D -F flags to rustc. User-defined lints will show up in -W help if a crate filename is also provided; otherwise we append a message suggesting to re-run with a crate filename.

See also the full demo.

Drawbacks

This increases the amount of code in rustc to implement lints, although it makes each individual lint much easier to understand in isolation.

Loadable lints produce more coupling of user code to rustc internals (with no official stability guarantee, of course).

There’s no scoping / namespacing of the lint name strings used by attributes and compiler flags. Attempting to register a lint with a duplicate name is an error at registration time.

The use of &'static means that lint plugins can’t dynamically generate the set of lints based on some external resource.

Alternatives

We could provide a more generic mechanism for user-defined AST visitors. This could support other use cases like code transformation. But it would be harder to use, and harder to integrate with the lint infrastructure.

It would be nice to magically find all static Lints in a crate, so we don’t need get_lints. Is this worth adding another attribute and another crate metadata type? The plugin::Registry mechanism was meant to avoid such a proliferation of metadata types, but it’s not as declarative as I would like.

Unresolved questions

Do we provide guarantees about visit order for a lint, or the order of multiple lints defined in the same crate? Some lints may require multiple passes.

Should we enforce (while running lints) that each lint printed with span_lint was registered by the corresponding LintPass? Users who particularly care can already wrap lints in modules and use visibility to enforce this statically.

Should we separate registering a lint pass from initializing / constructing the value implementing LintPass? This would support a future where a single rustc invocation can compile multiple crates and needs to reset lint state.

Summary

Simplify Rust’s lexical syntax to make tooling easier to use and easier to define.

Motivation

Rust’s lexer does a lot of work. It un-escapes escape sequences in string and character literals, and parses numeric literals of 4 different bases. It also strips comments, which is sensible, but can be undesirable for pretty printing or syntax highlighting without hacks. Since many characters are allowed in strings both escaped and raw (tabs, newlines, and unicode characters come to mind), after lexing it is impossible to tell if a given character was escaped or unescaped in the source, making the lexer difficult to test against a model.

Detailed design

The following (antlr4) grammar completely describes the proposed lexical syntax:

lexer grammar RustLexer;

/* import Xidstart, Xidcont; */

/* Expression-operator symbols */

EQ      : '=' ;
LT      : '<' ;
LE      : '<=' ;
EQEQ    : '==' ;
NE      : '!=' ;
GE      : '>=' ;
GT      : '>' ;
ANDAND  : '&&' ;
OROR    : '||' ;
NOT     : '!' ;
TILDE   : '~' ;
PLUS    : '+' ;
MINUS   : '-' ;
STAR    : '*' ;
SLASH   : '/' ;
PERCENT : '%' ;
CARET   : '^' ;
AND     : '&' ;
OR      : '|' ;
SHL     : '<<' ;
SHR     : '>>' ;

BINOP
    : PLUS
    | MINUS
    | STAR
    | SLASH
    | PERCENT
    | CARET
    | AND
    | OR
    | SHL
    | SHR
    ;

BINOPEQ : BINOP EQ ;

/* "Structural symbols" */

AT         : '@' ;
DOT        : '.' ;
DOTDOT     : '..' ;
DOTDOTDOT  : '...' ;
COMMA      : ',' ;
SEMI       : ';' ;
COLON      : ':' ;
MOD_SEP    : '::' ;
RARROW     : '->' ;
FAT_ARROW  : '=>' ;
LPAREN     : '(' ;
RPAREN     : ')' ;
LBRACKET   : '[' ;
RBRACKET   : ']' ;
LBRACE     : '{' ;
RBRACE     : '}' ;
POUND      : '#';
DOLLAR     : '$' ;
UNDERSCORE : '_' ;

KEYWORD : STRICT_KEYWORD | RESERVED_KEYWORD ;

fragment STRICT_KEYWORD
  : 'as'
  | 'box'
  | 'break'
  | 'continue'
  | 'crate'
  | 'else'
  | 'enum'
  | 'extern'
  | 'fn'
  | 'for'
  | 'if'
  | 'impl'
  | 'in'
  | 'let'
  | 'loop'
  | 'match'
  | 'mod'
  | 'mut'
  | 'once'
  | 'proc'
  | 'pub'
  | 'ref'
  | 'return'
  | 'self'
  | 'static'
  | 'struct'
  | 'super'
  | 'trait'
  | 'true'
  | 'type'
  | 'unsafe'
  | 'use'
  | 'virtual'
  | 'while'
  ;

fragment RESERVED_KEYWORD
  : 'alignof'
  | 'be'
  | 'const'
  | 'do'
  | 'offsetof'
  | 'priv'
  | 'pure'
  | 'sizeof'
  | 'typeof'
  | 'unsized'
  | 'yield'
  ;

// Literals

fragment HEXIT
  : [0-9a-fA-F]
  ;

fragment CHAR_ESCAPE
  : [nrt\\'"0]
  | [xX] HEXIT HEXIT
  | 'u' HEXIT HEXIT HEXIT HEXIT
  | 'U' HEXIT HEXIT HEXIT HEXIT HEXIT HEXIT HEXIT HEXIT
  ;

LIT_CHAR
  : '\'' ( '\\' CHAR_ESCAPE | ~[\\'\n\t\r] ) '\''
  ;

INT_SUFFIX
  : 'i'
  | 'i8'
  | 'i16'
  | 'i32'
  | 'i64'
  | 'u'
  | 'u8'
  | 'u16'
  | 'u32'
  | 'u64'
  ;

LIT_INTEGER
  : [0-9][0-9_]* INT_SUFFIX?
  | '0b' [01][01_]* INT_SUFFIX?
  | '0o' [0-7][0-7_]* INT_SUFFIX?
  | '0x' [0-9a-fA-F][0-9a-fA-F_]* INT_SUFFIX?
  ;

FLOAT_SUFFIX
  : 'f32'
  | 'f64'
  | 'f128'
  ;

LIT_FLOAT
  : [0-9][0-9_]* ('.' | ('.' [0-9][0-9_]*)? ([eE] [-+]? [0-9][0-9_]*)? FLOAT_SUFFIX?)
  ;

LIT_STR
  : '"' ('\\\n' | '\\\r\n' | '\\' CHAR_ESCAPE | .)*? '"'
  ;

/* this is a bit messy */

fragment LIT_STR_RAW_INNER
  : '"' .*? '"'
  | LIT_STR_RAW_INNER2
  ;

fragment LIT_STR_RAW_INNER2
  : POUND LIT_STR_RAW_INNER POUND
  ;

LIT_STR_RAW
  : 'r' LIT_STR_RAW_INNER
  ;

fragment BLOCK_COMMENT
  : '/*' (BLOCK_COMMENT | .)*? '*/'
  ;

COMMENT
  : '//' ~[\r\n]*
  | BLOCK_COMMENT
  ;

IDENT : XID_start XID_continue* ;

LIFETIME : '\'' IDENT ;

WHITESPACE : [ \r\n\t]+ ;

There are a few notable changes from today’s lexical syntax:

  • Non-doc comments are not stripped. To compensate, when encountering a COMMENT token the parser can check itself whether or not it’s a doc comment. This can be done with a simple regex: (//(/[^/]|!)|/\*(\*[^*]|!)).
  • Numeric literals are not differentiated based on presence of type suffix, nor are they converted from binary/octal/hexadecimal to decimal, nor are underscores stripped. This can be done trivially in the parser.
  • Character escapes are not unescaped. That is, if you write '\x20', this lexer will give you LIT_CHAR('\x20') rather than LIT_CHAR(' '). The same applies to string literals.

The output of the lexer then becomes annotated spans – which part of the document corresponds to which token type. Even whitespace is categorized.
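
As an illustration, a line such as let x = 0x1F; // count could lex into spans like the following (a hypothetical rendering; this RFC does not prescribe an output format):

0..3    KEYWORD      let
3..4    WHITESPACE
4..5    IDENT        x
5..6    WHITESPACE
6..7    EQ           =
7..8    WHITESPACE
8..12   LIT_INTEGER  0x1F
12..13  SEMI         ;
13..14  WHITESPACE
14..22  COMMENT      // count

Note that the integer is reported verbatim as 0x1F, not converted to 31, and the comment survives as a token.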

Drawbacks

Including comments and whitespace in the token stream is very non-traditional and not strictly necessary.

Summary

Do not identify struct literals by searching for :. Instead, define a sub-category of expressions which excludes struct literals, and re-define for, if, and other expressions which take an expression followed by a block (or a non-terminal which can be replaced by a block) to take this sub-category instead of all expressions.

Motivation

Parsing by looking ahead is fragile - it could easily be broken if we allow : to appear elsewhere in types (e.g., type ascription) or if we change struct literals to not require the : (e.g., if we allow empty structs to be written with braces, or if we allow struct literals to unify field names to local variable names, as has been suggested in the past and which we currently do for struct literal patterns). We should also be able to give better error messages today if users make these mistakes. More worryingly, we might come up with some language feature in the future which is not predictable now and which breaks with the current system.

Hopefully, it is pretty rare to use struct literals in these positions, so there should not be much fallout. Any problems can be easily fixed by assigning the struct literal into a variable. However, this is a backwards incompatible change, so it should block 1.0.
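
For illustration, here is a sketch with a hypothetical struct S of how affected code would be rewritten:

#[deriving(PartialEq)]
struct S { a: int }

fn f(s: S) -> bool {
    // Rejected under this RFC: the `{` after `S` would otherwise be
    // taken as the start of the if-block.
    // if S { a: 1 } == s { return true; }

    // Accepted: assign the struct literal into a variable first.
    let t = S { a: 1 };
    if t == s { return true; }
    false
}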

Detailed design

Here is a simplified version of a subset of Rust’s abstract syntax:

e      ::= x
         | e `.` f
         | name `{` (x `:` e)+ `}`
         | block
         | `for` e `in` e block
         | `if` e block (`else` block)?
         | `|` pattern* `|` e
         | ...
block  ::=  `{` (e;)* e? `}`

Parsing this grammar is ambiguous since x cannot be distinguished from name, so e block in the for expression is ambiguous with the struct literal expression. We currently solve this by using lookahead to find a : token in the struct literal.

I propose the following adjustment:

e      ::= e'
         | name `{` (x `:` e)+ `}`
         | `|` pattern* `|` e
         | ...
e'     ::= x
         | e `.` f
         | block
         | `for` e `in` e' block
         | `if` e' block (`else` block)?
         | `|` pattern* `|` e'
         | ...
block  ::=  `{` (e;)* e? `}`

e' is just e without struct literal expressions. We use e' instead of e wherever e is followed directly by block or any other non-terminal which may have block as its first terminal (after any possible expansions).

For any expressions where a sub-expression is the final lexical element (closures in the subset above, but also unary and binary operations), we require two versions of the meta-expression - the normal one in e and a version with e' for the final element in e'.

Implementation would be simple: we just add a flag to parser::restriction called RESTRICT_BLOCK or something, which puts us into a mode which reflects e'. We would drop into this mode when parsing e' position expressions and drop out of it for all but the last sub-expression of an expression.

Drawbacks

It makes the formal grammar and parsing a little more complicated (although it is simpler in terms of needing less lookahead and avoiding a special case).

Alternatives

Don’t do this.

Allow all expressions but greedily parse non-terminals in these positions, e.g., for N {} {} would be parsed as for (N {}) {}. This seems worse because I believe it will be much rarer to have structs in these positions than to have an identifier in the first position, followed by two blocks (i.e., parse as (for N {}) {}).

Unresolved questions

Do we need to expose this distinction anywhere outside of the parser? E.g., macros?

Summary

Remove localization features from format!, and change the set of escapes accepted by format strings. The plural and select methods would be removed, # would no longer need to be escaped, and {{/}} would become escapes for { and }, respectively.

Motivation

Localization is difficult to implement correctly, and it will likely be done in an external library rather than the standard library. After talking with others much more familiar with localization, it has come to light that the ad-hoc “localization support” in our format strings is woefully inadequate for most real use cases of localization.

Instead of having a half-baked unused system adding complexity to the compiler and libraries, the support for this functionality would be removed from format strings.

Detailed design

The primary localization features that format! supports today are the plural and select methods inside of format strings. These methods are choices made at format-time based on the input arguments of how to format a string. This functionality would be removed from the compiler entirely.

As fallout of this change, the # special character, a back reference to the argument being formatted, would no longer be necessary. In that case, this character no longer needs an escape sequence.

The new grammar for format strings would be as follows:

format_string := <text> [ format <text> ] *
format := '{' [ argument ] [ ':' format_spec ] '}'
argument := integer | identifier

format_spec := [[fill]align][sign]['#'][0][width]['.' precision][type]
fill := character
align := '<' | '>'
sign := '+' | '-'
width := count
precision := count | '*'
type := identifier | ''
count := parameter | integer
parameter := integer '$'

The current syntax can be found at http://doc.rust-lang.org/std/fmt/#syntax for comparison with the proposed grammar above.

Choosing a new escape sequence

Upon landing, there was a significant amount of discussion about the escape sequence that would be used in format strings. Some context can be found on some old pull requests, and the current escape mechanism has been the source of much confusion. With the removal of localization methods, and namely nested format directives, it is possible to reconsider the choices of escaping again.

The only two characters that need escaping in format strings are { and }. One of the more appealing syntaxes for escaping was to double the character to represent the character itself. This would mean that {{ is an escape for a { character, while }} would be an escape for a } character.

Adopting this scheme would avoid clashing with Rust’s string literal escapes. There would be no “double escape” problem. More details on this can be found in the comments of an old PR.
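
A sketch of the proposed behavior (assuming format! is otherwise unchanged):

// `{{` and `}}` are the only escapes; `#` no longer needs one.
assert_eq!(format!("{{{}}}", 5), "{5}".to_string());
assert_eq!(format!("100#"), "100#".to_string());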

Drawbacks

The localization methods of select/plural are genuinely used for applications that do not involve localization. For example, the compiler and rustdoc often use plural to easily create plural messages. Removing this functionality from format strings would impose a burden: such messages would likely require dynamically allocating a string at runtime or defining two separate format strings.

Additionally, changing the syntax of format strings is quite an invasive change. Raw string literals serve as a good use case for format strings that must escape the { and } characters. The current system is arguably good enough to get by with for today.

Alternatives

The major localization approach explored has been l20n, which has shown itself to be fairly incompatible with the way format strings work today. Different localization systems, however, have not been explored. Systems such as gettext would be able to leverage format strings quite well, but it was claimed that gettext for localization is inadequate for modern use-cases.

It is also an unexplored possibility whether the current format string syntax could be leveraged by l20n. It is unlikely that time will be allocated to polish off a localization library before 1.0, and it is currently seen as undesirable to have a half-baked system in the libraries rather than a first-class, well designed system.

Unresolved questions

  • Should localization support be left in std::fmt as a “poor man’s” implementation for those to use as they see fit?

Summary

Add a partial_cmp method to PartialOrd, analogous to cmp in Ord.

Motivation

The Ord::cmp method is useful when working with ordered values. When the exact ordering relationship between two values is required, cmp is potentially more efficient than computing both a > b and then a < b, and it makes the code clearer as well.

I feel that in the case of partial orderings, an equivalent to cmp is even more important. I’ve found that it’s very easy to accidentally make assumptions that only hold true in the total order case (for example !(a < b) => a >= b). Explicitly matching against the possible results of the comparison helps keep these assumptions from creeping in.
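
Floating point is the classic partial order; under this proposal, code can handle incomparable values explicitly (a sketch):

fn describe(a: f64, b: f64) -> &'static str {
    match a.partial_cmp(&b) {
        Some(Less)    => "a < b",
        Some(Equal)   => "a == b",
        Some(Greater) => "a > b",
        None          => "incomparable (e.g. a NaN operand)",
    }
}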

In addition, the current default implementation setup is a bit strange, as implementations in the partial equality trait assume total equality. This currently makes it easier to incorrectly implement PartialOrd for types that do not have a total ordering, and if PartialOrd is separated from Ord in a way similar to this proposal, the default implementations for PartialOrd will need to be removed and an implementation of the trait will require four repetitive implementations of the required methods.

Detailed design

Add a method to PartialOrd, changing the default implementations of the other methods:

pub trait PartialOrd {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering>;

    fn lt(&self, other: &Self) -> bool {
        match self.partial_cmp(other) {
            Some(Less) => true,
            _ => false,
        }
    }

    fn le(&self, other: &Self) -> bool {
        match self.partial_cmp(other) {
            Some(Less) | Some(Equal) => true,
            _ => false,
        }
    }

    fn gt(&self, other: &Self) -> bool {
        match self.partial_cmp(other) {
            Some(Greater) => true,
            _ => false,
        }
    }

    fn ge(&self, other: &Self) -> bool {
        match self.partial_cmp(other) {
            Some(Greater) | Some(Equal) => true,
            _ => false,
        }
    }
}

Since almost all ordered types have a total ordering, the implementation of partial_cmp is trivial in most cases:

impl PartialOrd for Foo {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

This can be done automatically if/when RFC #48 or something like it is accepted and implemented.

Drawbacks

This does add some complexity to PartialOrd. In addition, the more commonly used methods (lt, etc.) may become more expensive than they would normally be if their implementations call into partial_cmp.

Alternatives

We could invert the default implementations and have a default implementation of partial_cmp in terms of lt and gt. This may slightly simplify things in current Rust, but it makes the default implementation less efficient than it should be. It would also require more work to implement PartialOrd once the currently planned cmp reform has finished as noted above.

partial_cmp could just be called cmp, but it seems like UFCS would need to be implemented first for that to be workable.

Unresolved questions

We may want to add something similar to PartialEq as well. I don’t know what it would be called, though (maybe partial_eq?):

// I don't feel great about these variant names, but `Equal` is already taken
// by `Ordering` which is in the same module.
pub enum Equality {
    AreEqual,
    AreUnequal,
}

pub trait PartialEq {
    fn partial_eq(&self, other: &Self) -> Option<Equality>;

    fn eq(&self, other: &Self) -> bool {
        match self.partial_eq(other) {
            Some(AreEqual) => true,
            _ => false,
        }
    }

    fn neq(&self, other: &Self) -> bool {
        match self.partial_eq(other) {
            Some(AreUnequal) => true,
            _ => false,
        }
    }
}

Summary

Rust currently forbids pattern guards on match arms with move-bound variables. Allowing them would increase the applicability of pattern guards.

Motivation

Currently, if you attempt to use guards on a match arm with a move-bound variable, e.g.

struct A { a: Box<int> }

fn foo(n: int) {
    let x = A { a: box n };
    let y = match x {
        A { a: v } if *v == 42 => v,
        _ => box 0
    };
}

you get an error:

test.rs:6:16: 6:17 error: cannot bind by-move into a pattern guard
test.rs:6         A { a: v } if *v == 42 => v,
                         ^

This should be permitted in cases where the guard only accesses the moved value by reference or copies out of derived paths.

This allows for succinct code with less pattern matching duplication and a minimum number of copies at runtime. The lack of this feature was encountered by @kmcallister when developing Servo’s new HTML 5 parser.

Detailed design

This change requires all occurrences of move-bound pattern variables in the guard to be treated as paths to the values being matched before they are moved, rather than the moved values themselves. Any moves of matched values into the bound variables would occur on the control flow edge between the guard and the arm’s expression. There would be no changes to the handling of reference-bound pattern variables.

The arm would be treated as its own nested scope with respect to borrows, so that pattern-bound variables would be able to be borrowed and dereferenced freely in the guard, but these borrows would not be in scope in the arm’s expression. Since the guard dominates the expression and the move into the pattern-bound variable, moves of either the match’s head expression or any pattern-bound variables in the guard would trigger an error.

The following examples would be accepted:

struct A { a: Box<int> }

impl A {
    fn get(&self) -> int { *self.a }
}

fn foo(n: int) {
    let x = A { a: box n };
    let y = match x {
        A { a: v } if *v == 42 => v,
        _ => box 0
    };
}

fn bar(n: int) {
    let x = A { a: box n };
    let y = match x {
        A { a: v } if x.get() == 42 => v,
        _ => box 0
    };
}

fn baz(n: int) {
    let x = A { a: box n };
    let y = match x {
        A { a: v } if *v.clone() == 42 => v,
        _ => box 0
    };
}

This example would be rejected, due to a double move of v:

struct A { a: Box<int> }

fn foo(n: int) {
    let x = A { a: box n };
    let y = match x {
        A { a: v } if { drop(v); true } => v,
        _ => box 0
    };
}

This example would also be rejected, even though there is no use of the move-bound variable in the first arm’s expression, since the move into the bound variable would be moving the same value a second time:

enum VecWrapper { A(Vec<int>) }

fn foo(x: VecWrapper) -> uint {
    match x {
        A(v) if { drop(v); false } => 1,
        A(v) => v.len()
    }
}

There are issues with mutation of the bound values, but that is true without the changes proposed by this RFC, e.g. Rust issue #14684. The general approach to resolving that issue should also work with these proposed changes.

This would be implemented behind a feature(bind_by_move_pattern_guards) gate until we have enough experience with the feature to remove the feature gate.

Drawbacks

The current error message makes it more clear what the user is doing wrong, but if this change is made the error message for an invalid use of this feature (even if it were accidental) would indicate a use of a moved value, which might be more confusing.

This might be moderately difficult to implement in rustc.

Alternatives

As far as I am aware, the only workarounds for the lack of this feature are to manually expand the control flow of the guard (which can quickly get messy) or use unnecessary copies.
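
For example, the copying workaround for the motivating example might look like this (a sketch):

struct A { a: Box<int> }

fn foo(n: int) {
    let x = A { a: box n };
    let y = match x {
        // Bind by reference in the guard, then pay for a clone in the arm.
        A { a: ref v } if **v == 42 => (*v).clone(),
        _ => box 0
    };
}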

Unresolved questions

This has nontrivial interaction with guards in arbitrary patterns as proposed in #99.

Summary

  • Remove the crate_id attribute and knowledge of versions from rustc.
  • Add a #[crate_name] attribute similar to the old #[crate_id] attribute
  • Filenames will no longer have versions, nor will symbols
  • A new flag, --extern, will be used to override searching for external crates
  • A new flag, -C metadata=foo, used when hashing symbols

Motivation

The intent of CrateId and its support has become unclear over time as the initial impetus, rustpkg, has faded. With cargo on the horizon, doubts have been cast on the compiler’s support for dealing with crate versions and friends. The goal of this RFC is to simplify the compiler’s knowledge about the identity of a crate to allow cargo to do all the necessary heavy lifting.

This new crate identification is designed to not compromise the usability of the compiler independent of cargo. Additionally, all use cases supported today with a CrateId should still be supported.

Detailed design

A new #[crate_name] attribute will be accepted by the compiler, which is the equivalent of the old #[crate_id] attribute, except without the “crate id” support. This new attribute takes a string value describing a valid crate name.

A crate name must be a valid rust identifier with the exception of allowing the - character after the first character.

#![crate_name = "foo"]
#![crate_type = "lib"]

pub fn foo() { /* ... */ }

Naming library filenames

Currently, rustc creates filenames for library following this pattern:

lib<name>-<version>-<hash>.rlib

The current scheme defines <hash> to be the hash of the CrateId value. This naming scheme achieves a number of goals:

  • Libraries of the same name can exist next to one another if they have different versions.
  • Libraries of the same name and version, but from different sources, can exist next to one another due to having different hashes.
  • Rust libraries can have very privileged names such as core and std without worrying about polluting the global namespace of other system libraries.

One drawback of this scheme is that the output filename of the compiler is unknown due to the <hash> component. One must query rustc itself to determine the name of the library output.

Under this new scheme, the new output filenames by the compiler would be:

lib<name>.rlib

Note that both the <version> and the <hash> are missing by default. The <version> was removed because the compiler no longer knows about the version, and the <hash> was removed to make the output filename predictable.

The three original goals can still be satisfied with this simplified naming scheme. As explained in the next section, the compiler’s “glob pattern” when searching for a crate named foo will be libfoo*.rlib, which will help rationalize some of these conclusions.

  • Libraries of the same name can exist next to one another because they can be manually renamed to have extra data after the libfoo, such as the version.
  • Libraries of the same name and version, but different source, can also exist by modifying what comes after libfoo, such as including a hash.
  • Rust does not need to occupy a privileged namespace as the default rust installation would include hashes in all the filenames as necessary. More on this later.

Additionally, with a predictable filename output external tooling should be easier to write.

Loading crates

The goal of the crate loading phase of the compiler is to map a set of extern crate statements to (dylib,rlib) pairs that are present on the filesystem. To do this, the current system matches dependencies via the CrateId syntax:

extern crate json = "super-fast-json#0.1.0";

In today’s compiler, this directive indicates that a filename of the form libsuper-fast-json-0.1.0-<hash>.rlib must be found to be a candidate. Further checking happens once a candidate is found to ensure that it is indeed a rust library.

Concerns have been raised that this key point of dependency management is where the compiler is doing work that is not necessarily its prerogative. In a cargo-driven world, versions are primarily managed in an external manifest, in addition to doing other various actions such as renaming packages at compile time.

One solution would be to add more version management to the compiler, but this is seen as the compiler delving too far outside what it was initially tasked to do. With this in mind, this is the new proposal for the extern crate syntax:

extern crate json = "super-fast-json";

Notably, the CrateId is removed entirely, along with the version and path associated with it. The string value of the extern crate directive is still optional (defaulting to the identifier), and the string must be a valid crate name (as defined above).

The compiler’s searching and file matching logic would be altered to only match crates based on name. If two versions of a crate are found, the compiler will unconditionally emit an error. It will be up to the user to move the two libraries on the filesystem and control the -L flags to the compiler to enable disambiguation.

This implies that when the compiler is searching for the crate named foo, it will search all of the lookup paths for files which match the pattern libfoo*.{so,rlib}. This is likely to return many false positives, but they will be easily weeded out once the compiler realizes that there is no metadata in the library.

This scheme is strictly less powerful than the previous, but it moves a good deal of logic from the compiler to cargo.

Manually specifying dependencies

Cargo is often seen as “expert mode” in its usage of the compiler. Cargo will always have prior knowledge about what exact versions of a library will be used for any particular dependency, as well as where the outputs are located.

If the compiler provided no support for loading crates beyond matching filenames, it would limit many of cargo’s use cases. For example, cargo could not compile a crate with two different versions of an upstream crate. Additionally, cargo could not substitute libfast-json for libslow-json at compile time (assuming they have the same API).

To accommodate an “expert mode” in rustc, the compiler will grow a new command line flag of the form:

--extern json=path/to/libjson

This directive will indicate that the library json can be found at path/to/libjson. The file extension is not specified, and it is assumed that the rlib/dylib pair are located next to one another at this location (libjson is the file stem).

This will enable cargo to drive how the compiler loads crates by manually specifying where files are located and exactly what corresponds to what.
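
For example, cargo might invoke the compiler along these lines (paths here are hypothetical):

rustc src/main.rs --extern json=target/deps/libjson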

Symbol mangling

Today, mangled symbols contain the version number at the end of the symbol itself. This was originally intended to tie into Linux’s ability to version symbols, but in retrospect this is generally viewed as over-ambitious, as the support is not currently there and it does not work on Windows or OS X.

Symbols would no longer contain the version number anywhere within them. The hash at the end of each symbol would only include the crate name and metadata from the command line. Metadata from the command line will be passed via a new command line flag, -C metadata=foo, which specifies a string to hash.
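
For instance, two builds of the same crate name could be disambiguated by hashing different metadata strings (a hypothetical invocation):

rustc json.rs -C metadata=super-fast-json-0.1.0

Two libraries that differ only in this string would then end up with distinct symbol hashes.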

The standard rust distribution

The standard distribution would continue to put hashes in filenames manually because the libraries are intended to occupy a privileged space on the system. The build system would manually move a file after it was compiled to the correct destination filename.

Drawbacks

  • The compiler is able to operate fairly well independently of cargo today, and this scheme would hamstring the compiler by limiting the number of “it just works” use cases. If cargo is not being used, build systems will likely have to start using --extern to specify dependencies if name conflicts or version conflicts arise between crates.

  • This scheme still has redundancy in the list of dependencies with the external cargo manifest. The source code would no longer list versions, but the cargo manifest will contain the same identifier for each dependency that the source code will contain.

Alternatives

  • The compiler could go in the opposite direction of this proposal, enhancing extern crate instead of simplifying it. The compiler could learn about things like version ranges and friends, while still maintaining flags to fine tune its behavior. It is unclear whether this increase in complexity will be paired with a large enough gain in usability of the compiler independent of cargo.

Unresolved questions

  • An implementation for the more advanced features of cargo does not currently exist, so it is unknown whether --extern will be powerful enough for cargo to satisfy all of its use cases.

  • Are the string literal parts of extern crate justified? Allowing a string literal just for the - character may be overkill.

Summary

Index should be split into Index and IndexMut.

Motivation

Currently, the Index trait is not suitable for most array indexing tasks. The slice functionality cannot be replicated using it, and as a result the new Vec has to use .get() and .get_mut() methods.

Additionally, this simply follows the Deref/DerefMut split that has been implemented for a while.

Detailed design

We split Index into two traits (borrowed from @nikomatsakis):

// self[element] -- if used as rvalue, implicitly a deref of the result
trait Index<E,R> {
    fn index<'a>(&'a self, element: &E) -> &'a R;
}

// &mut self[element] -- when used as a mutable lvalue
trait IndexMut<E,R> {
    fn index_mut<'a>(&'a mut self, element: &E) -> &'a mut R;
}
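
As a sketch, a vector-like wrapper (a hypothetical type) would implement the pair like so, making w[i] usable both as an rvalue and as a mutable lvalue:

struct Wrapper { data: Vec<int> }

impl Index<uint, int> for Wrapper {
    fn index<'a>(&'a self, element: &uint) -> &'a int {
        self.data.get(*element)
    }
}

impl IndexMut<uint, int> for Wrapper {
    fn index_mut<'a>(&'a mut self, element: &uint) -> &'a mut int {
        self.data.get_mut(*element)
    }
}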

Drawbacks

  • The number of lang items increases.

  • This design doesn’t support moving out of a vector-like object. This can be added backwards compatibly.

  • This design doesn’t support hash tables because there is no assignment operator. This can be added backwards compatibly.

Alternatives

The impact of not doing this is that the [] notation will not be available to Vec.

Unresolved questions

None that I’m aware of.

Summary

Remove the coercion from Box<T> to &mut T from the language.

Motivation

Currently, the coercion from Box<T> to &mut T can be a hazard because it can lead to surprising mutation where it was not expected.

Detailed design

The coercion between Box<T> and &mut T should be removed.

Note that methods that take &mut self can still be called on values of type Box<T> without any special referencing or dereferencing. That is because the semantics of auto-deref and auto-ref conspire to make it work: the types unify after one autoderef followed by one autoref.
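
A small sketch of the distinction:

struct Counter { n: int }

impl Counter {
    fn bump(&mut self) { self.n += 1; }
}

fn main() {
    let mut c = box Counter { n: 0 };
    c.bump();                      // still fine: autoderef then autoref
    // let r: &mut Counter = c;    // the removed coercion: now an error
    let r: &mut Counter = &mut *c; // an explicit borrow remains available
    r.bump();
}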

Drawbacks

Borrowing from Box<T> to &mut T may be convenient.

Alternatives

An alternative is to remove &T coercions as well, but this was decided against as they are convenient.

The impact of not doing this is that the coercion will remain.

Unresolved questions

None.

Summary

  • Convert function call a(b, ..., z) into an overloadable operator via the traits Fn<A,R>, FnShare<A,R>, and FnOnce<A,R>, where A is a tuple (B, ..., Z) of the types B...Z of the arguments b...z, and R is the return type. The three traits differ in their self argument (&mut self vs &self vs self).
  • Remove the proc expression form and type.
  • Remove the closure types (though the form lives on as syntactic sugar, see below).
  • Modify closure expressions to permit specifying by-reference vs by-value capture and the receiver type:
    • Specifying by-reference vs by-value closures:
      • ref |...| expr indicates a closure that captures upvars from the environment by reference. This is what closures do today and the behavior will remain unchanged, other than requiring an explicit keyword.
      • |...| expr will therefore indicate a closure that captures upvars from the environment by value. As usual, this is either a copy or move depending on whether the type of the upvar implements Copy.
    • Specifying receiver mode (orthogonal to capture mode above):
      • |a, b, c| expr is equivalent to |&mut: a, b, c| expr
      • |&mut: ...| expr indicates that the closure implements Fn
      • |&: ...| expr indicates that the closure implements FnShare
      • |: a, b, c| expr indicates that the closure implements FnOnce.
  • Add syntactic sugar where |T1, T2| -> R1 is translated to a reference to one of the fn traits as follows:
    • |T1, ..., Tn| -> R is translated to Fn<(T1, ..., Tn), R>
    • |&mut: T1, ..., Tn| -> R is translated to Fn<(T1, ..., Tn), R>
    • |&: T1, ..., Tn| -> R is translated to FnShare<(T1, ..., Tn), R>
    • |: T1, ..., Tn| -> R is translated to FnOnce<(T1, ..., Tn), R>

One aspect of closures that this RFC does not describe is that we must permit trait references to be universally quantified over regions, as closures are today. This change is sketched below under Unresolved questions, and the details will come in a forthcoming RFC.

Motivation

Over time we have observed a very large number of possible use cases for closures. The goal of this RFC is to create a unified closure model that encompasses all of these use cases.

Specific goals (explained in more detail below):

  1. Give control over inlining to users.
  2. Support closures that bind by reference and closures that bind by value.
  3. Support different means of accessing the closure environment, corresponding to self, &self, and &mut self methods.

As a side benefit, though not a direct goal, the RFC reduces the size/complexity of the language’s core type system by unifying closures and traits.

The core idea: unifying closures and traits

The core idea of the RFC is to unify closures, procs, and traits. There are a number of reasons to do this. First, it simplifies the language, because closures, procs, and traits already served similar roles and there was sometimes a lack of clarity about which would be the appropriate choice. However, in addition, the unification offers increased expressiveness and power, because traits are a more generic model that gives users more control over optimization.

The basic idea is that function calls become an overridable operator. Therefore, an expression like a(...) will desugar into an invocation of one of the following traits:

trait Fn<A,R> {
    fn call(&mut self, args: A) -> R;
}

trait FnShare<A,R> {
    fn call_share(&self, args: A) -> R;
}

trait FnOnce<A,R> {
    fn call_once(self, args: A) -> R;
}

Essentially, a(b, c, d) becomes sugar for one of the following:

Fn::call(&mut a, (b, c, d))
FnShare::call_share(&a, (b, c, d))
FnOnce::call_once(a, (b, c, d))

To integrate with this, closure expressions are then translated into a fresh struct that implements one of those three traits. The precise trait is currently indicated using explicit syntax but may eventually be inferred.
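
As an illustration, a by-reference closure like ref |v: int| v + a, capturing a: int from the environment, might translate to something like the following (the struct and its names are invented for exposition):

struct AnonClosure<'env> {
    a: &'env int, // upvar `a`, captured by reference
}

impl<'env> Fn<(int,), int> for AnonClosure<'env> {
    fn call(&mut self, args: (int,)) -> int {
        let (v,) = args;
        v + *self.a
    }
}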

This change gives user control over virtual vs static dispatch. This works in the same way as generic types today:

fn foo(x: &mut Fn<(int,),int>) -> int {
    x(2) // virtual dispatch
}

fn foo<F:Fn<(int,),int>>(x: &mut F) -> int {
    x(2) // static dispatch
}

The change also permits returning closures, which is not currently possible (the example relies on the proposed impl syntax from rust-lang/rfcs#105):

fn foo(x: impl Fn<(int,),int>) -> impl Fn<(int,),int> {
    |v| x(v * 2)
}

Basically, in this design there is nothing special about a closure. Closure expressions are simply a convenient way to generate a struct that implements a suitable Fn trait.

Bind by reference vs bind by value

When creating a closure, it is now possible to specify whether the closure should capture variables from its environment (“upvars”) by reference or by value. The distinction is indicated using the leading keyword ref:

|| foo(a, b)      // captures `a` and `b` by value

ref || foo(a, b)  // captures `a` and `b` by reference, as today

Reasons to bind by value

Bind by value is useful when creating closures that will escape from the stack frame that created them, such as task bodies (spawn(|| ...)) or combinators. It is also useful for moving values out of a closure, though it should be possible to enable that with bind by reference as well in the future.

Reasons to bind by reference

Bind by reference is useful for any case where the closure is known not to escape the creating stack frame. This frequently occurs when using closures to encapsulate common control-flow patterns:

map.insert_or_update_with(key, value, || ...)
opt_val.unwrap_or_else(|| ...)

In such cases, the closure frequently wishes to read or modify local variables on the enclosing stack frame. Generally speaking, then, such closures should capture variables by-reference – that is, they should store a reference to the variable in the creating stack frame, rather than copying the value out. Using a reference allows the closure to mutate the variables in place and also avoids moving values that are simply read temporarily.

The vast majority of closures in use today are (or should be) “by reference” closures. The only exceptions are those closures that wish to “move out” from an upvar (where we commonly use the so-called “option dance” today). In fact, even those closures could be “by reference” closures, but we will have to extend the inference to selectively identify those variables that must be moved and take those “by value”.

Detailed design

Closure expression syntax

Closure expressions will have the following form (using EBNF notation, where [] denotes optional things and {} denotes a comma-separated list):

CLOSURE = ['ref'] '|' [SELF] {ARG} '|' ['->' TYPE] EXPR
SELF    =  ':' | '&' ':' | '&' 'mut' ':'
ARG     = ID [ ':' TYPE ]

The optional keyword ref is used to indicate whether this closure captures by reference or by value.

Closures are always translated into a fresh struct type with one field per upvar. In a by-value closure, the types of these fields will be the same as the types of the corresponding upvars (modulo &mut reborrows, see below). In a by-reference closure, the types of these fields will be a suitable reference (&, &mut, etc) to the variables being borrowed.

By-value closures

The default form for a closure is by-value. This implies that all upvars which are referenced are copied/moved into the closure as appropriate. There is one special case: if the type of the value to be moved is &mut, we will “reborrow” the value when it is copied into the closure. That is, given an upvar x of type &'a mut T, the value which is actually captured will have type &'b mut T where 'b <= 'a. This rule is consistent with our general treatment of &mut, which is to aggressively reborrow wherever possible; moreover, this rule cannot introduce additional compilation errors, it can only make more programs successfully typecheck.
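
A sketch of the reborrow rule in action (with_closure is a hypothetical helper):

fn with_closure<F: Fn<(int,), ()>>(mut f: F) { f(1); }

fn demo(dest: &mut Vec<int>) {
    // By-value closure: `dest: &mut Vec<int>` is reborrowed rather than
    // moved into the closure's environment.
    with_closure(|v| dest.push(v));
    dest.push(0); // still legal once the closure is gone
}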

By-reference closures

A by-reference closure is a convenience form in which values used in the closure are converted into references before being captured. By-reference closures are always rewritable into by-value closures if desired, but the rewrite can often be cumbersome and annoying.

Here is a (rather artificial) example of a by-reference closure in use:

let in_vec: Vec<int> = ...;
let mut out_vec: Vec<int> = Vec::new();
let opt_int: Option<int> = ...;

opt_int.map(ref |v| {
    out_vec.push(v);
    in_vec.fold(v, |a, &b| a + b)
});

This could be rewritten into a by-value closure as follows:

let in_vec: Vec<int> = ...;
let mut out_vec: Vec<int> = Vec::new();
let opt_int: Option<int> = ...;

opt_int.map({
    let in_vec = &in_vec;
    let out_vec = &mut out_vec;
    |v| {
        out_vec.push(v);
        in_vec.fold(v, |a, &b| a + b)
    }
})

In this case, the closure closed over two variables, in_vec and out_vec. As you can see, the compiler automatically infers, for each variable, how it should be borrowed and inserts the appropriate capture.

In the body of a ref closure, the upvars continue to have the same type as they did in the outer environment. For example, the type of in_vec in the above example is always Vec<int>, whether or not it appears as part of a ref closure. This is not only convenient, it is required to make it possible to infer whether each variable is borrowed as an &T or &mut T borrow.

Note that there are some cases where the compiler internally employs a form of borrow that is not available in the core language, &uniq. This borrow does not permit aliasing (like &mut) but does not require mutability (like &). This is required to allow transparent closing over of &mut pointers as described in this blog post.

Evolutionary note: It is possible to evolve by-reference closures in the future in a backwards compatible way. The goal would be to cause more programs to type-check by default. Two possible extensions follow:

  • Detect when values are moved and hence should be taken by value rather than by reference. (This is only applicable to once closures.)
  • Detect when it is only necessary to borrow a sub-path. Imagine a closure like ref || use(&context.variable_map). Currently, this closure will borrow context, even though it only uses the field variable_map. As a result, it is sometimes necessary to rewrite the closure to have the form {let v = &context.variable_map; || use(v)}. In the future, however, we could extend the inference so that rather than borrowing context to create the closure, we would borrow context.variable_map directly.

Closure sugar in trait references

The current type for closures, |T1, T2| -> R, will be repurposed as syntactic sugar for a reference to the appropriate Fn trait. This shorthand can be used in any place that a trait reference is appropriate. The full type will be written as one of the following:

<'a...'z> |T1...Tn|: K -> R
<'a...'z> |&mut: T1...Tn|: K -> R
<'a...'z> |&: T1...Tn|: K -> R
<'a...'z> |: T1...Tn|: K -> R

Each of which would then be translated into the following trait references, respectively:

<'a...'z> Fn<(T1...Tn), R> + K
<'a...'z> Fn<(T1...Tn), R> + K
<'a...'z> FnShare<(T1...Tn), R> + K
<'a...'z> FnOnce<(T1...Tn), R> + K

Note that the bound lifetimes 'a...'z are not in scope for the bound K.
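
For example, under this sugar the following two generic bounds would be equivalent:

fn apply_sugared<F: |int| -> int>(mut f: F, v: int) -> int { f(v) }
fn apply_desugared<F: Fn<(int,), int>>(mut f: F, v: int) -> int { f(v) }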

Drawbacks

This model is more complex than the existing model in some respects (but the existing model does not serve the full set of desired use cases).

Alternatives

There is one aspect of the design that is still under active discussion:

Introduce a more generic sugar. It was proposed that we could introduce Trait(A, B) -> C as syntactic sugar for Trait<(A,B),C> rather than retaining the form |A,B| -> C. This is appealing but removes the correspondence between the expression form and the corresponding type. One (somewhat open) question is whether there will be additional traits that mirror fn types that might benefit from this more general sugar.

Tweak trait names. In conjunction with the above, there is some concern that the type name fn(A) -> B for a bare function with no environment is too similar to Fn(A) -> B for a closure. To remedy that, we could change the name of the trait to something like Closure(A) -> B (naturally the other traits would be renamed to match).

Then there are a large number of permutations and options that were largely rejected:

Only offer by-value closures. We tried this and found it required a lot of painful rewrites of perfectly reasonable code.

Make by-reference closures the default. We felt this was inconsistent with the language as a whole, which tends to make “by value” the default (e.g., x vs ref x in patterns, x vs &x in expressions, etc.).

Use a capture clause syntax that borrows individual variables. “By value” closures combined with let statements already serve this role. Simply specifying “by-reference closure” also gives us room to continue improving inference in the future in a backwards compatible way. Moreover, the syntactic space around closures expressions is extremely constrained and we were unable to find a satisfactory syntax, particularly when combined with self-type annotations. Finally, if we decide we do want the ability to have “mostly by-value” closures, we can easily extend the current syntax by writing something like (ref x, ref mut y) || ... etc.

Retain the proc expression form. It was proposed that we could retain the proc expression form to specify a by-value closure and have || expressions be by-reference. Frankly, the main objection to this is that nobody likes the proc keyword.

Use variadic generics in place of tuple arguments. While variadic generics are an interesting addition in their own right, we’d prefer not to introduce a dependency between closures and variadic generics. Having all arguments be placed into a tuple is also a simpler model overall. Moreover, native ABIs on platforms of interest treat a structure passed by value identically to distinct arguments. Finally, given that trait calls have the “Rust” ABI, which is not specified, we can always tweak the rules if necessary (though there are advantages for tooling when the Rust ABI closely matches the native ABI).

Use inference to determine the self type of a closure rather than an annotation. We retain this option for future expansion, but it is not clear whether we can always infer the self type of a closure. Moreover, using inference rather than a default raises the question of what to do for a type like |int| -> uint, where inference is not possible.

Default to something other than &mut self. It is our belief that this is the most common use case for closures.

Transition plan

TBD. pcwalton is working furiously as we speak.

Unresolved questions

What relationship should there be between the closure traits? On the one hand, there is clearly a relationship between the traits. For example, given a FnShare, one can easily implement Fn:

impl<A,R,T:FnShare<A,R>> Fn<A,R> for T {
    fn call(&mut self, args: A) -> R {
        (&*self).call_share(args)
    }
}

Similarly, given a Fn or FnShare, you can implement FnOnce. From this, one might derive a subtrait relationship:

trait FnOnce { ... }
trait Fn : FnOnce { ... }
trait FnShare : Fn { ... }

Employing this relationship, however, would require that any manual implementation of FnShare or Fn must implement adapters for the other two traits, since a subtrait cannot provide a specialized default of supertrait methods (yet?). On the other hand, having no relationship between the traits limits reuse, at least without employing explicit adapters.

Other alternatives that have been proposed to address the problem:

  • Use impls to implement the fn traits in terms of one another, similar to what is shown above. The problem is that we would need to implement FnOnce both for all T where T:Fn and for all T where T:FnShare. This will yield coherence errors unless we extend the language with a means to declare traits as mutually exclusive (which might be valuable, but no such system has currently been proposed nor agreed upon).

  • Have the compiler implement multiple traits for a single closure. As with supertraits, this would require manual implementations to implement multiple traits. It would also require generic users to write T:Fn+FnMut or else employ an explicit adapter. On the other hand, it preserves the “one method per trait” rule described below.

Can we optimize away the trait vtable? The runtime representation of a reference &Trait to a trait object (and hence, under this proposal, closures as well) is a pair of pointers (data, vtable). It has been proposed that we might be able to optimize this representation to (data, fnptr) so long as Trait has a single function. This slightly improves the performance of invoking the function as one need not indirect through the vtable. The actual implications of this on performance are unclear, but it might be a reason to keep the closure traits to a single method.

Closures that are quantified over lifetimes

A separate RFC is needed to describe bound lifetimes in trait references. For example, today one can write a type like <'a> |&'a A| -> &'a B, which indicates a closure that takes and returns a reference with the same lifetime specified by the caller at each call-site. Note that a trait reference like Fn<(&'a A), &'a B>, while syntactically similar, does not have the same meaning because it lacks the universal quantifier <'a>. Therefore, in the second case, 'a refers to some specific lifetime 'a, rather than being a lifetime parameter that is specified at each callsite. The high-level summary of the change therefore is to permit trait references like <'a> Fn<(&'a A), &'a B>; in this case, the value of <'a> will be specified each time a method or other member of the trait is accessed.

Summary

Currently we use inference to find the type of otherwise-unannotated integer literals, and when that fails the type defaults to int. This is often felt to be potentially error-prone behavior.

This proposal removes the integer inference fallback and strengthens the types required for several language features that interact with integer inference.

Motivation

With the integer fallback, small changes to code can change the inferred type in unexpected ways. It’s not clear how big a problem this is, but previous experiments indicate that removing the fallback has a relatively small impact on existing code, so it’s reasonable to back off of this feature in favor of more strict typing.

See also https://github.com/mozilla/rust/issues/6023.

Detailed design

The primary change here is that, when integer type inference fails, the compiler will emit an error instead of assigning the value the type int.

This change alone will cause a fair bit of existing code to be unable to type check because of lack of constraints. To add more constraints and increase likelihood of unification, we ‘tighten’ up what kinds of integers are required in some situations:

  • Array repeat counts must be uint ([expr, .. count])
  • << and >> require uint when shifting integral types

Finally, inference for as will be modified to track the types a value is being cast to for cases where the value being cast is unconstrained, like 0 as u8.
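
A few examples of how these rules play out (a sketch):

let a = 10;         // error under this RFC: no constraint, no fallback
let b = 10i;        // ok: the suffix fixes the type
let c: u8 = 10;     // ok: the annotation fixes the type
let d = 0 as u8;    // ok: the cast now constrains the literal
let e = [1i, ..10]; // ok: the repeat count `10` is required to be uint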

Treatment of enum discriminants will need to change:

enum Color { Red = 0, Green = 1, Blue = 2 }

Currently, an unsuffixed integer defaults to int. Instead, we will only require enum discriminants to be primitive integers of unspecified type; assigning an integer to an enum will behave as if casting from the type of the integer to an unsigned integer with the size of the enum discriminant.

Drawbacks

This will force users to type hint somewhat more often. In particular, ranges of unsigned ints may need to be type-hinted:

for _ in range(0u, 10) { }

Alternatives

Do none of this.

Unresolved questions

  • If we’re putting new restrictions on shift operators, should we change the traits, or just make the primitives special?

Summary

Remove or feature gate the shadowing of view items on the same scope level, in order to have less complicated semantics and be more future proof for module system changes or experiments.

This means the names brought in scope by extern crate and use may never collide with each other, nor with any other item (unless they live in different namespaces). E.g., this will no longer work:

extern crate foo;
use foo::bar::foo; // ERROR: There is already a module `foo` in scope

Shadowing would still be allowed in case of lexical scoping, so this continues to work:

extern crate foo;

fn bar() {
    use foo::bar::foo; // Shadows the outer foo

    foo::baz();
}

Definitions

Due to a certain lack of official, clearly defined semantics and terminology, a list of relevant definitions is included:

  • Scope A scope in Rust is basically defined by a block, following the rules of lexical scoping:

    scope 1 (visible: scope 1)
    {
          scope 1-1 (visible: scope 1, scope 1-1)
          {
              scope 1-1-1 (visible: scope 1, scope 1-1, scope 1-1-1)
          }
          scope 1-1
          {
              scope 1-1-2
          }
          scope 1-1
    }
    scope 1
    

    Blocks include block expressions, fn items and mod items, but not things like extern, enum or struct. Additionally, mod is special in that it isolates itself from parent scopes.

  • Scope Level Anything with the same name in the example above is on the same scope level. In a scope level, all names defined in parent scopes are visible, but can be shadowed by a new definition with the same name, which will be in scope for that scope itself and all its child scopes.

  • Namespace Rust has different namespaces, and the scoping rules apply to each one separately. The exact number of different namespaces is not well defined, but they are roughly

    • types (enum Foo {})
    • modules (mod foo {})
    • item values (static FOO: uint = 0;)
    • local values (let foo = 0;)
    • lifetimes (impl<'a> ...)
    • macros (macro_rules! foo {...})
  • Definition Item Declarations that create new entities in a crate are called (by the author) definition items. They include struct, enum, mod, fn, etc. Each of them creates a name in the type, module, item value or macro namespace in the same scope level they are written in.

  • View Item Declarations that just create aliases to existing declarations in a crate are called view items. They include use and extern crate, and also create a name in the type, module, item value or macro namespace in the same scope level they are written in.

  • Item Both definition items and view items together are collectively called items.

  • Shadowing While the principle of shadowing exists in all namespaces, there are different forms of it:

    • item-style: Declarations shadow names from outer scopes, and are visible everywhere in their own, including lexically before their own definition. This requires there to be only one definition with the same name and namespace per scope level. Types, modules, item values and lifetimes fall under these rules.
    • sequentially: Declarations shadow names that are lexically before them, both in parent scopes and their own. This means you can reuse the same name in the same scope, but a definition will not be visible before itself. This is how local values and macros work (due to sequential code execution and parsing, respectively).
    • view item: A special case exists with view items; in the same scope level, extern crate creates entries in the module namespace, which are shadowable by names created with use, which are shadowable by any definition item. The singular goal of this RFC is to remove this shadowing behavior of view items.

Motivation

As explained above, what is currently visible under which namespace in a given scope is determined by a somewhat complicated three step process:

  1. First, every extern crate item creates a name in the module namespace.
  2. Then, every use can create a name in any namespace, shadowing the extern crate ones.
  3. Lastly, any definition item can shadow any name brought in scope by both extern crate and use.

These rules have developed mostly in response to the older, more complicated import system, and the existence of wildcard imports (use foo::*). In the case of wildcard imports, this shadowing behavior prevents local code from breaking if the source module gets updated to include new names that happen to be defined locally.

However, wildcard imports are now feature gated, and name conflicts in general can be resolved by using the renaming feature of extern crate and use, so in the current non-gated state of the language there is no need for this shadowing behavior.
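
For example, the collision from the summary can be resolved with renaming instead of shadowing (using the pre-1.0 use renaming form; names are illustrative):

extern crate foo;
use foo_item = foo::bar::foo; // the alias avoids colliding with the crate name `foo`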

Gating it off opens the door to remove it altogether in a backwards compatible way, or to re-enable it in case wildcard imports are officially supported again.

It also makes the mental model around items simpler: Any shadowing of items happens through lexical scoping only, and every item can be considered unordered and mutually recursive.

If this RFC gets accepted, a possible next step would be an RFC to lift the ordering restriction between extern crate, use and definition items, which would make them truly behave the same in regard to shadowing and the ability to be reordered. It would also lift the weirdness of use foo::bar; mod foo;.

Implementing this RFC would also not change anything about how name resolution works, as it’s just a tightening of the existing rules.

Drawbacks

  • Feature gating import shadowing might break some code using #[feature(globs)].
  • The behavior of libstd’s prelude becomes more magical if it still allows shadowing, but this could be de-magified again by a new feature; see below in unresolved questions.
  • Or the utility of libstd’s prelude becomes more restricted if it doesn’t allow shadowing.

Detailed design

A new feature gate import_shadowing gets created.

During the name resolution phase of compilation, every time the compiler detects a shadowing between extern crate, use and definition items in the same scope level, it reports an error unless the feature gate is enabled. This amounts to two rules:

  • Items in the same scope level and either the type, module, item value or lifetime namespace may not shadow each other in the respective namespace.
  • Items may shadow names from outer scopes in any namespace.

Just like for the globs feature, the libstd prelude import would be exempt from this, and would still be allowed to be shadowed.

Alternatives

The alternative is to do nothing, and risk running into a backwards compatibility hazard, or committing to make a final design decision around the whole module system before 1.0 gets released.

Unresolved questions

  • It is unclear how the libstd prelude fits into this.

    On the one hand, it basically acts like a hidden use std::prelude::*; import which ignores the globs feature, so it could simply ignore the import_shadowing feature as well, and the rule becomes that the prelude is a magic compiler feature that injects imports into every module but doesn’t prevent the user from taking the same names.

    On the other hand, it is also thinkable to simply forbid shadowing of prelude items as well, as defining things with the same name as std exports is not recommended anyway, and this would nicely enforce that. It would however mean that the prelude can not change without breaking backwards compatibility, which might be too restricting.

    A compromise would be to specialize wildcard imports into a new prelude use feature, which has the explicit properties of being shadowable and using a wildcard import. libstd’s prelude could then simply use that, and users could define and use their own preludes as well. But that’s a somewhat orthogonal feature, and should be discussed in its own RFC.

  • Interaction with overlapping imports.

    Right now it’s legal to write this:

fn main() {
    use Bar = std::result::Result;
    use Bar = std::option::Option;
    let x: Bar = None;
}

where the latter `use` shadows the former. This would have to be forbidden as well; however, the current semantics seem like an accident anyway.

Summary

Rename the Share trait to Sync.

Motivation

With interior mutability, the name “immutable pointer” for a value of type &T is not quite accurate. Instead, the term “shared reference” is becoming the popular way to refer to values of type &T. The usage of the term “shared” is in conflict with the Share trait, which is intended for types which can be safely shared concurrently with a shared reference.

Detailed design

Rename the Share trait in std::kinds to Sync. Documentation would refer to &T as a shared reference and the notion of “shared” would simply mean “many references” while Sync implies that it is safe to share among many threads.

Drawbacks

The name Sync may invoke conceptions of “synchronized” from languages such as Java where locks are used, rather than meaning “safe to access in a shared fashion across tasks”.

Alternatives

As with any bikeshed, there are a number of other names which could be possible for this trait:

  • Concurrent
  • Synchronized
  • Threadsafe
  • Parallel
  • Threaded
  • Atomic
  • DataRaceFree
  • ConcurrentlySharable

Unresolved questions

None.

Summary

Remove special treatment of Box<T> from the borrow checker.

Motivation

Currently the Box<T> type is special-cased and converted to the old ~T internally. This is mostly invisible to the user, but it shows up in some places that give special treatment to Box<T>. This RFC is specifically concerned with the fact that the borrow checker has greater precision when dereferencing Box<T> vs other smart pointers that rely on the Deref traits. Unlike the other kinds of special treatment, we do not currently have a plan for how to extend this behavior to all smart pointer types, and hence we would like to remove it.

Here is an example that illustrates the extra precision afforded to Box<T> vs other types that implement the Deref traits. The following program, written using the Box type, compiles successfully:

struct Pair {
    a: uint,
    b: uint
}

fn example1(mut smaht: Box<Pair>) {
    let a = &mut smaht.a;
    let b = &mut smaht.b;
    ...
}

This program compiles because the type checker can see that (*smaht).a and (*smaht).b are always distinct paths. In contrast, if I use a smart pointer, I get compilation errors:

fn example2(cell: RefCell<Pair>) {
    let mut smaht: RefMut<Pair> = cell.borrow_mut();
    let a = &mut smaht.a;
    
    // Error: cannot borrow `smaht` as mutable more than once at a time
    let b = &mut smaht.b;
}

To see why this is, consider the desugaring:

fn example2(smaht: RefCell<Pair>) {
    let mut smaht = smaht.borrow_mut();
    
    let tmp1: &mut Pair = smaht.deref_mut(); // borrows `smaht`
    let a = &mut tmp1.a;
    
    let tmp2: &mut Pair = smaht.deref_mut(); // borrows `smaht` again!
    let b = &mut tmp2.b;
}

It is a violation of the Rust type system to invoke deref_mut when the reference to a is valid and usable, since deref_mut requires &mut self, which in turn implies no alias to self or anything owned by self.

This desugaring suggests how the problem can be worked around in user code. The idea is to pull the result of the deref into a new temporary:

fn example3(smaht: RefCell<Pair>) {
    let mut smaht: RefMut<Pair> = smaht.borrow_mut();
    let temp: &mut Pair = &mut *smaht;
    let a = &mut temp.a;
    let b = &mut temp.b;
}

Detailed design

Removing this treatment from the borrow checker basically means changing the construction of loan paths for unique pointers.

I don’t actually know how best to implement this in the borrow checker, particularly concerning the desire to keep the ability to move out of boxes and use them in patterns. The easiest and best way is probably to “do it right” and handle derefs of Box<T> in a similar way to how overloaded derefs are handled, but somewhat differently to account for the possibility of moving out of them. Some investigation is needed.

Drawbacks

The borrow checker rules are that much more restrictive.

Alternatives

We have ruled out inconsistent behavior between Box and other smart pointer types. We considered a number of ways to extend the current treatment of box to other smart pointer types:

  1. Require compiler to introduce deref temporaries automatically where possible. This is plausible as a future extension but requires some thought to work through all cases. It may be surprising. Note that this would be a required optimization because if the optimization is not performed it affects what programs can successfully type check. (Naturally it is also observable.)

  2. Some sort of unsafe deref trait that acknowledges possibility of other pointers into the referent. Unappealing because the problem is not so bad as to require unsafety.

  3. Determining conditions (perhaps based on parametricity?) where it is provably safe to call deref. It is dubious whether such conditions exist, or what that would even mean. Rust also does not really enjoy parametricity properties due to the presence of reflection and unsafe code.

Unresolved questions

Best implementation strategy.

Summary

Note: This RFC discusses the behavior of rustc, and not any changes to the language.

Change how target specification is done to be more flexible for unexpected usecases. Additionally, add support for the “unknown” OS in target triples, providing a minimum set of target specifications that is valid for bare-metal situations.

Motivation

One of Rust’s important use cases is embedded, OS, or otherwise “bare metal” software. At the moment, we still depend on LLVM’s split-stack prologue for stack safety. In certain situations, it is impossible or undesirable to support what LLVM requires to enable this (on x86, a certain thread-local storage setup). Additionally, porting rustc to a new platform requires modifying the compiler, adding a new OS manually.

Detailed design

A target triple consists of three strings separated by a hyphen, with a possible fourth string at the end preceded by a hyphen. The first is the architecture, the second is the “vendor”, the third is the OS type, and the optional fourth is the environment type. In theory, this specifies precisely what platform the generated binary will be able to run on. All of this is determined not by us but by LLVM and other tools. When on bare metal or in a similar environment, there essentially is no OS, and to handle this there is the concept of “unknown” in the target triple. When the OS is “unknown”, no runtime environment is assumed to be present (including things such as dynamic linking, threads/thread-local storage, IO, etc).

Rather than listing specific targets for special treatment, introduce a general mechanism for specifying certain characteristics of a target triple. Redesign how targets are handled around this specification, including for the built-in targets. Extend the --target flag to accept a file name of a target specification. A table of the target specification flags and their meaning:

  • data-layout: The LLVM data layout to use. Mostly included for completeness; changing it is unlikely to be necessary.
  • link-args: Arguments to pass to the linker, unconditionally.
  • cpu: Default CPU to use for the target, overridable with -C target-cpu.
  • features: Default target features to enable, augmentable with -C target-feature.
  • dynamic-linking-available: Whether the dylib crate type is allowed.
  • split-stacks-supported: Whether there is runtime support that will allow LLVM’s split stack prologue to function as intended.
  • llvm-target: What target to pass to LLVM.
  • relocation-model: What relocation model to use by default.
  • target_endian, target_word_size: Specify the strings used for the corresponding cfg variables.
  • code-model: Code model to pass to LLVM, overridable with -C code-model.
  • no-redzone: Disable use of any stack redzone, overridable with -C no-redzone.
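
For concreteness, a hypothetical target specification file might look as follows (every value here is invented for illustration; the exact set of keys and their encoding would be settled during implementation):

{
    "data-layout": "e-p:32:32:32-i64:64",
    "llvm-target": "i686-unknown-unknown",
    "cpu": "pentium4",
    "features": "-sse,-mmx",
    "dynamic-linking-available": false,
    "split-stacks-supported": false,
    "relocation-model": "static",
    "target_endian": "little",
    "target_word_size": "32",
    "no-redzone": true
}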

Rather than hardcoding a specific set of behaviors per-target, with no recourse for escaping them, the compiler would also use this mechanism when deciding how to build for a given target. The process would look like:

  1. Look up the target triple in an internal map, and load that configuration if it exists. If that fails, check if the target name exists as a file, and try loading that. If the file does not exist, look up <target>.json in the RUST_TARGET_PATH, which is a colon-separated list of directories.
  2. If -C linker is specified, use that instead of the target-specified linker.
  3. If -C link-args is given, add those to the ones specified by the target.
  4. If -C target-cpu is specified, replace the target cpu with it.
  5. If -C target-feature is specified, add those to the ones specified by the target.
  6. If -C relocation-model is specified, replace the target relocation-model with it.
  7. If -C code-model is specified, replace the target code-model with it.
  8. If -C no-redzone is specified, replace the target no-redzone with true.

Then during compilation, this information is used at the proper places rather than matching against an enum listing the OSes we recognize. The target_os, target_family, and target_arch cfg variables would be extracted from the --target passed to rustc.

Drawbacks

More complexity. However, this is very flexible, and it makes it incredibly easy to use Rust on a new or non-standard target without having to modify the compiler. rustc is the only compiler I know of that would allow that.

Alternatives

A less holistic approach would be to just allow disabling split stacks on a per-crate basis. Another solution could be adding a family of targets, <arch>-unknown-unknown, which omits all of the above complexity but does not allow extending to new targets easily.

  • Start Date: 2014-03-17
  • RFC PR #: #132
  • Rust Issue #: #16293

Summary

This RFC describes a variety of extensions to allow any method to be used as a first-class function. The same extensions also allow trait methods without receivers to be invoked in a more natural fashion.

First, at present, the notation path::method() can be used to invoke inherent methods on types. For example, Vec::new() is used to create an instance of a vector. This RFC extends that notion to also cover trait methods, so that something like T::size_of() or T::default() is legal.

Second, currently it is permitted to reference so-called “static methods” from traits using a function-like syntax. For example, one can write Default::default(). This RFC extends that notation so it can be used with any method, whether or not it is defined with a receiver. (In fact, the distinction between static methods and other methods is completely erased, as per the method lookup of RFC PR #48.)

Third, we introduce an unambiguous if verbose notation that permits one to precisely specify a trait method and its receiver type in one form. Specifically, the notation <T as TraitRef>::item can be used to designate an item item, defined in a trait TraitRef, as implemented by the type T.

Motivation

There are several motivations:

  • There is a need for an unambiguous way to invoke methods. This is typically a fallback for when the more convenient invocation forms fail:
    • For example, when multiple traits are in scope that all define the same method for the same types, there must be a way to disambiguate which method you mean (see the sketch after this list).
    • It is sometimes desirable not to have autoderef:
      • For methods like clone() that apply to almost all types, it is convenient to be more specific about which precise type you want to clone. To get this right with autoderef, one must know the precise rules being used, which is contrary to the “DWIM” intention.
      • For types that implement Deref<T>, UFCS can be used to unambiguously differentiate between methods invoked on the smart pointer itself and methods invoked on its referent.
  • There are many methods, such as SizeOf::size_of(), that return properties of the type alone and do not naturally take any argument that can be used to decide which trait impl you are referring to.
    • This proposal introduces a variety of ways to invoke such methods, varying in the amount of explicit information one includes:
      • T::size_of() – shorthand, but only works if T is a path
      • <T>::size_of() – infers the trait SizeOf based on the traits in scope, just as with a method call
      • <T as SizeOf>::size_of() – completely unambiguous
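
As a sketch of the disambiguation case, consider two traits that define a method with the same name (all names here are invented):

trait Foo { fn frob(&self) -> int; }
trait Bar { fn frob(&self) -> int; }

struct Thing;
impl Foo for Thing { fn frob(&self) -> int { 1 } }
impl Bar for Thing { fn frob(&self) -> int { 2 } }

fn main() {
    let t = Thing;
    // `t.frob()` is ambiguous with both traits in scope; the proposed
    // notation names the intended trait explicitly:
    let a = <Thing as Foo>::frob(&t);
    let b = <Thing as Bar>::frob(&t);
    assert_eq!((a, b), (1, 2));
}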

Detailed design

Path syntax

The syntax of paths is extended as follows:

PATH = ID_SEGMENT { '::' ID_SEGMENT }
     | TYPE_SEGMENT { '::' ID_SEGMENT }
     | ASSOC_SEGMENT '::' ID_SEGMENT { '::' ID_SEGMENT }
ID_SEGMENT   = ID [ '::' '<' { TYPE ',' TYPE } '>' ]
TYPE_SEGMENT = '<' TYPE '>'
ASSOC_SEGMENT = '<' TYPE 'as' TRAIT_REFERENCE '>'

Examples of valid paths. In these examples, capitalized names refer to types (though this doesn’t affect the grammar).

a::b::c
a::<T1,T2>::b::c
T::size_of
<T>::size_of
<T as SizeOf>::size_of
Eq::eq
Eq::<T>::eq
Zero::zero

Normalization of paths that reference types

Whenever a path like ...::a::... resolves to a type (but not a trait), it is rewritten (internally) to <...::a>::....

Note that there is a subtle distinction between the following paths:

ToStr::to_str
<ToStr>::to_str

In the former, we are selecting the member to_str from the trait ToStr. The result is a function whose type is basically equivalent to:

fn to_str<Self:ToStr>(self: &Self) -> String

In the latter, we are selecting the member to_str from the type ToStr (i.e., a ToStr object). Resolving type members is different. In this case, it would yield a function roughly equivalent to:

fn to_str(self: &ToStr) -> String

This subtle distinction arises from the fact that we pun on the trait name to indicate both a type and a reference to the trait itself. In this case, depending on which interpretation we choose, the path resolution rules differ slightly.

Paths that begin with a TYPE_SEGMENT

When a path begins with a TYPE_SEGMENT, it is a type-relative path. If this is the complete path (e.g., <int>), then the path resolves to the specified type. If the path continues (e.g., <int>::size_of) then the next segment is resolved using the following procedure. The procedure is intended to mimic method lookup, and hence any changes to method lookup may also change the details of this lookup.

Given a path <T>::m::...:

  1. Search for members of inherent impls defined on T (if any) with the name m. If any are found, the path resolves to that item.
  2. Otherwise, let IN_SCOPE_TRAITS be the set of traits that are in scope and which contain a member named m:
    • Let IMPLEMENTED_TRAITS be those traits from IN_SCOPE_TRAITS for which an implementation exists that (may) apply to T.
      • There can be ambiguity in the case that T contains type inference variables.
    • If IMPLEMENTED_TRAITS is not a singleton set, report an ambiguity error. Otherwise, let TRAIT be the member of IMPLEMENTED_TRAITS.
    • If TRAIT is ambiguously implemented for T, report an ambiguity error and request further type information.
    • Otherwise, rewrite the path to <T as Trait>::m::... and continue.

Paths that begin with an ASSOC_SEGMENT

When a path begins with an ASSOC_SEGMENT, it is a reference to an associated item defined from a trait. Note that such paths must always have a follow-on member m (that is, <T as Trait> is not a complete path, but <T as Trait>::m is).

To resolve the path, first search for an applicable implementation of Trait for T. If no implementation can be found – or the result is ambiguous – then report an error.

Otherwise:

  • Determine the types of output type parameters for Trait from the implementation.
  • If output type parameters were specified in the path, ensure that they are compatible with those specified on the impl.
    • For example, if the path were <int as SomeTrait<uint>>, and the impl is declared as impl SomeTrait<char> for int, then an error would be reported because char and uint are not compatible.
  • Resolve the path to the member of the trait with the substitution composed of the output type parameters from the impl and Self => T.

Alternatives

We have explored a number of syntactic alternatives. This has been selected as being the only one that is simultaneously:

  • Tolerable to look at.
  • Able to convey all necessary information along with auxiliary information the user may want to verify:
    • Self type, type of trait, name of member, type output parameters

Here are some leading candidates that were considered along with their equivalents in the syntax proposed by this RFC. The reasons for their rejection are listed:

module::type::(Trait::member)    <module::type as Trait>::member
--> semantics of parentheses considered too subtle
--> cannot accommodate types that are not paths, like `[int]`

(type: Trait)::member            <type as Trait>::member
--> complicated to parse
--> cannot accommodate types that are not paths, like `[int]`

... (I can't remember all the rest)

One variation that is definitely possible is that we could use the : rather than the keyword as:

<type: Trait>::member            <type as Trait>::member
--> no real objection. `as` was chosen because it mimics the
    syntax for constructing a trait object.

Unresolved questions

Is there a better way to disambiguate a reference to a trait item ToStr::to_str versus a reference to a member of the object type <ToStr>::to_str? I personally do not think so: so long as we pun on the name of the trait, the potential for confusion will remain. Therefore, the only two possibilities I could come up with are to try and change the question:

  • One answer might be that we simply make the second form meaningless by prohibiting inherent impls on object types. But there remains utility in being able to write something like <ToStr>::is_sized() (where is_sized() is an example of a trait fn that could apply to both sized and unsized types). Moreover, artificially restricting object types just for this reason doesn’t seem right.

  • Another answer is to change the syntax of object types. I have sometimes considered that impl ToStr might be better suited as the object type and then ToStr could be used as syntactic sugar for a type parameter. But there exists a lot of precedent for the current approach and hence I think this is likely a bad idea (not to mention that it’s a drastic change).

  • Start Date: 2014-09-30
  • RFC PR #: https://github.com/rust-lang/rfcs/pull/135
  • Rust Issue #: https://github.com/rust-lang/rust/issues/17657

Summary

Add where clauses, which provide a more expressive means of specifying trait parameter bounds. A where clause comes after a declaration of a generic item (e.g., an impl or struct definition) and specifies a list of bounds that must be proven once precise values are known for the type parameters in question. The existing bounds notation would remain as syntactic sugar for where clauses.

So, for example, the impl for HashMap could be changed from this:

impl<K:Hash+Eq,V> HashMap<K, V>
{
    ..
}

to the following:

impl<K,V> HashMap<K, V>
    where K : Hash + Eq
{
    ..
}

The full grammar can be found in the detailed design.

Motivation

The high-level bit is that the current bounds syntax does not scale to complex cases. Introducing where clauses is a simple extension that gives us a lot more expressive power. In particular, it will allow us to refactor the operator traits to be in a convenient, multidispatch form (e.g., so that user-defined mathematical types can be added to int and vice versa). (It’s also worth pointing out that, once #5527 lands at least, implementing where clauses will be very little work.)

Here is a list of limitations with the current bounds syntax that are overcome with the where syntax:

  • It cannot express bounds on anything other than type parameters. Therefore, if you have a function generic in T, you can write T:MyTrait to declare that T must implement MyTrait, but you can’t write Option<T> : MyTrait or (int, T) : MyTrait. These forms are less commonly required but still important.

  • It does not work well with associated types. This is because there is no space to specify the value of an associated type. Other languages use where clauses (or something analogous) for this purpose.

  • It’s just plain hard to read. Experience has shown that as the number of bounds grows, the current syntax becomes hard to read and format.

Let’s examine each case in detail.

Bounds are insufficiently expressive

Currently bounds can only be declared on type parameters. But there are situations where one wants to declare bounds not on the type parameter itself but rather on a type that includes the type parameter.

Partially generic types

One situation where this occurs is when you want to write functions where types are partially known and have those interact with other functions that are fully generic. To explain the situation, let’s examine some code adapted from rustc.

Imagine I have a table parameterized by a value type V and a key type K. There are also two traits, Value and Key, that describe the keys and values. Also, each type of key is linked to a specific value:

struct Table<V:Value, K:Key<V>> { ... }
trait Key<V:Value> { ... }
trait Value { ... }

Now, imagine I want to write some code that operates over all keys whose value is an Option<T> for some T:

fn example<T,K:Key<Option<T>>>(table: &Table<Option<T>, K>) { ... }

This seems reasonable, but this code will not compile. The problem is that the compiler needs to know that the value type implements Value, but here the value type is Option<T>. So we’d need to declare Option<T> : Value, which we cannot do.

There are workarounds. I might write a new trait OptionalValue:

trait OptionalValue<T> {
    fn as_option<'a>(&'a self) -> &'a Option<T>; // identity fn
}

and then I could write my example as:

fn example<T,O:OptionalValue<T>,K:Key<O>>(table: &Table<O, K>) { ... }

But this is making my example function, already a bit complicated, become quite obscure.
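
Under this proposal, the bound could instead be stated directly; a sketch of the same example using a where clause:

fn example<T, K: Key<Option<T>>>(table: &Table<Option<T>, K>)
    where Option<T> : Value
{ ... }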

Multidispatch traits

Another situation where a similar problem is encountered is multidispatch traits (aka, multiparameter type classes in Haskell). The idea of a multidispatch trait is to be able to choose the impl based not just on one type, as is the most common case, but on multiple types (usually, but not always, two).

Multidispatch is rarely needed because the vast majority of traits are characterized by a single type. But when you need it, you really need it. One example that arises in the standard library is the traits for binary operators like +. Today, the Add trait is defined using only single-dispatch (like so):

pub trait Add<Rhs,Sum> {
    fn add(&self, rhs: &Rhs) -> Sum;
}

The expression a + b is thus sugar for Add::add(&a, &b). Because of how our trait system works, this means that only the type of the left-hand side (the Self parameter) will be used to select the impl. The type for the right-hand side (Rhs) along with the type of their sum (Sum) are defined as trait parameters, which are always outputs of the trait matching: that is, they are specified by the impl and are not used to select which impl is used.

This setup means that addition is not as extensible as we would like. For example, the standard library includes implementations of this trait for integers and other built-in types:

impl Add<int,int> for int { ... }
impl Add<f32,f32> for f32 { ... }

The limitations of this setup become apparent when we consider how a hypothetical user library might integrate. Imagine a library L that defines a type Complex representing complex numbers:

struct Complex { ... }

Naturally, it should be possible to add complex numbers and integers. Since complex number addition is commutative, it should be possible to write both 1 + c and c + 1. Thus one might try the following impls:

impl Add<int,Complex> for Complex { ... }     // 1. Complex + int
impl Add<Complex,Complex> for int { ... }     // 2. int + Complex
impl Add<Complex,Complex> for Complex { ... } // 3. Complex + Complex

Due to the coherence rules, however, this setup will not work. There are in fact three errors. The first is that there are two impls of Add defined for Complex (1 and 3). The second is that there are two impls of Add defined for int (the one from the standard library and 2). The final error is that impl 2 violates the orphan rule, since the type int is not defined in the current crate.

This is not a new problem. Object-oriented languages, with their focus on single dispatch, have long had trouble dealing with binary operators. One common solution is double dispatch, an awkward but effective pattern in which no type ever implements Add directly. Instead, we introduce “indirection” traits so that, e.g., int is addable to anything that implements AddToInt and so on. This is not my preferred solution so I will not describe it in detail, but rather refer readers to this blog post where I describe how it works.

An alternative to double dispatch is to define Add on tuple types (LHS, RHS) rather than on a single value. Imagine that the Add trait were defined as follows:

trait Add<Sum> {
    fn add(self) -> Sum;
}

impl Add<int> for (int, int) {
    fn add(self) -> int {
        let (x, y) = self;
        x + y
    }
}

Now the expression a + b would be sugar for Add::add((a, b)). This small change has several interesting ramifications. For one thing, the library L can easily extend Add to cover complex numbers:

impl Add<Complex> for (Complex, int)     { ... }
impl Add<Complex> for (int, Complex)     { ... }
impl Add<Complex> for (Complex, Complex) { ... }

These impls do not violate the coherence rules because they are all applied to distinct types. Moreover, none of them violate the orphan rule because each of them is a tuple involving at least one type local to the library.

One downside of this Add pattern is that there is no way within the trait definition to refer to the type of the left- or right-hand side individually; we can only use the type Self to refer to the tuple of both types. In the Discussion section below, I will introduce an extended “multi-dispatch” pattern that addresses this particular problem.

There is however another problem that where clauses help to address. Imagine that we wish to define a function to increment complex numbers:

fn increment(c: Complex) -> Complex {
    1 + c
}

This function is pretty generic, so perhaps we would like to generalize it to work over anything that can be added to an int. We’ll use our new version of the Add trait that is implemented over tuples:

fn increment<T:...>(c: T) -> T {
    1 + c
}

At this point we encounter the problem. What bound should we give for T? We’d like to write something like (int, T) : Add<T> – that is, Add is implemented for the tuple (int, T) with the sum type T. But we can’t write that, because the current bounds syntax is too limited.

Where clauses give us an answer. We can write a generic version of increment like so:

fn increment<T>(c: T) -> T
    where (int, T) : Add<T>
{
    1 + c
}

Associated types

It is unclear exactly what form associated types will have in Rust, but it is well documented that our current design, in which type parameters decorate traits, does not scale particularly well. (For curious readers, there are several blog posts exploring the design space of associated types with respect to Rust in particular.)

The high-level summary of associated types is that we can replace a generic trait like Iterator:

trait Iterator<E> {
    fn next(&mut self) -> Option<E>;
}

With a version where the type parameter is a “member” of the Iterator trait:

trait Iterator {
    type E;
    
    fn next(&mut self) -> Option<E>;
}

This syntactic change helps to highlight that, for any given type, the type E is fixed by the impl, and hence it can be considered a member (or output) of the trait. It also scales better as the number of associated types grows.

One challenge with this design is that it is not clear how to convert a function like the following:

fn sum<I:Iterator<int>>(i: I) -> int {
    ...    
}

With associated types, the reference Iterator<int> is no longer valid, since the trait Iterator doesn’t have type parameters.

The usual solution to this problem is to employ a where clause:

fn sum<I:Iterator>(i: I) -> int
  where I::E == int
{
    ...    
}

We can also employ where clauses with object types via a syntax like &Iterator<where E=int> (admittedly somewhat wordy).

Readability

When writing very generic code, it is common to have a large number of parameters with a large number of bounds. Here is an example function extracted from rustc:

fn set_var_to_merged_bounds<T:Clone + InferStr + LatticeValue,
                            V:Clone+Eq+ToStr+Vid+UnifyVid<Bounds<T>>>(
                            &self,
                            v_id: V,
                            a: &Bounds<T>,
                            b: &Bounds<T>,
                            rank: uint)
                            -> ures;

Definitions like this are very difficult to read (it’s hard to even know how to format such a definition).

Using a where clause allows the bounds to be separated from the list of type parameters:

fn set_var_to_merged_bounds<T,V>(&self,
                                 v_id: V,
                                 a: &Bounds<T>,
                                 b: &Bounds<T>,
                                 rank: uint)
                                 -> ures
    where T:Clone,         // it is legal to use individual clauses...
          T:InferStr,
          T:LatticeValue,
          V:Clone+Eq+ToStr+Vid+UnifyVid<Bounds<T>>, // ...or use `+`
{                                     
    ..
}

This helps to separate out the function signature from the extra requirements that the function places on its types.

If I may step aside from the “impersonal voice” of the RFC for a moment, I personally find that when writing generic code it is helpful to focus on the types and signatures, and come to the bounds later. Where clauses help to separate these distinctions. Naturally, your mileage may vary. - nmatsakis

Detailed design

Where can where clauses appear?

Where clauses can be added to anything that can be parameterized with type/lifetime parameters, with the exception of trait method definitions: that is, to impl declarations, fn declarations, and trait and struct definitions. They appear as follows:

impl Foo<A,B>
    where ...
{ }

impl Foo<A,B> for C
    where ...
{ }

impl Foo<A,B> for C
{
    fn foo<A,B>() -> C
        where ...
    { }
}

fn foo<A,B>() -> C
    where ...
{ }

struct Foo<A,B>
    where ...
{ }

trait Foo<A,B> : C
    where ...
{ }

Where clauses cannot (yet) appear on trait methods

Note that trait method definitions were specifically excluded from the list above. The reason is that including where clauses on a trait method raises interesting questions for what it means to implement the trait. Using where clauses it becomes possible to define methods that do not necessarily apply to all implementations. We intend to enable this feature but it merits a second RFC to delve into the details.

Where clause grammar

The grammar for a where clause would be as follows (BNF):

WHERE = 'where' BOUND { ',' BOUND } [,]
BOUND = TYPE ':' TRAIT { '+' TRAIT } [+]
TRAIT = Id [ '<' [ TYPE { ',' TYPE } [,] ] '>' ]
TYPE  = ... (same type grammar as today)

Semantics

The meaning of a where clause is fairly straightforward. Each bound in the where clause must be proven by the caller after substitution of the parameter types.

One interesting case concerns trivial where clauses where the self-type does not refer to any of the type parameters, such as the following:

fn foo()
    where int : Eq
{ ... }

Where clauses like these are considered an error. They have no particular meaning, since the callee knows all types involved. This is a conservative choice: if we find that we do desire a particular interpretation for them, we can always make them legal later.

Drawbacks

This RFC introduces two ways to declare a bound.

Alternatives

Remove the existing trait bounds. I decided against this both to avoid breaking lots of existing code and because the existing syntax is convenient much of the time.

Embed where clauses in the type parameter list. One alternative syntax that was proposed is to embed a where-like clause in the type parameter list. Thus the increment() example

fn increment<T>(c: T) -> T
    where () : Add<int,T,T>
{
    1 + c
}

would become something like:

fn increment<T, ():Add<int,T,T>>(c: T) -> T
{
    1 + c
}

This is unfortunately somewhat ambiguous, since a bound like T:Eq could either be declaring a new type parameter T or stating a condition that the (existing) type T implement Eq.

Use a colon instead of the keyword. There is some precedent for this from the type state days. Unfortunately, it doesn’t work with traits due to the supertrait list, and it also doesn’t look good with the use of : as a trait-bound separator:

fn increment<T>(c: T) -> T
    : () : Add<int,T,T>
{
    1 + c
}

  • Start Date: 2014-06-24
  • RFC PR #: #136
  • Rust Issue #: #16463

Summary

Require a feature gate to expose private items in public APIs, until we grow the appropriate language features to be able to remove the feature gate and forbid it entirely.

Motivation

Privacy is central to guaranteeing the invariants necessary to write correct code that employs unsafe blocks. Although the current language rules prevent a private item from being directly named from outside the current module, they still permit direct access to private items in some cases. For example, a public function might return a value of private type. A caller from outside the module could then invoke this function and, thanks to type inference, gain access to the private type (though they still could not invoke public methods or access public fields). This access could undermine the reasoning of the author of the module. Fortunately, it is not hard to prevent.
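
A minimal sketch of the kind of leak described (the module and item names are invented):

mod m {
    struct Priv { x: uint } // private type

    pub fn get() -> Priv {  // public fn returning a value of private type
        Priv { x: 42 }
    }
}

fn main() {
    // `m::Priv` cannot be named here, but type inference hands us
    // a value of the private type anyway:
    let p = m::get();
    let q = p; // it can be moved around, stored, passed back in, etc.
}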

Detailed design

Overview

The general idea is that:

  • If an item is declared as public, items referred to in the public-facing parts of that item (e.g. its type) must themselves be declared as public.

Details follow.

The rules

These rules apply as long as the feature gate is not enabled. After the feature gate has been removed, they will always apply.

When is an item “public”?

Items that are explicitly declared as pub are always public. In addition, items in the impl of a trait (not an inherent impl) are considered public if all of the following conditions are met:

  • The trait being implemented is public.
  • All input types (currently, the self type) of the impl are public.
  • Motivation: If any of the input types or the trait is private, it is impossible for an outsider to access the items defined in the impl. They cannot name the types, nor can they get direct access to a value of those types.

What restrictions apply to public items?

The rules for various kinds of public items are as follows:

  • If it is a static declaration, items referred to in its type must be public.

  • If it is an fn declaration, items referred to in its trait bounds, argument types, and return type must be public.

  • If it is a struct or enum declaration, items referred to in its trait bounds and in the types of its pub fields must be public.

  • If it is a type declaration, items referred to in its definition must be public.

  • If it is a trait declaration, items referred to in its super-traits, in the trait bounds of its type parameters, and in the signatures of its methods (see fn case above) must be public.

Examples

Here are some examples to demonstrate the rules.

Struct fields

// A private struct may refer to any type in any field.
struct Priv {
    a: Priv,
    b: Pub,
    pub c: Priv
}

enum Vapor<A> { X, Y, Z } // Note that A is not used

// Public fields of a public struct may only refer to public types.
pub struct Item {
    // Private field may reference a private type.
    a: Priv,
    
    // Public field must refer to a public type.
    pub b: Pub,

    // ERROR: Public field refers to a private type.
    pub c: Priv,
    
    // ERROR: Public field refers to a private type.
    // For the purposes of this test, we do not descend into the type,
    // but merely consider the names that appear in type parameters
    // on the type, regardless of usage (or lack thereof) within the type
    // definition itself.
    pub d: Vapor<Priv>,
}

pub struct Pub { ... }

Methods

struct Priv { .. }
pub struct Pub { .. }
pub struct Foo { .. }

impl Foo {
    // Illegal: public method with argument of private type.
    pub fn foo(&self, p: Priv) { .. }
}

Trait bounds

trait PrivTrait { ... }

// Error: type parameter on public item bounded by a private trait.
pub struct Foo<X: PrivTrait> { ... }

// OK: type parameter on private item.
struct Foo<X: PrivTrait> { ... }

Trait definitions

struct PrivStruct { ... }

pub trait PubTrait {
    // Error: private struct referenced from method in public trait
    fn method(x: PrivStruct) { ... }
}

trait PrivTrait {
    // OK: private struct referenced from method in private trait 
    fn method(x: PrivStruct) { ... }
}

Implementations

To some extent, implementations are prevented from exposing private types because their types must match the trait. However, that is not true with generics.

pub trait PubTrait<T> {
    fn method(t: T);
}

struct PubStruct { ... }

struct PrivStruct { ... }

impl PubTrait<PrivStruct> for PubStruct {
           // ^~~~~~~~~~ Error: Private type referenced from impl of
           //            public trait on a public type. [Note: this is
           //            an "associated type" here, not an input.]

    fn method(t: PrivStruct) {
              // ^~~~~~~~~~ Error: Private type in method signature.
              //
              // Implementation note. It may not be a good idea to report
              // an error here; I think private types can only appear in
              // an impl by having an associated type bound to a private
              // type.
    }
}

Type aliases

Note that the path to the public item does not itself have to be public.

mod imp {
    pub struct Foo { ... }
}
pub type Bar = self::imp::Foo;

Negative examples

The following examples should fail to compile under these rules.

Non-public items referenced by a pub use

These examples are illegal because they use a pub use to re-export a private item:

struct Item { ... }
pub mod module {
    // Error: Item is not declared as public, but is referenced from
    // a `pub use`.
    pub use Item;
}
struct Foo { ... }
// Error: Non-public item referenced by `pub use`.
pub use Item = Foo;

If it is desired to have a private name that is publicly “renamed” using a pub use, that can be achieved using a module:

mod imp {
    pub struct ItemPriv;
}
pub use Item = self::imp::ItemPriv;

Drawbacks

Adds a (temporary) feature gate.

Requires some existing code to opt-in to the feature gate before transitioning to a more explicit alternative.

Requires effort to implement.

Alternatives

If we stick with the status quo, we’ll have to resolve several bizarre questions and keep supporting its behavior indefinitely after 1.0.

Instead of a feature gate, we could just ban these things outright right away, at the cost of temporarily losing some convenience and a small amount of expressiveness before the more principled replacement features are implemented.

We could make an exception for private supertraits, as these are not quite as problematic as the other cases. However, especially given that a more principled alternative is known (private methods), I would rather not make any exceptions.

The original design of this RFC had a stronger notion of “public” which also considered whether a public path existed to the item. In other words, a module X could not refer to a public item Y from a submodule Z, unless X also exposed a public path to Y (whether that be because Z was public, or via a pub use). This definition strengthened the basic guarantee of “private things are only directly accessible from within the current module” to include the idea that public functions in outer modules cannot accidentally refer to public items from inner modules unless there is a public path from the outer to the inner module. Unfortunately, these rules were complex to state concisely and also hard to understand in practice; when an error occurred under these rules, it was very hard to evaluate whether the error was legitimate. The newer rules are simpler while still retaining the basic privacy guarantee.

One important advantage of the earlier approach, and a scenario not directly addressed in this RFC, is that there may be items which are declared as public by an inner module but still not intended to be exposed to the world at large (in other words, the items are only expected to be used within some subtree). A special case of this is crate-local data. In the older rules, the “intended scope” of privacy could be somewhat inferred from the existence (or non-existence) of pub use declarations. However, in the author’s opinion, this scenario would be best addressed by making pub declarations more expressive so that the intended scope can be stated directly.

Summary

Remove the coercion from Box<T> to &T from the language.

Motivation

The coercion from Box<T> to &T is not replicable by user-defined smart pointers and has been found to be rarely used. We already removed the coercion between Box<T> and &mut T in RFC 33.

Detailed design

The coercion between Box<T> and &T should be removed.

Note that methods that take &self can still be called on values of type Box<T> without any special referencing or dereferencing. That is because the semantics of auto-deref and auto-ref conspire to make it work: the types unify after one autoderef followed by one autoref.
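
A minimal sketch of that method-call behavior (the type and method are invented):

struct Thing;

impl Thing {
    fn greet(&self) { println!("hi"); }
}

fn main() {
    let b = box Thing;
    // Method lookup autoderefs `Box<Thing>` to `Thing` and then autorefs
    // to `&Thing` to match `&self`; no Box-to-& coercion is involved:
    b.greet();
}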

Drawbacks

Borrowing from Box<T> to &T may be convenient.

Alternatives

The impact of not doing this is that the coercion will remain.

Unresolved questions

None.

Summary

This RFC proposes to

  1. Expand the rules for eliding lifetimes in fn definitions, and
  2. Follow the same rules in impl headers.

By doing so, we can avoid writing lifetime annotations ~87% of the time that they are currently required, based on a survey of the standard library.

Motivation

In today’s Rust, lifetime annotations make code more verbose, both for methods

fn get_mut<'a>(&'a mut self) -> &'a mut T

and for impl blocks:

impl<'a> Reader for BufReader<'a> { ... }

In the vast majority of cases, however, the lifetimes follow a very simple pattern.

By codifying this pattern into simple rules for filling in elided lifetimes, we can avoid writing any lifetimes in ~87% of the cases where they are currently required.

Doing so is a clear ergonomic win.

Detailed design

Today’s lifetime elision rules

Rust currently supports eliding lifetimes in functions, so that you can write

fn print(s: &str);
fn get_str() -> &str;

instead of

fn print<'a>(s: &'a str);
fn get_str<'a>() -> &'a str;

The elision rules work well for functions that consume references, but not for functions that produce them. The get_str signature above, for example, promises to produce a string slice that lives arbitrarily long, and is either incorrect or should be replaced by

fn get_str() -> &'static str;

Returning 'static is relatively rare, and it has been proposed to make leaving off the lifetime in output position an error for this reason.

Moreover, lifetimes cannot be elided in impl headers.

The proposed rules

Overview

This RFC proposes two changes to the lifetime elision rules:

  1. Since eliding a lifetime in output position is usually wrong or undesirable under today’s elision rules, interpret it in a different and more useful way.

  2. Interpret elided lifetimes for impl headers analogously to fn definitions.

Lifetime positions

A lifetime position is anywhere you can write a lifetime in a type:

&'a T
&'a mut T
T<'a>

As with today’s Rust, the proposed elision rules do not distinguish between different lifetime positions. For example, both &str and Ref<uint> have elided a single lifetime.

Lifetime positions can appear as either “input” or “output”:

  • For fn definitions, input refers to the types of the formal arguments in the fn definition, while output refers to result types. So fn foo(s: &str) -> (&str, &str) has elided one lifetime in input position and two lifetimes in output position. Note that the input positions of a fn method definition do not include the lifetimes that occur in the method’s impl header (nor lifetimes that occur in the trait header, for a default method).

  • For impl headers, input refers to the lifetimes that appear in the type receiving the impl, while output refers to the trait, if any. So impl<'a> Foo<'a> has 'a in input position, while impl<'a, 'b, 'c> SomeTrait<'b, 'c> for Foo<'a, 'c> has 'a in input position, 'b in output position, and 'c in both input and output positions.

The rules

  • Each elided lifetime in input position becomes a distinct lifetime parameter. This is the current behavior for fn definitions.

  • If there is exactly one input lifetime position (elided or not), that lifetime is assigned to all elided output lifetimes.

  • If there are multiple input lifetime positions, but one of them is &self or &mut self, the lifetime of self is assigned to all elided output lifetimes.

  • Otherwise, it is an error to elide an output lifetime.

Notice that the actual signature of a fn or impl is based on the expansion rules above; the elided form is just a shorthand.

Examples

fn print(s: &str);                                      // elided
fn print<'a>(s: &'a str);                               // expanded

fn debug(lvl: uint, s: &str);                           // elided
fn debug<'a>(lvl: uint, s: &'a str);                    // expanded

fn substr(s: &str, until: uint) -> &str;                // elided
fn substr<'a>(s: &'a str, until: uint) -> &'a str;      // expanded

fn get_str() -> &str;                                   // ILLEGAL

fn frob(s: &str, t: &str) -> &str;                      // ILLEGAL

fn get_mut(&mut self) -> &mut T;                        // elided
fn get_mut<'a>(&'a mut self) -> &'a mut T;              // expanded

fn args<T:ToCStr>(&mut self, args: &[T]) -> &mut Command                  // elided
fn args<'a, 'b, T:ToCStr>(&'a mut self, args: &'b [T]) -> &'a mut Command // expanded

fn new(buf: &mut [u8]) -> BufWriter;                    // elided
fn new<'a>(buf: &'a mut [u8]) -> BufWriter<'a>          // expanded

impl Reader for BufReader { ... }                       // elided
impl<'a> Reader for BufReader<'a> { .. }                // expanded

impl Reader for (&str, &str) { ... }                    // elided
impl<'a, 'b> Reader for (&'a str, &'b str) { ... }      // expanded

impl StrSlice for &str { ... }                          // elided
impl<'a> StrSlice<'a> for &'a str { ... }               // expanded

trait Bar<'a> { fn bound(&'a self) -> &int { ... }    fn fresh(&self) -> &int { ... } }           // elided
trait Bar<'a> { fn bound(&'a self) -> &'a int { ... } fn fresh<'b>(&'b self) -> &'b int { ... } } // expanded

impl<'a> Bar<'a> for &'a str {
  fn bound(&'a self) -> &'a int { ... } fn fresh(&self) -> &int { ... }              // elided
}
impl<'a> Bar<'a> for &'a str {
  fn bound(&'a self) -> &'a int { ... } fn fresh<'b>(&'b self) -> &'b int { ... }    // expanded
}

// Note that when the impl reuses the same signature (with the same elisions)
// from the trait definition, the expanded forms will also match, and thus
// the `impl` will be compatible with the `trait`.

impl Bar for &str            { fn bound(&self) -> &int { ... } }           // elided
impl<'a> Bar<'a> for &'a str { fn bound<'b>(&'b self) -> &'b int { ... } } // expanded

// Note that the preceding example's expanded methods do not match the
// signatures from the above trait definition for `Bar`; in the general
// case, if the elided signatures between the `impl` and the `trait` do
// not match, an expanded `impl` may not be compatible with the given
// `trait` (and thus would not compile).

impl Bar for &str            { fn fresh(&self) -> &int { ... } }           // elided
impl<'a> Bar<'a> for &'a str { fn fresh<'b>(&'b self) -> &'b int { ... } } // expanded

impl Bar for &str {
  fn bound(&'a self) -> &'a int { ... } fn fresh(&self) -> &int { ... }    // ILLEGAL: unbound 'a
}

Error messages

Since the shorthand described above should eliminate most uses of explicit lifetimes, there is a potential “cliff”. When a programmer first encounters a situation that requires explicit annotations, it is important that the compiler gently guide them toward the concept of lifetimes.

An error can arise with the above shorthand only when the program elides an output lifetime and neither of the rules can determine how to annotate it.

For fn

The error message should guide the programmer toward the concept of lifetime by talking about borrowed values:

This function’s return type contains a borrowed value, but the signature does not say which parameter it is borrowed from. It could be one of a, b, or c. Mark the input parameter it borrows from using lifetimes, e.g. [generated example]. See [url] for an introduction to lifetimes.

This message is slightly inaccurate, since the presence of a lifetime parameter does not necessarily imply the presence of a borrowed value, but there are no known use-cases of phantom lifetime parameters.

For impl

The error case on impl is exceedingly rare: it requires (1) that the impl is for a trait with a lifetime argument, which is uncommon, and (2) that the Self type has multiple lifetime arguments.

Since there are no clear “borrowed values” for an impl, this error message speaks directly in terms of lifetimes. This choice seems warranted given that a programmer implementing a trait with lifetime parameters will almost certainly already understand lifetimes.

TraitName requires lifetime arguments, and the impl does not say which lifetime parameters of TypeName to use. Mark the parameters explicitly, e.g. [generated example]. See [url] for an introduction to lifetimes.

The impact

To assess the value of the proposed rules, we conducted a survey of the code defined in libstd (as opposed to the code it reexports). This corpus is large and central enough to be representative, but small enough to easily analyze.

We found that of the 169 lifetimes that currently require annotation for libstd, 147 would be elidable under the new rules, or 87%.

Note: this percentage does not include the large number of lifetimes that are already elided with today’s rules.

The detailed data is available at: https://gist.github.com/aturon/da49a6d00099fdb0e861

Drawbacks

Learning lifetimes

The main drawback of this change is pedagogical. If lifetime annotations are rarely used, newcomers may encounter error messages about lifetimes long before encountering lifetimes in signatures, which may be confusing. Counterpoints:

  • This is already the case, to some extent, with the current elision rules.

  • Most existing error messages are geared to talk about specific borrows not living long enough, pinpointing their locations in the source, rather than talking in terms of lifetime annotations. When the errors do mention annotations, it is usually to suggest specific ones.

  • The proposed error messages above will help programmers transition out of the fully elided regime when they first encounter a signature requiring it.

  • When combined with a good tutorial on the borrow/lifetime system (which should be introduced early in the documentation), the above should provide a reasonably gentle path toward using and understanding explicit lifetimes.

Programmers learn lifetimes once, but will use them many times. Better to favor long-term ergonomics, if a simple elision rule can cover 87% of current lifetime uses (let alone the currently elided cases).

Subtlety for non-& types

While the rules are quite simple and regular, they can be subtle when applied to types with lifetime positions. To determine whether the signature

fn foo(r: Bar) -> Bar

is actually using lifetimes via the elision rules, you have to know whether Bar has a lifetime parameter. But this subtlety already exists with the current elision rules. The benefit is that library types like Ref<'a, T> get the same status and ergonomics as built-ins like &'a T.

Alternatives

  • Do not include output lifetime elision for impl. Since traits with lifetime parameters are quite rare, this would not be a great loss, and would simplify the rules somewhat.

  • Only add elision rules for fn, in keeping with current practice.

  • Only add elision for explicit & pointers, eliminating one of the drawbacks mentioned above. Doing so would impose an ergonomic penalty on abstractions, though: Ref would be more painful to use than &.

Unresolved questions

The fn and impl cases tackled above offer the biggest bang for the buck for lifetime elision. But we may eventually want to consider other opportunities.

Double lifetimes

Another pattern that sometimes arises is types like &'a Foo<'a>. We could consider an additional elision rule that expands &Foo to &'a Foo<'a>.

However, such a rule could be easily added later, and it is unclear how common the pattern is, so it seems best to leave that for a later RFC.

Lifetime elision in structs

We may want to allow lifetime elision in structs, but the cost/benefit analysis is much less clear. In particular, it could require chasing an arbitrary number of (potentially private) struct fields to discover the source of a lifetime parameter for a struct. There are also some good reasons to treat elided lifetimes in structs as 'static.

Again, since shorthand can be added backwards-compatibly, it seems best to wait.

Summary

Closures should capture their upvars by value unless the ref keyword is used.

Motivation

For unboxed closures, we will need to syntactically distinguish between captures by value and captures by reference.

Detailed design

This is a small part of #114, split off to separate it from the rest of the discussion going on in that RFC.

Closures should capture their upvars (closed-over variables) by value unless the ref keyword precedes the opening | of the argument list. Thus |x| x + 2 will capture x by value (and thus, if x is not Copy, it will move x into the closure), but ref |x| x + 2 will capture x by reference.
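
A sketch of the two forms side by side, under the semantics proposed here:

fn main() {
    let v = vec![1i, 2, 3];

    {
        let borrows = ref || v.len(); // captures `v` by reference
        borrows();
    }

    // `v` is still usable here, since it was only borrowed:
    let consumes = || v.len(); // captures `v` by value, moving it in
    consumes();
    // `v` has been moved and can no longer be used
}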

In an unboxed-closures world, the immutability/mutability of the borrow (as the case may be) is inferred from the type of the closure: Fn captures by immutable reference, while FnMut captures by mutable reference. In a boxed-closures world, the borrows are always mutable.
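
The ref sigil proposed here did not ultimately ship; Rust later settled on the inverse arrangement (capture by reference by default, with a move opt-in). As a sketch in that later syntax, here is the explicit-reference workaround discussed in the drawbacks below: make the reference explicitly, then capture the reference itself by value.

fn main() {
    let data = vec![1, 2, 3];
    let r = &data;              // make the reference explicitly...
    let len = move || r.len();  // ...and capture the reference by value
    println!("{}", len());
    println!("{:?}", data);     // `data` itself was never moved
}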

Drawbacks

It may be that ref is unwanted complexity; it only changes the semantics of 10%-20% of closures, after all. This does not add any core functionality to the language, as a reference can always be made explicitly and then captured. However, there are a lot of closures, and the workaround to capture a reference by value is painful.

Alternatives

As above, the impact of not doing this is that reference semantics would have to be achieved through explicitly created references. However, the diff against current Rust under that approach was thousands of lines of pretty ugly code.

Another alternative would be to annotate each individual upvar with its capture semantics, like capture clauses in C++11. This proposal does not preclude adding that functionality should it be deemed useful in the future. Note that C++11 provides a syntax for capturing all upvars by reference, exactly as this proposal does.

Unresolved questions

None.

Summary

Require “anonymous traits”, i.e. impl MyStruct, to occur only in the same module in which MyStruct is defined.

Motivation

Before I can explain the motivation for this, I should provide some background as to how anonymous traits are implemented, and the sorts of bugs we see with the current behaviour. The conclusion will be that we effectively already only support impl MyStruct in the same module that MyStruct is defined, and making this a rule will simply give cleaner error messages.

  • The compiler first sees impl MyStruct during the resolve phase, specifically in Resolver::build_reduced_graph(), called by Resolver::resolve() in src/librustc/middle/resolve.rs. This is before any type checking (or type resolution, for that matter) is done, so the compiler trusts for now that MyStruct is a valid type.
  • If MyStruct is a path with more than one segment, such as mymod::MyStruct, it is silently ignored (how was this not flagged when the code was written??), which effectively causes static methods in such impls to be dropped on the floor. A silver lining here is that nothing is added to the current module namespace, so the shadowing bugs demonstrated in the next bullet point do not apply here. (To locate this bug in the code, find the match immediately following the FIXME (#3785) comment in resolve.rs.) This leads to the following:
mod break1 {
    pub struct MyGuy;

    impl MyGuy {
        pub fn do1() { println!("do 1"); }
    }
}

impl break1::MyGuy {
    fn do2() { println!("do 2"); }
}

fn main() {
    break1::MyGuy::do1();
    break1::MyGuy::do2();
}
<anon>:15:5: 15:23 error: unresolved name `break1::MyGuy::do2`.
<anon>:15     break1::MyGuy::do2();

as noticed by @huonw in https://github.com/rust-lang/rust/issues/15060.

  • If one does not exist, the compiler creates a submodule MyStruct of the current module, with kind ImplModuleKind. Static methods are placed into this module. If such a module already exists, the methods are appended to it, to support multiple impl MyStruct blocks within the same module. If a module exists that is not ImplModuleKind, the compiler signals a duplicate module definition error.
  • Notice at this point that if there is a use MyStruct, the compiler will act as though it is unaware of this. This is because imports are not resolved yet (they are in Resolver::resolve_imports(), called immediately after Resolver::build_reduced_graph() is called). In the final resolution step, MyStruct will be searched in the namespace of the current module, checking imports only as a fallback (and only in some contexts), so the use MyStruct is effectively shadowed. If there is an impl MyStruct in the file being imported from, the user expects that the new impl MyStruct will append to that one, just as if they were in the original file. This leads to the original bug report https://github.com/rust-lang/rust/issues/15060.
  • In fact, even if no methods from the import are used, the name MyStruct will not be associated with a type, so that
trait T {}
impl<U: T> Vec<U> {
    fn from_slice<'a>(x: &'a [uint]) -> Vec<uint> {
        fail!()
    }
}
fn main() { let r = Vec::from_slice(&[1u]); }
error: found module name used as a type: impl Vec<U>::Vec<U> (id=5)
impl<U: T> Vec<U>

which @Ryman noticed in https://github.com/rust-lang/rust/issues/15060. The reason for this is that in Resolver::resolve_crate(), the final step of Resolver::resolve(), the type of an anonymous impl is determined by NameBindings::def_for_namespace(TypeNS). This function searches the namespace TypeNS (which is not affected by imports) for a type; failing that it tries for a module; failing that it returns None. The result is that when typeck runs, it sees impl [module name] instead of impl [type name].

The main motivation of this RFC is to clear out these bugs, which do not make sense to a user of the language (and had me confused for quite a while).

A secondary motivation is to enforce consistency in code layout; anonymous traits are used the way that class methods are used in other languages, and the data and methods of a struct should be defined nearby.

Detailed design

I propose two changes to the language:

  • impl on multiple-ident paths such as impl mymod::MyStruct is disallowed. Since this currently surprises the user by having absolutely no effect for static methods, support for this is already broken.
  • impl MyStruct must occur in the same module that MyStruct is defined. This is to prevent the above problems with impl-across-modules. Migration path is for users to just move code between source files.

Drawbacks

Static methods on impls-away-from-definition never worked, while non-static methods can be implemented using non-anonymous traits. So there is no loss in expressivity. However, using a trait where before there was none may be clumsy, since it might not have a sensible name, and it must be explicitly imported by all users of the trait methods.

For example, in the stdlib src/libstd/io/fs.rs we see the code impl path::Path to attach (non-static) filesystem-related methods to the Path type. This would have to be done via a FsPath trait which is implemented on Path and exported alongside Path in the prelude.
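
A sketch of what that refactoring looks like, using today's std::path::Path and a hypothetical method in place of the real fs API:

use std::path::Path;

// The non-static methods formerly attached via `impl path::Path` move to a
// named extension trait, which every caller must import alongside `Path`.
trait FsPath {
    fn is_hidden(&self) -> bool;
}

impl FsPath for Path {
    fn is_hidden(&self) -> bool {
        self.file_name()
            .and_then(|n| n.to_str())
            .map_or(false, |n| n.starts_with('.'))
    }
}

fn main() {
    assert!(Path::new(".git").is_hidden());
}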

It is worth noting that this is the only instance of this RFC conflicting with current usage in the stdlib or compiler.

Alternatives

  • Leaving this alone and fixing the bugs directly. This is really hard. To do it properly, we would need to seriously refactor resolve.

Unresolved questions

None.

Summary

Introduce a new if let PAT = EXPR { BODY } construct. This allows for refutable pattern matching without the syntactic and semantic overhead of a full match, and without the corresponding extra rightward drift. Informally this is known as an “if-let statement”.

Motivation

Many times in the past, people have proposed various mechanisms for doing a refutable let-binding. None of them went anywhere, largely because the syntax wasn’t great, or because the suggestion introduced runtime failure if the pattern match failed.

This proposal ties the refutable pattern match to the pre-existing conditional construct (i.e. if statement), which provides a clear and intuitive explanation for why refutable patterns are allowed here (as opposed to a let statement which disallows them) and how to behave if the pattern doesn’t match.

The motivation for having any construct at all for this is to simplify the cases that today call for a match statement with a single non-trivial case. This is predominantly used for unwrapping Option<T> values, but can be used elsewhere.

The idiomatic solution today for testing and unwrapping an Option<T> looks like

match optVal {
    Some(x) => {
        doSomethingWith(x);
    }
    None => {}
}

This is unnecessarily verbose, with the None => {} (or _ => {}) case being required, and introduces unnecessary rightward drift (this introduces two levels of indentation where a normal conditional would introduce one).

The alternative approach looks like this:

if optVal.is_some() {
    let x = optVal.unwrap();
    doSomethingWith(x);
}

This is generally considered to be a less idiomatic solution than the match. It has the benefit of fixing rightward drift, but it ends up testing the value twice (which should be optimized away, but semantically speaking still happens), with the second test being a method that potentially introduces failure. From context, the failure won’t happen, but it still imposes a semantic burden on the reader. Finally, it requires having a pre-existing let-binding for the optional value; if the value is a temporary, then a new let-binding in the parent scope is required in order to be able to test and unwrap in two separate expressions.

The if let construct solves all of these problems, and looks like this:

if let Some(x) = optVal {
    doSomethingWith(x);
}

Detailed design

The if let construct is based on the precedent set by Swift, which introduced its own if let statement. In Swift, if let var = expr { ... } is directly tied to the notion of optional values, and unwraps the optional value that expr evaluates to. In this proposal, the equivalent is if let Some(var) = expr { ... }.

Given the following rough grammar for an if condition:

if-expr     = 'if' if-cond block else-clause?
if-cond     = expression
else-clause = 'else' block | 'else' if-expr

The grammar is modified to add the following productions:

if-cond = 'let' pattern '=' expression

The expression is restricted to disallow a trailing braced block (e.g. for struct literals) the same way the expression in the normal if statement is, to avoid ambiguity with the then-block.

Contrary to a let statement, the pattern in the if let expression allows refutable patterns. The compiler should emit a warning for an if let expression with an irrefutable pattern, with the suggestion that this should be turned into a regular let statement.
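
For example (a sketch; current rustc implements this warning as the irrefutable_let_patterns lint):

fn main() {
    let pair = (1, 2);
    // Warning: irrefutable `if let` pattern; a plain `let` is suggested.
    if let (a, b) = pair {
        println!("{} {}", a, b);
    }
}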

Like the for loop before it, this construct can be transformed in a syntax-lowering pass into the equivalent match statement. The expression is given to match and the pattern becomes a match arm. If there is an else block, that becomes the body of the _ => {} arm, otherwise _ => {} is provided.

Optionally, one or more else if (not else if let) blocks can be placed in the same match using pattern guards on _. This could be done to simplify the code when pretty-printing the expansion result. Otherwise, this is an unnecessary transformation.

Due to some uncertainty regarding potentially-surprising fallout of AST rewrites, and some worries about exhaustiveness-checking (e.g. a tautological if let would be an error, which may be unexpected), this is put behind a feature gate named if_let.

Examples

Source:

if let Some(x) = foo() {
    doSomethingWith(x)
}

Result:

match foo() {
    Some(x) => {
        doSomethingWith(x)
    }
    _ => {}
}

Source:

if let Some(x) = foo() {
    doSomethingWith(x)
} else {
    defaultBehavior()
}

Result:

match foo() {
    Some(x) => {
        doSomethingWith(x)
    }
    _ => {
        defaultBehavior()
    }
}

Source:

if cond() {
    doSomething()
} else if let Some(x) = foo() {
    doSomethingWith(x)
} else {
    defaultBehavior()
}

Result:

if cond() {
    doSomething()
} else {
    match foo() {
        Some(x) => {
            doSomethingWith(x)
        }
        _ => {
            defaultBehavior()
        }
    }
}

With the optional addition specified above:

if let Some(x) = foo() {
    doSomethingWith(x)
} else if cond() {
    doSomething()
} else if other_cond() {
    doSomethingElse()
}

Result:

match foo() {
    Some(x) => {
        doSomethingWith(x)
    }
    _ if cond() => {
        doSomething()
    }
    _ if other_cond() => {
        doSomethingElse()
    }
    _ => {}
}

Drawbacks

It’s one more addition to the grammar.

Alternatives

This could plausibly be done with a macro, but the invoking syntax would be pretty terrible and would largely negate the whole point of having this sugar.

Alternatively, this could not be done at all. We’ve been getting along just fine without it so far, but at the cost of making Option just a bit more annoying to work with.

Unresolved questions

It’s been suggested that alternates or pattern guards should be allowed. I think if you need those you could just go ahead and use a match, and that if let could be extended to support those in the future if a compelling use-case is found.

I don’t know how many match statements in our current code base could be replaced with this syntax. Probably quite a few, but it would be informative to have real data on this.

Summary

Rust’s support for pattern matching on slices has grown steadily and incrementally without a lot of oversight. There is concern that Rust is doing too much here, and that the complexity is not worth it. This RFC proposes to feature gate multiple-element slice matches in the head and middle positions ([xs.., 0, 0] and [0, xs.., 0]).

Motivation

There are some general reasons and one specific one: first, the implementation of Rust’s match machinery is notoriously complex and not well-loved, and removing features is seen as a valid way to reduce that complexity. Second, slice matching in particular is difficult to implement, while being of only moderate utility (there are many types of collections; slices just happen to be built into the language). Finally, the exhaustiveness check is not correct for slice patterns because of their complexity; it is not known whether it can be done correctly, nor whether it would be worth the effort to do so.

Detailed design

The advanced_slice_patterns feature gate will be added. When the compiler encounters multiple-element slice patterns in head or middle position, it will emit a warning or an error according to the current feature gate settings.

Drawbacks

It removes two features that some people like.

Alternatives

Fixing the exhaustiveness check would allow the feature to remain.

Unresolved questions

N/A

Summary

Add syntax sugar for importing a module and items in that module in a single view item.

Motivation

Make use clauses more concise.

Detailed design

The mod keyword may be used in a braced list of modules in a use item to mean the prefix module for that list. For example, writing use prefix::{mod, foo}; is equivalent to writing

use prefix;
use prefix::foo;

The mod keyword cannot be used outside of braces, nor can it be used inside braces which do not have a prefix path. Both of the following examples are illegal:

use module::mod;
use {mod, foo};

A programmer may write mod in a module list with only a single item. E.g., use prefix::{mod};, although this is considered poor style and may be forbidden by a lint. (The preferred version is use prefix;).
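
A runnable sketch of the sugar; note that when this feature later stabilized, the keyword used inside the braces became self rather than mod:

use std::collections::{self, HashMap};

fn main() {
    // `self` (this RFC's `mod`) imports the prefix module itself...
    let set: collections::HashSet<i32> = collections::HashSet::new();
    // ...while `HashMap` is imported directly from it.
    let map: HashMap<i32, i32> = HashMap::new();
    println!("{} {}", set.len(), map.len());
}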

Drawbacks

Another use of the mod keyword.

We introduce a way (the only way) to have paths in use items which do not correspond with paths which can be used in the program. For example, with use foo::bar::{mod, baz}; the programmer can use foo::bar::baz in their program but not foo::bar::mod (instead foo::bar is imported).

Alternatives

Don’t do this.

Unresolved questions

N/A

  • Start Date: 2014-07-16
  • RFC PR #: #169
  • Rust Issue #: https://github.com/rust-lang/rust/issues/16461

Summary

Change the rebinding syntax from use ID = PATH to use PATH as ID, so that paths all line up on the left side, and imported identifiers are all on the right side. Also modify extern crate syntax analogously, for consistency.

Motivation

Currently, the view items at the start of a module look something like this:

mod old_code {
  use a::b::c::d::www;
  use a::b::c::e::xxx;
  use yyy = a::b::yummy;
  use a::b::c::g::zzz;
}

This means that if you want to see what identifiers have been imported, your eyes need to scan back and forth on both the left-hand side (immediately beside the use) and the right-hand side (at the end of each line). In particular, note that yummy is not in scope within the body of old_code.

This RFC proposes changing the grammar of Rust so that the example above would look like this:

mod new_code {
  use a::b::c::d::www;
  use a::b::c::e::xxx;
  use a::b::yummy as yyy;
  use a::b::c::g::zzz;
}

There are two benefits we can see by comparing mod old_code and mod new_code:

  • As alluded to above, now all of the imported identifiers are on the right-hand side of the block of view items.

  • Additionally, the left-hand side looks much more regular, since one sees the straight lines of a::b:: characters all the way down, which makes the actual differences between the different paths more visually apparent.

Detailed design

Currently, the grammar for use statements is something like:

  use_decl : "pub" ? "use" [ ident '=' path
                            | path_glob ] ;

Likewise, the grammar for extern crate declarations is something like:

  extern_crate_decl : "extern" "crate" ident [ '(' link_attrs ')' ] ? [ '=' string_lit ] ? ;

This RFC proposes changing the grammar for use statements to something like:

  use_decl : "pub" ? "use" [ path "as" ident
                            | path_glob ] ;

and the grammar for extern crate declarations to something like:

  extern_crate_decl : "extern" "crate" [ string_lit "as" ] ? ident [ '(' link_attrs ')' ] ? ;

Both use and pub use forms are changed to use path as ident instead of ident = path. The form use path as ident has the same constraints and meaning that use ident = path has today.

Nothing about path globs is changed; the view items that use ident = path are disjoint from the view items that use path globs, and that continues to be the case under path as ident.

The old syntaxes "use" ident '=' path and "extern" "crate" ident '=' string_lit are removed (or at least deprecated).
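
Since this is the form that Rust ultimately adopted, a small runnable example:

use std::fmt::Result as FmtResult;
use std::io::Result as IoResult;

// Both `Result` types can now coexist under distinct local names.
fn fmt_ok() -> FmtResult { Ok(()) }
fn io_ok() -> IoResult<()> { Ok(()) }

fn main() {
    assert!(fmt_ok().is_ok() && io_ok().is_ok());
}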

Drawbacks

  • pub use export = import_path may be preferred over pub use import_path as export since people are used to seeing the name exported by a pub item on the left-hand side of an = sign. (See “Have distinct rebinding syntaxes for use and pub use” below.)

  • The ‘as’ keyword is not currently used for any binding form in Rust. Adopting this RFC would change that precedent. (See “Change the signaling token” below.)

Alternatives

Keep things as they are

This just has the drawbacks outlined in the motivation: the left-hand side of the view items are less regular, and one needs to scan both the left- and right-hand sides to see all the imported identifiers.

Change the signaling token

Go ahead with the switch, so that the path is on the left-hand side and the imported identifier on the right, but use a different token than as to signal the rebinding.

For example, we could use @, as an analogy with its use as a binding operator in match expressions:

mod new_code {
  use a::b::c::d::www;
  use a::b::c::e::xxx;
  use a::b::yummy @ yyy;
  use a::b::c::g::zzz;
}

(I do not object to path @ ident, though I find it somehow more “line-noisy” than as in this context.)

Or, we could use =:

mod new_code {
  use a::b::c::d::www;
  use a::b::c::e::xxx;
  use a::b::yummy = yyy;
  use a::b::c::g::zzz;
}

(I do object to path = ident, since typically when = is used to bind, the identifier being bound occurs on the left-hand side.)

Or, we could use :, by (weak) analogy with struct pattern syntax:

mod new_code {
  use a::b::c::d::www;
  use a::b::c::e::xxx;
  use a::b::yummy : yyy;
  use a::b::c::g::zzz;
}

(I cannot figure out if this is genius or madness. Probably madness, especially if one is allowed to omit the whitespace around the :)

Have distinct rebinding syntaxes for use and pub use

If people really like having ident = path for pub use, by the reasoning presented above that people are used to seeing the name exported by a pub item on the left-hand side of an = sign, then we could support that by continuing to support pub use ident = path.

If we were to go down that route, I would prefer to have distinct notions of the exported name and imported name, so that:

pub use a = foo::bar; would actually import bar (and a would just be visible as an export), and then one could rebind for export and import simultaneously, like so: pub use exported_bar = foo::bar as imported_bar;

But really, is pub use foo::bar as a all that bad?

Allow extern crate ident as ident

As written, this RFC allows for two variants of extern_crate_decl:

extern crate old_name;
extern crate "old_name" as new_name;

These are just analogous to the current options that use = instead of as.

However, the RFC comment dialogue suggested also allowing a renaming form that does not use a string literal:

extern crate old_name as new_name;

I have no opinion on whether this should be added or not. Arguably this choice is orthogonal to the goals of this RFC (since, if this is a good idea, it could just as well be implemented with the = syntax). Perhaps it should just be filed as a separate RFC on its own.

Unresolved questions

  • In the revised extern crate form, is it best to put the link_attrs after the identifier, as written above? Or would it be better for them to come after the string_literal when using the extern crate string_literal as ident form?

Summary

Change pattern matching on an &mut T to &mut <pat>, away from its current &<pat> syntax.

Motivation

Pattern matching mirrors construction for almost all types, except &mut, which is constructed with &mut <expr> but destructured with &<pat>. This is almost certainly an unnecessary inconsistency.

This can and does lead to confusion, since people expect the pattern syntax to match construction, but a pattern like &mut (ref mut x, _) is actually currently a parse error:

fn main() {
    let &mut (ref mut x, _);
}
and-mut-pat.rs:2:10: 2:13 error: expected identifier, found path
and-mut-pat.rs:2     let &mut (ref mut x, _);
                          ^~~

Another (rarer) way it can be confusing is the pattern &mut x. It is expected that this binds x to the contents of the &mut T pointer… which it does, but as a mutable binding (it is parsed as &(mut x)), meaning something like

for &mut x in some_iterator_over_and_mut {
    println!("{}", x)
}

gives an unused mutability warning. NB. it’s somewhat rare that one would want to pattern match to directly bind a name to the contents of a &mut (since the normal reason to have a &mut is to mutate the thing it points at, but this pattern is (byte) copying the data out, both before and after this change), but it can occur if a type only offers a &mut iterator, i.e. types for which a & iterator would be no more flexible than the &mut one.

Detailed design

Add <pat> := &mut <pat> to the pattern grammar, and require that it is used when matching on a &mut T.
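
A sketch of the rule as it works in post-RFC Rust, using the tuple pattern from the motivation above:

fn main() {
    let mut pair = (1, 2);
    // The pattern now mirrors construction: `&mut` in the pattern
    // matches a `&mut` reference.
    let &mut (ref mut x, _) = &mut pair;
    *x += 10;
    assert_eq!(pair, (11, 2));
}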

Drawbacks

It makes matching through a &mut more verbose: for &mut (ref mut x, _) in v.mut_iter() instead of for &(ref mut x, _) in v.mut_iter().

Macros wishing to pattern match on either & or &mut need to handle each case, rather than performing both with a single &. However, macros handling these types already need special mut vs. not handling if they ever name the types, or if they use ref vs. ref mut subpatterns.

It also makes obtaining the current behaviour (binding by-value the contents of a reference to a mutable local) slightly harder. For a &mut T the pattern becomes &mut mut x, and, at the moment, for a &T, it must be matched with &x and then rebound with let mut x = x; (since disambiguating like &(mut x) doesn’t yet work). However, based on some loose grepping of the Rust repo, both of these are very rare.
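
A sketch of that rarer by-value case under the new rule:

fn main() {
    let mut n = 5;
    // `&mut mut x` destructures the reference and binds a mutable *copy*
    // of its contents; mutating `x` does not affect `n`.
    let &mut mut x = &mut n;
    x += 1;
    assert_eq!((x, n), (6, 5));
}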

Alternatives

None.

Unresolved questions

None.

  • Start Date: 2014-07-24
  • RFC PR #: https://github.com/rust-lang/rfcs/pull/184
  • Rust Issue #: https://github.com/rust-lang/rust/issues/16950

Summary

Add simple syntax for accessing values within tuples and tuple structs behind a feature gate.

Motivation

Right now accessing fields of tuples and tuple structs is incredibly painful—one must rely on pattern-matching alone to extract values. This became such a problem that twelve traits were created in the standard library (core::tuple::Tuple*) to make tuple value accesses easier, adding .valN(), .refN(), and .mutN() methods to help this. But this is not a very nice solution—it requires the traits to be implemented in the standard library, not the language, and for those traits to be imported on use. On the whole this is not a problem, because most of the time std::prelude::* is imported, but this is still a hack which is not a real solution to the problem at hand. It also only supports tuples of length up to twelve, which is normally not a problem but emphasises how bad the current situation is.

Detailed design

Add syntax of the form <expr>.<integer> for accessing values within tuples and tuple structs. This (and the functionality it provides) would only be allowed when the feature gate tuple_indexing is enabled. This syntax is recognised wherever an unsuffixed integer literal is found in place of the normal field or method name expected when accessing fields with .. Because the parser would be expecting an integer, not a float, an expression like expr.0.1 would be a syntax error (because 0.1 would be treated as a single token).
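
A sketch of the lexing wrinkle, written for a later Rust in which this feature is stable:

fn main() {
    let t = ((1, 2), 3);
    // `0.1` would lex as a float literal, so under this RFC nested access
    // needs parentheses:
    let inner = (t.0).1;
    assert_eq!(inner, 2);
    // (Current compilers special-case this in the lexer, so `t.0.1`
    // also works today.)
    assert_eq!(t.0.1, 2);
}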

Tuple/tuple struct field access behaves the same way as accessing named fields on normal structs:

// With tuple struct
struct Foo(int, int);
let mut foo = Foo(3, -15);
foo.0 = 5;
assert_eq!(foo.0, 5);

// With normal struct
struct Foo2 { _0: int, _1: int }
let mut foo2 = Foo2 { _0: 3, _1: -15 };
foo2._0 = 5;
assert_eq!(foo2._0, 5);

Effectively, a tuple or tuple struct field is just a normal named field with an integer for a name.

Drawbacks

This adds more complexity that is not strictly necessary.

Alternatives

Stay with the status quo. Either recommend using a struct with named fields or suggest using pattern-matching to extract values. If extracting individual fields of tuples is really necessary, the TupleN traits could be used instead, and something like #[deriving(Tuple3)] could possibly be added for tuple structs.

Unresolved questions

None.

Summary

  • Remove the special-case bound 'static and replace with a generalized lifetime bound that can be used on objects and type parameters.
  • Remove the rules that aim to prevent references from being stored into objects and replace with a simple lifetime check.
  • Tighten up type rules pertaining to reference lifetimes and well-formed types containing references.
  • Introduce explicit lifetime bounds ('a:'b), with the meaning that the lifetime 'a outlives the lifetime 'b. These exist today but are always inferred; this RFC adds the ability to specify them explicitly, which is sometimes needed in more complex cases.

Motivation

Currently, the type system is not supposed to allow references to escape into object types. However, there are various bugs where it fails to prevent this from happening. Moreover, it is very useful (and frequently necessary) to store a reference into an object. Finally, the current treatment of generic types is in some cases naive and not obviously sound.

Detailed design

Lifetime bounds on parameters

The heart of the new design is the concept of a lifetime bound. In fact, this (sort of) exists today in the form of the 'static bound:

 fn foo<A:'static>(x: A) { ... }

Here, the notation 'static means “all borrowed content within A outlives the lifetime 'static”. (Note that when we say that something outlives a lifetime, we mean that it lives at least that long. In other words, for any lifetime 'a, 'a outlives 'a. This is similar to how we say that every type T is a subtype of itself.)

In the newer design, it is possible to use an arbitrary lifetime as a bound, and not just 'static:

 fn foo<'a, A:'a>(x: A) { ... }

Explicit lifetime bounds are in fact only rarely necessary, for two reasons:

  1. The compiler is often able to infer this relationship from the argument and return types. More on this below.
  2. It is only important to bound the lifetime of a generic type like A when one of two things is happening (and both of these are cases where the inference generally is sufficient):
    • A borrowed pointer to an A instance (i.e., value of type &A) is being consumed or returned.
    • A value of type A is being closed over into an object reference (or closure, which per the unboxed closures RFC is really the same thing).

Note that, per RFC 11, these lifetime bounds may appear in types as well (this is important later on). For example, an iterator might be declared:

struct Items<'a, T:'a> {
    v: &'a Collection<T>
}

Here, the constraint T:'a indicates that the data being iterated over must live at least as long as the collection (logically enough).

Lifetime bounds on object types

Like parameters, all object types have a lifetime bound. Unlike parameter types, however, object types are required to have exactly one bound. This bound can be either specified explicitly or derived from the traits that appear in the object type. In general, the rule is as follows:

  • If an explicit bound is specified, use that.
  • Otherwise, let S be the set of lifetime bounds we can derive from the traits that appear in the object type.
  • If S contains 'static, use 'static.
  • Otherwise, if S is a singleton set, use its sole member.
  • Otherwise, error.

Here are some examples:

trait IsStatic : 'static { }
trait Is<'a> : 'a { }

// Type               Bounds
// IsStatic           'static
// Is<'a>             'a
// IsStatic+Is<'a>    'static+'a
// IsStatic+'a        'static+'a
// IsStatic+Is<'a>+'b 'static+'a+'b

Object types must have exactly one bound – zero bounds is not acceptable. Therefore, if an object type with no derivable bounds appears in a type definition, an explicit bound must be written; in fn signatures, fresh lifetime variables are supplied, consistent with the usual elision rules:

trait Writer { /* no derivable bounds */ }
struct Foo<'a> {
    a: Box<Writer>,      // Error: try Box<Writer+'static> or Box<Writer+'a>
    b: Box<Writer+Send>, // OK: Send implies 'static
    c: &'a Writer,       // Error: try &'a (Writer+'a)
}

fn foo(a: Box<Writer>, // OK: Sugar for Box<Writer+'a> where 'a fresh
       b: &Writer)     // OK: Sugar for &'b (Writer+'c) where 'b, 'c fresh
{ ... }

This kind of annotation can seem a bit tedious when using object types extensively, though type aliases can help quite a bit:

type WriterObj = Box<Writer+'static>;
type WriterRef<'a> = &'a (Writer+'a);

The unresolved questions section discusses possible ways to lighten this burden.

See Appendix B for the motivation on why object types are permitted to have exactly one lifetime bound.

Specifying relations between lifetimes

Currently, when a type or fn has multiple lifetime parameters, there is no facility to explicitly specify a relationship between them. For example, in a function like this:

fn foo<'a, 'b>(...) { ... }

the lifetimes 'a and 'b are declared as independent. In some cases, though, it can be important that there be a relation between them. In most cases, these relationships can be inferred (and in fact are inferred today, see below), but it is useful to be able to state them explicitly (and necessary in some cases, see below).

A lifetime bound is written 'a:'b and it means that “'a outlives 'b”. For example, if foo were declared like so:

fn foo<'x, 'y:'x>(...) { ... }

that would indicate that the lifetime 'x was shorter than (or equal to) 'y.

The “type must outlive” and well-formedness relation

Many of the rules to come make use of a “type must outlive” relation, written T outlives 'a. This relation means primarily that all borrowed data in T is known to have a lifetime of at least 'a (hence the name). However, the relation also guarantees various basic lifetime constraints are met. For example, for every reference type &'b U that is found within T, it would be required that U outlives 'b (and that 'b outlives 'a).

In fact, T outlives 'a is defined in terms of another function, WF(T:'a), which yields up a list of lifetime relations that must hold for T to be well-formed and to outlive 'a. It is not necessary to understand the details of this relation in order to follow the rest of the RFC, so I will defer its precise specification to an appendix below.

For this section, it suffices to give some examples:

// int always outlives any region
WF(int : 'a) = []

// a reference with lifetime 'a outlives 'b if 'a outlives 'b
WF(&'a int : 'b) = ['a : 'b]

// the outer reference must outlive 'c, and the inner reference
// must outlive the outer reference
WF(&'a &'b int : 'c) = ['a : 'c, 'b : 'a]

// Object type with bound 'static
WF(SomeTrait+'static : 'a) = ['static : 'a]

// Object type with bound 'a 
WF(SomeTrait+'a : 'b) = ['a : 'b]

Whenever data of type T is closed over to form an object, the type checker will require that T outlives 'a where 'a is the primary lifetime bound of the object type.

Rules for types to be well-formed

Currently we do not apply any tests to the types that appear in type declarations. Per RFC 11, however, this should change, as we intend to enforce trait bounds on types, wherever those types appear. Similarly, we should be requiring that types are well-formed with respect to the WF function. This means that a type like the following would be illegal without a lifetime bound on the type parameter T:

struct Ref<'a, T> { c: &'a T }

This is illegal because the field c has type &'a T, which is only well-formed if T:'a. Per usual practice, this RFC does not propose any form of inference on struct declarations and instead requires all conditions to be spelled out (this is in contrast to fns and methods, see below).

Rules for expression type validity

We should add the condition that for every expression with lifetime 'e and type T, then T outlives 'e. We already enforce this in many special cases but not uniformly.

Inference

The compiler will infer lifetime bounds on both type parameters and region parameters as follows. Within a function or method, we apply the wellformedness function WF to the type of each function or method parameter. This yields up a set of relations that must hold. The idea here is that the caller could not have type checked unless the types of the arguments were well-formed, which implies that the callee can assume that those well-formedness constraints hold.

As an example, in the following function:

fn foo<'a, A>(x: &'a A) { ... }

the callee here can assume that the type parameter A outlives the lifetime 'a, even though that was not explicitly declared.

Note that the inference also pulls in constraints that were declared on the types of arguments. So, for example, if there is a type Items declared as follows:

struct Items<'a, T:'a> { ... }

And a function that takes an argument of type Items:

fn foo<'a, T>(x: Items<'a, T>) { ... }

The inference rules will conclude that T:'a because the Items type was declared with that bound.

In practice, these inference rules largely remove the need to manually declare lifetime relations on types. When porting the existing library and rustc over to these rules, I had to add explicit lifetime bounds to exactly one function (but several types, almost exclusively iterators).

Note that this sort of inference is already done. This RFC simply proposes a more extensive version that also includes bounds of the form X:'a, where X is a type parameter.

What does all this mean in practice?

This RFC has a lot of details. The main implications for end users are:

  1. Object types must specify a lifetime bound when they appear in a type. This most commonly means changing Box<Trait> to Box<Trait+'static> and &'a Trait to &'a (Trait+'a).

  2. For types that contain references to generic types, lifetime bounds are needed in the type definition. This comes up most often in iterators:

    struct Items<'a, T:'a> {
        x: &'a [T]
    }
    

    Here, the presence of &'a [T] within the type definition requires that the type checker can show that T outlives 'a, which in turn requires the bound T:'a on the type definition. These bounds are rarely needed outside of type definitions, because they are almost always implied by the types of the arguments.

  3. It is sometimes, but rarely, necessary to use lifetime bounds, specifically around double indirections (references to references, often the second reference is contained within a struct). For example:

    struct GlobalContext<'global> {
        arena: &'global Arena
    }
    
    struct LocalContext<'local, 'global:'local> {
        x: &'local mut GlobalContext<'global>
    }
    

    Here, we must know that the lifetime 'global outlives 'local in order for this type to be well-formed.

Phasing

Some parts of this RFC require new syntax and thus must be phased in. The current plan is to divide the implementation into three parts:

  1. Implement support for everything in this RFC except for region bounds and requiring that every expression type be well-formed. Enforcing the latter constraint leads to type errors that require lifetime bounds to resolve.
  2. Implement support for 'a:'b notation to be parsed under a feature gate issue_5723_bootstrap.
  3. Implement the final bits of the RFC:
    • Bounds on lifetime parameters
    • Wellformedness checks on every expression
    • Wellformedness checks in type definitions

Parts 1 and 2 can be landed simultaneously, but part 3 requires a snapshot. Parts 1 and 2 have largely been written. Depending on precisely how the timing works out, it might make sense to just merge parts 1 and 3.

Drawbacks / Alternatives

If we do not implement some solution, we could continue with the current approach (but patched to be sound) of banning references from being closed over in object types. I consider this a non-starter.

Unresolved questions

Inferring wellformedness bounds

Under this RFC, it is required to write bounds on struct types which are in principle inferable from their contents. For example, iterators tend to follow a pattern like:

struct Items<'a, T:'a> {
    x: &'a [T]
}

Note that T is bounded by 'a. It would be possible to infer these bounds, but I’ve stuck to our current principle that type definitions are always fully spelled out. The danger of inference is that it becomes unclear why a particular constraint exists if one must traverse the type hierarchy deeply to find its origin. This could potentially be addressed with better error messages, though our track record for lifetime error messages is not very good so far.

Also, there is a potential interaction between this sort of inference and the description of default trait bounds below.

Default trait bounds

When referencing a trait object, it is almost always the case that one follows certain fixed patterns:

  • Box<Trait+'static>
  • Rc<Trait+'static> (once DST works)
  • &'a (Trait+'a)
  • and so on.

You might think that we should simply provide some kind of defaults that are sensitive to where the Trait appears. The same is probably true of struct type parameters (in other words, &'a SomeStruct<'a> is a very common pattern).

However, there are complications:

  • What about a type like struct Ref<'a, T:'a> { x: &'a T }? Ref<'a, Trait> should really work the same way as &'a Trait. One way that I can see to do this is to drive the defaulting based on the default trait bounds of the T type parameter – but if we do that, it is both a non-local default (you have to consult the definition of Ref) and interacts with the potential inference described in the previous section.

  • There are reasons to want a type like Box<Trait+'a>. For example, the macro parser includes a function like:

    fn make_macro_ext<'cx>(cx: &'cx Context, ...) -> Box<MacroExt+'cx>
    

    In other words, this function returns an object that closes over the macro context. In such a case, if Box<MacroExt> implies a static bound, then taking ownership of this macro object would require a signature like:

    fn take_macro_ext<'cx>(b: Box<MacroExt+'cx>) {  }
    

    Note that the 'cx variable is only used in one place. Its purpose is just to disable the 'static default that would otherwise be inserted. (A concrete sketch of this pattern follows the list.)
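
A sketch of that pattern in post-RFC syntax (dyn and an explicit impl lifetime; Context and the method bodies are invented for illustration):

trait MacroExt {
    fn expand(&self) -> String;
}

struct Context {
    name: String,
}

impl<'cx> MacroExt for &'cx Context {
    fn expand(&self) -> String {
        format!("expanded in {}", self.name)
    }
}

// The returned object closes over the borrowed context, so its bound is
// 'cx rather than the 'static that `Box<dyn MacroExt>` would default to.
fn make_macro_ext<'cx>(cx: &'cx Context) -> Box<dyn MacroExt + 'cx> {
    Box::new(cx)
}

fn take_macro_ext<'cx>(b: Box<dyn MacroExt + 'cx>) -> String {
    b.expand()
}

fn main() {
    let cx = Context { name: "demo".to_string() };
    println!("{}", take_macro_ext(make_macro_ext(&cx)));
}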

Appendix: Definition of the outlives relation and well-formedness

To make this more specific, we can “formally” model the Rust type system as:

T = scalar (int, uint, fn(...))   // Boring stuff
  | *const T                      // Unsafe pointer
  | *mut T                        // Unsafe pointer
  | Id<P>                         // Nominal type (struct, enum)
  | &'x T                         // Reference
  | &'x mut T                     // Mutable reference
  | {TraitReference<P>}+'x        // Object type
  | X                             // Type variable
P = {'x} + {T}

We can define a function WF(T : 'a) which, given a type T and lifetime 'a yields a list of 'b:'c or X:'d pairs. For each pair 'b:'c, the lifetime 'b must outlive the lifetime 'c for the type T to be well-formed in a location with lifetime 'a. For each pair X:'d, the type parameter X must outlive the lifetime 'd.

  • WF(int : 'a) yields an empty list
  • WF(X:'a) where X is a type parameter yields (X:'a).
  • WF(Foo<P>:'a) where Foo<P> is an enum or struct type yields:
    • For each lifetime parameter 'b that is contravariant or invariant, 'b : 'a.
    • For each type parameter T that is covariant or invariant, the results of WF(T : 'a).
    • The lifetime bounds declared on Foo’s lifetime or type parameters.
    • The reasoning here is that if we can reach borrowed data with lifetime 'a through Foo<'a>, then 'a must be contra- or invariant. Covariant lifetimes only occur in “setter” situations. Analogous reasoning applies to the type case.
  • WF(T:'a) where T is an object type:
    • For the primary bound 'b, 'b : 'a.
    • For each derived bound 'c of T, 'b : 'c
      • Motivation: The primary bound of an object type implies that all other bounds are met. This simplifies some of the other formulations and does not represent a loss of expressiveness.

We can then say that T outlives 'a if all lifetime relations returned by WF(T:'a) hold.

Appendix B: Why object types must have exactly one bound

The motivation is that handling multiple bounds is overwhelmingly complicated to reason about and implement. In various places, constraints arise of the form all i. exists j. R[i] <= R[j], where R is a list of lifetimes. This is challenging for lifetime inference, since there are many options for it to choose from, and thus inference is no longer a fixed-point iteration. Moreover, it doesn’t seem to add any particular expressiveness.

The places where this becomes important are:

  • Checking lifetime bounds when data is closed over into an object type
  • Subtyping between object types, which would most naturally be contravariant in the lifetime bound

Similarly, requiring that the “master” bound on object lifetimes outlives all other bounds also aids inference. Now, given a type like the following:

trait Foo<'a> : 'a { }
trait Bar<'b> : 'b { }

...

let x: Box<Foo<'a>+Bar<'b>>

the inference engine can create a fresh lifetime variable '0 for the master bound and then say that '0:'a and '0:'b. Without the requirement that '0 be a master bound, it would be somewhat unclear how '0 relates to 'a and 'b (in fact, there would be no necessary relation). But if there is no necessary relation, then when closing over data, one would have to ensure that the closed over data outlives all derivable lifetime bounds, which again creates a constraint of the form all i. exists j..

Summary

The #[cfg(...)] attribute provides a mechanism for conditional compilation of items in a Rust crate. This RFC proposes to change the syntax of #[cfg] to make more sense as well as enable expansion of the conditional compilation system to attributes while maintaining a single syntax.

Motivation

In the current implementation, #[cfg(...)] takes a comma-separated list of key, key = "value", not(key), or not(key = "value"). An individual #[cfg(...)] attribute “matches” if all of the contained cfg patterns match the compilation environment, and an item is preserved if it either has no #[cfg(...)] attributes or if any of the #[cfg(...)] attributes present match.

This is problematic for several reasons:

  • It is excessively verbose in certain situations. For example, implementing the equivalent of (a AND (b OR c OR d)) requires three separate attributes and a to be duplicated in each.
  • It differs from all other attributes in that all #[cfg(...)] attributes on an item must be processed together instead of in isolation. This change will move #[cfg(...)] closer to implementation as a normal syntax extension.

Detailed design

The <p> inside of #[cfg(<p>)] will be called a cfg pattern and have a simple recursive syntax:

  • key is a cfg pattern and will match if key is present in the compilation environment.
  • key = "value" is a cfg pattern and will match if a mapping from key to value is present in the compilation environment. At present, key-value pairs only exist for compiler defined keys such as target_os and endian.
  • not(<p>) is a cfg pattern if <p> is and matches if <p> does not match.
  • all(<p>, ...) is a cfg pattern if all of the comma-separated <p>s are cfg patterns, and it matches if all of them match.
  • any(<p>, ...) is a cfg pattern if all of the comma-separated <p>s are cfg patterns, and it matches if any of them match.

If an item is tagged with #[cfg(<p>)], that item will be stripped from the AST if the cfg pattern <p> does not match.
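
For instance, the motivating (a AND (b OR c OR d)) example becomes a single attribute; a, b, c, and d here stand in for real cfg keys:

// Kept only when `a` is set together with at least one of `b`, `c`, `d`.
#[cfg(all(a, any(b, c, d)))]
fn item() {}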

One implementation hazard is that the semantics of

#[cfg(a)]
#[cfg(b)]
fn foo() {}

will change from “include foo if either of a and b are present in the compilation environment” to “include foo if both of a and b are present in the compilation environment”. To ease the transition, the old semantics of multiple #[cfg(...)] attributes will be maintained as a special case, with a warning. After some reasonable period of time, the special case will be removed.

In addition, #[cfg(a, b, c)] will be accepted with a warning and be equivalent to #[cfg(all(a, b, c))]. Again, after some reasonable period of time, this behavior will be removed as well.

The cfg!() syntax extension will be modified to accept cfg patterns as well. A #[cfg_attr(<p>, <attr>)] syntax extension will be added (PR 16230) which will expand to #[<attr>] if the cfg pattern <p> matches. The test harness’s #[ignore] attribute will have its built-in cfg filtering functionality stripped in favor of #[cfg_attr(<p>, ignore)].
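
A sketch of the cfg_attr form that replaces the built-in cfg filtering on #[ignore]:

// Run this test everywhere except Windows, where it is ignored.
#[test]
#[cfg_attr(target_os = "windows", ignore)]
fn behavior_not_expected_on_windows() {
    assert!(cfg!(not(target_os = "windows")));
}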

Drawbacks

While the implementation of this change in the compiler will be straightforward, the effects on downstream code will be significant, especially in the standard library.

Alternatives

all and any could be renamed to and and or, though I feel that the proposed names read better with the function-like syntax and are consistent with Iterator::all and Iterator::any.

Issue #2119 proposed the addition of || and && operators and parentheses to the attribute syntax to result in something like #[cfg(a || (b && c))]. I don’t favor this proposal since it would result in a major change to the attribute syntax for relatively little readability gain.

Unresolved questions

How long should multiple #[cfg(...)] attributes on a single item be forbidden? It should probably be at least until after 0.12 releases.

Should we permanently keep the behavior of treating #[cfg(a, b)] as #[cfg(all(a, b))]? It is the common case, and adding this interpretation can reduce the noise level a bit. On the other hand, it may be a bit confusing to read, as it’s not immediately clear whether it will be processed as all(..) or any(..).

Summary

This RFC extends traits with associated items, which make generic programming more convenient, scalable, and powerful. In particular, traits will consist of a set of methods, together with:

  • Associated functions (already present as “static” functions)
  • Associated consts
  • Associated types
  • Associated lifetimes

These additions make it much easier to group together a set of related types, functions, and constants into a single package.

This RFC also provides a mechanism for multidispatch traits, where the impl is selected based on multiple types. The connection to associated items will become clear in the detailed text below.

Note: This RFC was originally accepted before RFC 246 introduced the distinction between const and static items. The text has been updated to clarify that associated consts will be added rather than statics, and to provide a summary of restrictions on the initial implementation of associated consts. Other than that modification, the proposal has not been changed to reflect newer Rust features or syntax.

Motivation

A typical example where associated items are helpful is data structures like graphs, which involve at least three types: nodes, edges, and the graph itself.

In today’s Rust, to capture graphs as a generic trait, you have to take the additional types associated with a graph as parameters:

trait Graph<N, E> {
    fn has_edge(&self, &N, &N) -> bool;
    ...
}

The fact that the node and edge types are parameters is confusing, since any concrete graph type is associated with a unique node and edge type. It is also inconvenient, because code working with generic graphs is likewise forced to parameterize, even when not all of the types are relevant:

fn distance<N, E, G: Graph<N, E>>(graph: &G, start: &N, end: &N) -> uint { ... }

With associated types, the graph trait can instead make clear that the node and edge types are determined by any impl:

trait Graph {
    type N;
    type E;
    fn has_edge(&self, &N, &N) -> bool;
}

and clients can abstract over them all at once, referring to them through the graph type:

fn distance<G: Graph>(graph: &G, start: &G::N, end: &G::N) -> uint { ... }
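
A runnable sketch of the whole arrangement in the syntax Rust eventually adopted (MyGraph and the method bodies are invented; the G::N paths work as shown):

trait Graph {
    type N;
    type E;
    fn has_edge(&self, a: &Self::N, b: &Self::N) -> bool;
}

struct MyGraph;

impl Graph for MyGraph {
    // The impl fixes the node and edge types once and for all.
    type N = u32;
    type E = (u32, u32);
    fn has_edge(&self, a: &u32, b: &u32) -> bool {
        a != b
    }
}

// Clients name the associated types through the graph type itself.
fn distance<G: Graph>(graph: &G, start: &G::N, end: &G::N) -> usize {
    if graph.has_edge(start, end) { 1 } else { 0 }
}

fn main() {
    println!("{}", distance(&MyGraph, &1, &2));
}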

The following subsections expand on the above benefits of associated items, as well as some others.

Associated types: engineering benefits for generics

As the graph example above illustrates, associated types do not increase the expressiveness of traits per se, because you can always use extra type parameters to a trait instead. However, associated types provide several engineering benefits:

  • Readability and scalability

    Associated types make it possible to abstract over a whole family of types at once, without having to separately name each of them. This improves the readability of generic code (like the distance function above). It also makes generics more “scalable”: traits can incorporate additional associated types without imposing an extra burden on clients that don’t care about those types.

    In today’s Rust, by contrast, adding additional generic parameters to a trait often feels like a very “heavyweight” move.

  • Ease of refactoring/evolution

    Because users of a trait do not have to separately parameterize over its associated types, new associated types can be added without breaking all existing client code.

    In today’s Rust, by contrast, associated types can only be added by adding more type parameters to a trait, which breaks all code mentioning the trait.

Clearer trait matching

Type parameters to traits can either be “inputs” or “outputs”:

  • Inputs. An “input” type parameter is used to determine which impl to use.

  • Outputs. An “output” type parameter is uniquely determined by the impl, but plays no role in selecting the impl.

Input and output types play an important role for type inference and trait coherence rules, which is described in more detail later on.

In the vast majority of current libraries, the only input type is the Self type implementing the trait, and all other trait type parameters are outputs. For example, the trait Iterator<A> takes a type parameter A for the elements being iterated over, but this type is always determined by the concrete Self type (e.g. Items<u8>) implementing the trait: A is typically an output.

Additional input type parameters are useful for cases like binary operators, where you may want the impl to depend on the types of both arguments. For example, you might want a trait

trait Add<Rhs, Sum> {
    fn add(&self, rhs: &Rhs) -> Sum;
}

to view the Self and Rhs types as inputs, and the Sum type as an output (since it is uniquely determined by the argument types). This would allow impls to vary depending on the Rhs type, even though the Self type is the same:

impl Add<int, int> for int { ... }
impl Add<Complex, Complex> for int { ... }

Today’s Rust does not make a clear distinction between input and output type parameters to traits. If you attempted to provide the two impls above, you would receive an error like:

error: conflicting implementations for trait `Add`

This RFC clarifies trait matching by:

  • Treating all trait type parameters as input types, and
  • Providing associated types, which are output types.

In this design, the Add trait would be written and implemented as follows:

// Self and Rhs are *inputs*
trait Add<Rhs> {
    type Sum; // Sum is an *output*
    fn add(&self, &Rhs) -> Sum;
}

impl Add<int> for int {
    type Sum = int;
    fn add(&self, rhs: &int) -> int { ... }
}

impl Add<Complex> for int {
    type Sum = Complex;
    fn add(&self, rhs: &Complex) -> Complex { ... }
}

With this approach, a trait declaration like trait Add<Rhs> { ... } is really defining a family of traits, one for each choice of Rhs. One can then provide a distinct impl for every member of this family.

Expressiveness

Associated types, lifetimes, and functions can already be expressed in today’s Rust, though it is unwieldy to do so (as argued above).

But associated consts cannot be expressed using today’s traits.

For example, today’s Rust includes a variety of numeric traits, including Float, which must currently expose constants as static functions:

trait Float {
    fn nan() -> Self;
    fn infinity() -> Self;
    fn neg_infinity() -> Self;
    fn neg_zero() -> Self;
    fn pi() -> Self;
    fn two_pi() -> Self;
    ...
}

Because these functions cannot be used in constant expressions, the modules for float types also export a separate set of constants as consts, not using traits.

Associated constants would allow the consts to live directly on the traits:

trait Float {
    const NAN: Self;
    const INFINITY: Self;
    const NEG_INFINITY: Self;
    const NEG_ZERO: Self;
    const PI: Self;
    const TWO_PI: Self;
    ...
}

Why now?

The above motivations aside, it may not be obvious why adding associated types now (i.e., pre-1.0) is important. There are essentially two reasons.

First, the design presented here is not backwards compatible, because it re-interprets trait type parameters as inputs for the purposes of trait matching. The input/output distinction has several ramifications on coherence rules, type inference, and resolution, which are all described later on in the RFC.

Of course, it might be possible to give a somewhat less ideal design where associated types can be added later on without changing the interpretation of existing trait type parameters. For example, type parameters could be explicitly marked as inputs, and otherwise assumed to be outputs. That would be unfortunate, since associated types would also be outputs – leaving the language with two ways of specifying output types for traits.

But the second reason is for the library stabilization process:

  • Since most existing uses of trait type parameters are intended as outputs, they should really be associated types instead. Making promises about these APIs as they currently stand risks locking the libraries into a design that will seem obsolete as soon as associated items are added. Again, this risk could probably be mitigated with a different, backwards-compatible associated item design, but at the cost of cruft in the language itself.

  • The binary operator traits (e.g. Add) should be multidispatch. It does not seem possible to stabilize them now in a way that will support moving to multidispatch later.

  • There are some thorny problems in the current libraries, such as the _equiv methods accumulating in HashMap, that can be solved using associated items. (See “Defaults” below for more on this specific example.) Additional examples include traits for error propagation and for conversion (to be covered in future RFCs). Adding these traits would improve the quality and consistency of our 1.0 library APIs.

Detailed design

Trait headers

Trait headers are written according to the following grammar:

TRAIT_HEADER =
  'trait' IDENT [ '<' INPUT_PARAMS '>' ] [ ':' BOUNDS ] [ WHERE_CLAUSE ]

INPUT_PARAMS = INPUT_PARAM { ',' INPUT_PARAM }* [ ',' ]
INPUT_PARAM  = IDENT [ ':' BOUNDS ]

BOUNDS = BOUND { '+' BOUND }* [ '+' ]
BOUND  = IDENT [ '<' ARGS '>' ]

ARGS   = INPUT_ARGS
       | OUTPUT_CONSTRAINTS
       | INPUT_ARGS ',' OUTPUT_CONSTRAINTS

INPUT_ARGS = TYPE { ',' TYPE }*

OUTPUT_CONSTRAINTS = OUTPUT_CONSTRAINT { ',' OUTPUT_CONSTRAINT }*
OUTPUT_CONSTRAINT  = IDENT '=' TYPE

NOTE: The grammar for WHERE_CLAUSE and BOUND is explained in detail in the subsection “Constraining associated types” below.

All type parameters to a trait are considered inputs, and can be used to select an impl; conceptually, each distinct instantiation of the types yields a distinct trait. More details are given in the section “The input/output type distinction” below.

Trait bodies: defining associated items

Trait bodies are expanded to include three new kinds of items: consts, types, and lifetimes:

TRAIT = TRAIT_HEADER '{' TRAIT_ITEM* '}'
TRAIT_ITEM =
  ... <existing productions>
  | 'const' IDENT ':' TYPE [ '=' CONST_EXP ] ';'
  | 'type' IDENT [ ':' BOUNDS ] [ WHERE_CLAUSE ] [ '=' TYPE ] ';'
  | 'lifetime' LIFETIME_IDENT ';'

Traits already support associated functions, which had previously been called “static” functions.

The BOUNDS and WHERE_CLAUSE on associated types are obligations for the implementor of the trait, and assumptions for users of the trait:

trait Graph {
    type N: Show + Hash;
    type E: Show + Hash;
    ...
}

impl Graph for MyGraph {
    // Both MyNode and MyEdge must implement Show and Hash
    type N = MyNode;
    type E = MyEdge;
    ...
}

fn print_nodes<G: Graph>(g: &G) {
    // here, can assume G::N implements Show
    ...
}

Namespacing/shadowing for associated types

Associated types may have the same name as existing types in scope, except for type parameters to the trait:

struct Foo { ... }

trait Bar<Input> {
    type Foo; // this is allowed
    fn into_foo(self) -> Foo; // this refers to the trait's Foo

    type Input; // this is NOT allowed
}

By not allowing name clashes between input and output types, we keep open the possibility of later allowing syntax like:

Bar<Input=u8, Foo=uint>

where both input and output parameters are constrained by name. In any case, there is no compelling use for clashing input/output names.

In the case of a name clash like Foo above, if the trait needs to refer to the outer Foo for some reason, it can always do so by using a type alias external to the trait.
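
For example (a sketch; the alias name is arbitrary):

struct Foo { ... }

type FooAlias = Foo; // alias defined outside the trait

trait Bar {
    type Foo; // shadows `struct Foo` within the trait body
    fn dup(&self) -> FooAlias; // refers to the struct via the alias
}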

Defaults

Notice that associated consts and types both permit defaults, just as trait methods and functions can provide defaults.

Defaults are useful both as a code reuse mechanism, and as a way to expand the items included in a trait without breaking all existing implementors of the trait.

Defaults for associated types, however, present an interesting question: can default methods assume the default type? In other words, is the following allowed?

trait ContainerKey : Clone + Hash + Eq {
    type Query: Hash = Self;
    fn compare(&self, other: &Query) -> bool { self == other }
    fn query_to_key(q: &Query) -> Self { q.clone() }
}

impl ContainerKey for String {
    type Query = str;
    fn compare(&self, other: &str) -> bool {
        self.as_slice() == other
    }
    fn query_to_key(q: &str) -> String {
        q.into_string()
    }
}

impl<K,V> HashMap<K,V> where K: ContainerKey {
    fn find(&self, q: &K::Query) -> &V { ... }
}

In this example, the ContainerKey trait is used to associate a “Query” type (for lookups) with an owned key type. This resolves the thorny “equiv” problem in HashMap, where the hash map keys are Strings but you want to index the hash map with &str values rather than &String values, i.e. you want the following to work:

// H: HashMap<String, SomeType>
H.find("some literal")

rather than having to write

H.find(&"some literal".to_string())

The current solution involves duplicating the API surface with _equiv methods that use the somewhat subtle Equiv trait, but the associated type approach makes it easy to provide a simple, single API that covers the same use cases.

The defaults for ContainerKey just assume that the owned key and lookup key types are the same, but the default methods have to assume the default associated types in order to work.

For this to work, it must not be possible for an implementor of ContainerKey to override the default Query type while leaving the default methods in place, since those methods may no longer typecheck.

We deal with this in a very simple way:

  • If a trait implementor overrides any default associated types, they must also override all default functions and methods.

  • Otherwise, a trait implementor can selectively override individual default methods/functions, as they can today.

Trait implementations

Trait impl syntax is much the same as before, except that const, type, and lifetime items are allowed:

IMPL_ITEM =
  ... <existing productions>
  | 'const' IDENT ':' TYPE '=' CONST_EXP ';'
  | 'type' IDENT '=' TYPE ';'
  | 'lifetime' LIFETIME_IDENT '=' LIFETIME_REFERENCE ';'

Any type implementation must satisfy all bounds and where clauses in the corresponding trait item.

Referencing associated items

Associated items are referenced through paths. The expression path grammar was updated as part of UFCS, but to accommodate associated types and lifetimes we need to update the type path grammar as well.

The full grammar is as follows:

EXP_PATH
  = EXP_ID_SEGMENT { '::' EXP_ID_SEGMENT }*
  | TYPE_SEGMENT { '::' EXP_ID_SEGMENT }+
  | IMPL_SEGMENT { '::' EXP_ID_SEGMENT }+
EXP_ID_SEGMENT   = ID [ '::' '<' TYPE { ',' TYPE }* '>' ]

TY_PATH
  = TY_ID_SEGMENT { '::' TY_ID_SEGMENT }*
  | TYPE_SEGMENT { '::' TY_ID_SEGMENT }*
  | IMPL_SEGMENT { '::' TY_ID_SEGMENT }+

TYPE_SEGMENT = '<' TYPE '>'
IMPL_SEGMENT = '<' TYPE 'as' TRAIT_REFERENCE '>'
TRAIT_REFERENCE = ID [ '<' TYPE { ',' TYPE }* '>' ]

Here are some example paths, along with what they might be referencing:

// Expression paths ///////////////////////////////////////////////////////////////

a::b::c         // reference to a function `c` in module `a::b`
a::<T1, T2>     // the function `a` instantiated with type arguments `T1`, `T2`
Vec::<T>::new   // reference to the function `new` associated with `Vec<T>`
<Vec<T> as SomeTrait>::some_fn
                // reference to the function `some_fn` associated with `SomeTrait`,
                //   as implemented by `Vec<T>`
T::size_of      // the function `size_of` associated with the type or trait `T`
<T>::size_of    // the function `size_of` associated with `T` _viewed as a type_
<T as SizeOf>::size_of
                // the function `size_of` associated with `T`'s impl of `SizeOf`

// Type paths /////////////////////////////////////////////////////////////////////

a::b::C         // reference to a type `C` in module `a::b`
A<T1, T2>       // type A instantiated with type arguments `T1`, `T2`
Vec<T>::Iter    // reference to the type `Iter` associated with `Vec<T>`
<Vec<T> as SomeTrait>::SomeType
                // reference to the type `SomeType` associated with `SomeTrait`,
                //   as implemented by `Vec<T>`

Ways to reference items

Next, we’ll go into more detail on the meaning of each kind of path.

For the sake of discussion, we’ll suppose we’ve defined a trait like the following:

trait Container {
    type E;
    fn empty() -> Self;
    fn insert(&mut self, E);
    fn contains(&self, &E) -> bool where E: PartialEq;
    ...
}

impl<T> Container for Vec<T> {
    type E = T;
    fn empty() -> Vec<T> { Vec::new() }
    ...
}

Via an ID_SEGMENT prefix

When the prefix resolves to a type

The most common way to get at an associated item is through a type parameter with a trait bound:

fn pick<C: Container>(c: &C) -> Option<&C::E> { ... }

fn mk_with_two<C>() -> C where C: Container, C::E = uint {
    let mut cont = C::empty();  // reference to associated function
    cont.insert(0);
    cont.insert(1);
    cont
}
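
A caller that instantiates mk_with_two with a concrete container gets back a fully determined type. For example, assuming an impl of Container for Vec<uint> with E = uint:

let v: Vec<uint> = mk_with_two();
assert_eq!(v.len(), 2);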

For these references to be valid, the type parameter must be known to implement the relevant trait:

// Knowledge via bounds
fn pick<C: Container>(c: &C) -> Option<&C::E> { ... }

// ... or equivalently,  where clause
fn pick<C>(c: &C) -> Option<&C::E> where C: Container { ... }

// Knowledge via ambient constraints
struct TwoContainers<C1: Container, C2: Container>(C1, C2);
impl<C1: Container, C2: Container> TwoContainers<C1, C2> {
    fn pick_one(&self) -> Option<&C1::E> { ... }
    fn pick_other(&self) -> Option<&C2::E> { ... }
}

Note that Vec<T>::E and Vec::<T>::empty are also valid type and function references, respectively.

For cases like C::E or Vec<T>::E, the path begins with an ID_SEGMENT prefix that itself resolves to a type: both C and Vec<T> are types. In general, a path PREFIX::REST_OF_PATH where PREFIX resolves to a type is equivalent to using a TYPE_SEGMENT prefix <PREFIX>::REST_OF_PATH. So, for example, the following are all equivalent:

fn pick<C: Container>(c: &C) -> Option<&C::E> { ... }
fn pick<C: Container>(c: &C) -> Option<&<C>::E> { ... }
fn pick<C: Container>(c: &C) -> Option<&<<C>::E>> { ... }

The behavior of TYPE_SEGMENT prefixes is described in the next subsection.

When the prefix resolves to a trait

However, it is possible for an ID_SEGMENT prefix to resolve to a trait, rather than a type. In this case, the behavior of an ID_SEGMENT varies from that of a TYPE_SEGMENT in the following way:

// a reference Container::insert is roughly equivalent to:
fn trait_insert<C: Container>(c: &C, e: C::E);

// a reference <Container>::insert is roughly equivalent to:
fn object_insert<E>(c: &Container<E=E>, e: E);

That is, if PREFIX is an ID_SEGMENT that resolves to a trait Trait:

  • A path PREFIX::REST resolves to the item/path REST defined within Trait, while treating the type implementing the trait as a type parameter.

  • A path <PREFIX>::REST treats PREFIX as a (DST-style) type, and is hence usable only with trait objects. See the UFCS RFC for more detail.

Note that a path like Container::E, while grammatically valid, will fail to resolve since there is no way to tell which impl to use. A path like Container::empty, however, resolves to a function roughly equivalent to:

fn trait_empty<C: Container>() -> C;

Via a TYPE_SEGMENT prefix

The following text is slightly changed from the UFCS RFC.

When a path begins with a TYPE_SEGMENT, it is a type-relative path. If this is the complete path (e.g., <int>), then the path resolves to the specified type. If the path continues (e.g., <int>::size_of) then the next segment is resolved using the following procedure. The procedure is intended to mimic method lookup, and hence any changes to method lookup may also change the details of this lookup.

Given a path <T>::m::...:

  1. Search for members of inherent impls defined on T (if any) with the name m. If any are found, the path resolves to that item.

  2. Otherwise, let IN_SCOPE_TRAITS be the set of traits that are in scope and which contain a member named m:

    • Let IMPLEMENTED_TRAITS be those traits from IN_SCOPE_TRAITS for which an implementation exists that (may) apply to T.
      • There can be ambiguity in the case that T contains type inference variables.
    • If IMPLEMENTED_TRAITS is not a singleton set, report an ambiguity error. Otherwise, let TRAIT be the member of IMPLEMENTED_TRAITS.
    • If TRAIT is ambiguously implemented for T, report an ambiguity error and request further type information.
    • Otherwise, rewrite the path to <T as Trait>::m::... and continue.
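
For example, assuming the Container trait and its Vec impl from above are in scope:

// 1. Vec has no inherent associated item named `empty`.
// 2. `Container` is in scope, has a member `empty`, and its impl for
//    Vec<T> applies, so the path is rewritten to
//    `<Vec<uint> as Container>::empty`.
let c: Vec<uint> = <Vec<uint>>::empty();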

Via an IMPL_SEGMENT prefix

The following text is somewhat different from the UFCS RFC.

When a path begins with an IMPL_SEGMENT, it is a reference to an item defined in a trait. Note that such paths must always have a follow-on member m (that is, <T as Trait> is not a complete path, but <T as Trait>::m is).

To resolve the path, first search for an applicable implementation of Trait for T. If no implementation can be found – or the result is ambiguous – then report an error. Note that when T is a type parameter, a bound T: Trait guarantees that there is such an implementation, but does not count for ambiguity purposes.

Otherwise, resolve the path to the member of the trait with the substitution Self => T and continue.

This apparently straightforward algorithm has some subtle consequences, as illustrated by the following example:

trait Foo {
    type T;
    fn as_T(&self) -> &T;
}

// A blanket impl for any Show type T
impl<T: Show> Foo for T {
    type T = T;
    fn as_T(&self) -> &T { self }
}

fn bounded<U: Foo>(u: U) where U::T: Show {
    // Here, we just constrain the associated type directly
    println!("{}", u.as_T())
}

fn blanket<U: Show>(u: U) {
    // the blanket impl applies to U, so we know that `U: Foo` and
    // <U as Foo>::T = U (and, of course, U: Show)
    println!("{}", u.as_T())
}

fn not_allowed<U: Foo>(u: U) {
    // this will not compile, since <U as Foo>::T is not known to
    // implement Show
    println!("{}", u.as_T())
}

This example includes three generic functions that make use of an associated type; the first two will typecheck, while the third will not.

  • The first case, bounded, places a Show constraint directly on the otherwise-abstract associated type U::T. Hence, it is allowed to assume that U::T: Show, even though it does not know the concrete implementation of Foo for U.

  • The second case, blanket, places a Show constraint on the type U, which means that the blanket impl of Foo applies even though we do not know the concrete type that U will be. That fact means, moreover, that we can compute exactly what the associated type U::T will be, and know that it will satisfy Show. Coherence guarantees that the blanket impl is the only one that could apply to U. (See the section “Impl specialization” under “Unresolved questions” for a deeper discussion of this point.)

  • The third case assumes only that U: Foo, and therefore nothing is known about the associated type U::T. In particular, the function cannot assume that U::T: Show.

The resolution rules also interact with instantiation of type parameters in an intuitive way. For example:

trait Graph {
    type N;
    type E;
    ...
}

impl Graph for MyGraph {
    type N = MyNode;
    type E = MyEdge;
    ...
}

fn pick_node<G: Graph>(t: &G) -> &G::N {
    // the type G::N is abstract here
    ...
}

let g = MyGraph::new();
...
pick_node(&g) // has type: <MyGraph as Graph>::N = MyNode

Assuming there are no blanket implementations of Graph, the pick_node function knows nothing about the associated type G::N. However, a client of pick_node that instantiates it with a particular concrete graph type will also know the concrete type of the value returned from the function – here, MyNode.

Scoping of trait and impl items

Associated types are frequently referred to in the signatures of a trait’s methods and associated functions, and it is natural and convenient to refer to them directly.

In other words, writing this:

trait Graph {
    type N;
    type E;
    fn has_edge(&self, &N, &N) -> bool;
    ...
}

is more appealing than writing this:

trait Graph {
    type N;
    type E;
    fn has_edge(&self, &Self::N, &Self::N) -> bool;
    ...
}

This RFC proposes to treat both trait and impl bodies (both inherent and for traits) the same way we treat mod bodies: all items being defined are in scope. In particular, methods are in scope as UFCS-style functions:

trait Foo {
    type AssocType;
    lifetime 'assoc_lifetime;
    const ASSOC_CONST: uint;
    fn assoc_fn() -> Self;

    // Note: 'assoc_lifetime and AssocType in scope:
    fn method(&self, Self) -> &'assoc_lifetime AssocType;

    fn default_method(&self) -> uint {
        // method in scope UFCS-style, assoc_fn in scope
        let _ = method(self, assoc_fn());
        ASSOC_CONST // in scope
    }
}

// Same scoping rules for impls, including inherent impls:
struct Bar;
impl Bar {
    fn foo(&self) { ... }
    fn bar(&self) {
        foo(self); // foo in scope UFCS-style
        ...
    }
}

Items from super traits are not in scope, however. See the discussion on super traits below for more detail.

These scope rules provide good ergonomics for associated types in particular, and a consistent scope model for language constructs that can contain items (like traits, impls, and modules). In the long run, we should also explore imports for trait items, i.e. use Trait::some_method, but that is out of scope for this RFC.

Note that, according to this proposal, associated types/lifetimes are not in scope for the optional where clause on the trait header. For example:

trait Foo<Input>
    // type parameters in scope, but associated types are not:
    where Bar<Input, Self::Output>: Encodable {

    type Output;
    ...
}

This setup seems more intuitive than allowing the trait header to refer directly to items defined within the trait body.

It’s also worth noting that trait-level where clauses are never needed for constraining associated types anyway, because associated types also have where clauses. Thus, the above example could (and should) instead be written as follows:

trait Foo<Input> {
    type Output where Bar<Input, Output>: Encodable;
    ...
}

Constraining associated types

Associated types are not treated as parameters to a trait, but in some cases a function will want to constrain associated types in some way. For example, as explained in the Motivation section, the Iterator trait should treat the element type as an output:

trait Iterator {
    type A;
    fn next(&mut self) -> Option<A>;
    ...
}

For code that works with iterators generically, there is no need to constrain this type:

fn collect_into_vec<I: Iterator>(iter: I) -> Vec<I::A> { ... }

But other code may have requirements for the element type:

  • That it implements some traits (bounds).
  • That it unifies with a particular type.

These requirements can be imposed via where clauses:

fn print_iter<I>(iter: I) where I: Iterator, I::A: Show { ... }
fn sum_uints<I>(iter: I) where I: Iterator, I::A = uint { ... }

In addition, there is a shorthand for equality constraints:

fn sum_uints<I: Iterator<A = uint>>(iter: I) { ... }

In general, a trait like:

trait Foo<Input1, Input2> {
    type Output1;
    type Output2;
    lifetime 'a;
    const C: bool;
    ...
}

can be written in a bound like:

T: Foo<I1, I2>
T: Foo<I1, I2, Output1 = O1>
T: Foo<I1, I2, Output2 = O2>
T: Foo<I1, I2, Output1 = O1, Output2 = O2>
T: Foo<I1, I2, Output1 = O1, 'a = 'b, Output2 = O2>
T: Foo<I1, I2, Output1 = O1, 'a = 'b, C = true, Output2 = O2>

The output constraints must come after all input arguments, but can appear in any order.

Note that output constraints are allowed when referencing a trait in a type or a bound, but not in an IMPL_SEGMENT path:

  • As a type: fn foo(obj: Box<Iterator<A = uint>>) is allowed.
  • In a bound: fn foo<I: Iterator<A = uint>>(iter: I) is allowed.
  • In an IMPL_SEGMENT: <I as Iterator<A = uint>>::next is not allowed.

The reason not to allow output constraints in IMPL_SEGMENT is that such paths are references to a trait implementation that has already been determined – it does not make sense to apply additional constraints to the implementation when referencing it.

Output constraints are a handy shorthand when using trait bounds, but they are a necessity for trait objects, which we discuss next.

Trait objects

When using trait objects, the Self type is “erased”, so different types implementing the trait can be used under the same trait object type:

impl Show for Foo { ... }
impl Show for Bar { ... }

fn make_vec() -> Vec<Box<Show>> {
    let f = Foo { ... };
    let b = Bar { ... };
    let mut v = Vec::new();
    v.push(box f as Box<Show>);
    v.push(box b as Box<Show>);
    v
}

One consequence of erasing Self is that methods using the Self type as arguments or return values cannot be used on trait objects, since their types would differ for different choices of Self.

In the model presented in this RFC, traits have additional input parameters beyond Self, as well as associated types that may vary depending on all of the input parameters. This raises the question: which of these types, if any, are erased in trait objects?

The approach we take here is the simplest and most conservative: when using a trait as a type (i.e., as a trait object), all input and output types must be provided as part of the type. In other words, only the Self type is erased, and all other types are specified statically in the trait object type.

Consider again the following example:

trait Foo<Input1, Input2> {
    type Output1;
    type Output2;
    lifetime 'a;
    const C: bool;
    ...
}

Unlike the case for static trait bounds, which do not have to specify any of the associated types, lifetimes, or consts (but do have to specify the input types), trait object types must specify all of the types:

fn consume_foo<T: Foo<I1, I2>>(t: T) // this is valid
fn consume_obj(t: Box<Foo<I1, I2>>)  // this is NOT valid

// but this IS valid:
fn consume_obj(t: Box<Foo<I1, I2, Output1 = O1, Output2 = O2, 'a = 'static, C = true>>)

With this design, it is clear that none of the non-Self types are erased as part of trait objects. But it leaves wiggle room to relax this restriction later on: trait object types that are not allowed under this design can be given meaning in some later design.

Inherent associated items

All associated items are also allowed in inherent impls, so a definition like the following is allowed:

struct MyGraph { ... }
struct MyNode { ... }
struct MyEdge { ... }

impl MyGraph {
    type N = MyNode;
    type E = MyEdge;

    // Note: associated types in scope, just as with trait bodies
    fn has_edge(&self, &N, &N) -> bool {
        ...
    }

    ...
}

Inherent associated items are referenced similarly to trait associated items:

fn distance(g: &MyGraph, from: &MyGraph::N, to: &MyGraph::N) -> uint { ... }

Note, however, that output constraints do not make sense for inherent outputs:

// This is *not* a legal type:
MyGraph<N = SomeNodeType>

The input/output type distinction

When designing a trait that references some unknown type, you now have the option of taking that type as an input parameter, or specifying it as an output associated type. What are the ramifications of this decision?

Coherence implications

Input types are used when determining which impl matches, even for the same Self type:

trait Iterable1<A> {
    type I: Iterator<A>;
    fn iter(self) -> I;
}

// These impls have distinct input types, so are allowed
impl Iterable1<u8> for Foo { ... }
impl Iterable1<char> for Foo { ... }

trait Iterable2 {
    type A;
    type I: Iterator<A>;
    fn iter(self) -> I;
}

// These impls apply to a common input (Foo), so are NOT allowed
impl Iterable2 for Foo { ... }
impl Iterable2 for Foo { ... }

More formally, the coherence property is revised as follows:

  • Given a trait and values for all its type parameters (inputs, including Self), there is at most one applicable impl.

In the trait reform RFC, coherence is guaranteed by maintaining two other key properties, which are revised as follows:

Orphan check: Every implementation must meet one of the following conditions:

  1. The trait being implemented (if any) must be defined in the current crate.

  2. At least one of the input type parameters (including but not necessarily Self) must meet the following grammar, where C is a struct or enum defined within the current crate:

    T = C
      | [T]
      | [T, ..n]
      | &T
      | &mut T
      | ~T
      | (..., T, ...)
      | X<..., T, ...> where X is not bivariant with respect to T
    

Overlapping instances: No two implementations can be instantiable with the same set of types for the input type parameters.

See the trait reform RFC for more discussion of these properties.

Type inference implications

Finally, output type parameters can be inferred/resolved as soon as there is a matching impl based on the input type parameters. Because of the coherence property above, there can be at most one.

On the other hand, even if there is only one applicable impl, type inference is not allowed to infer the input type parameters from it. This restriction preserves crate concatenation: adding another crate may add impls for a given trait, and if type inference depended on the absence of such impls, importing a crate could break existing code.

In practice, these inference benefits can be quite valuable. For example, in the Add trait given at the beginning of this RFC, the Sum output type is immediately known once the input types are known, which can avoid the need for type annotations.
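
Concretely, assuming an impl Add<int> for int with Sum = int (a sketch):

let x = 1i;
let y = 2i;
// Both inputs (Self = int, Rhs = int) are known, so the matching impl,
// and hence Sum = int, is resolved immediately:
let z = x + y; // z: int, no annotation needed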

Limitations

The main limitation of associated items as presented here is about associated types in particular. You might be tempted to write a trait like the following:

trait Iterable {
    type A;
    type I: Iterator<&'a A>; // what is the lifetime here?
    fn iter<'a>(&'a self) -> I;  // and how to connect it to self?
}

The problem is that, when implementing this trait, the return type I of iter must generally depend on the lifetime of self. For example, the corresponding method in Vec looks like the following:

impl<T> Vec<T> {
    fn iter<'a>(&'a self) -> Items<'a, T> { ... }
}

This means that, given a Vec<T>, there isn’t a single type Items<T> for iteration – rather, there is a family of types, one for each input lifetime. In other words, the associated type I in the Iterable needs to be “higher-kinded”: not just a single type, but rather a family:

trait Iterable {
    type A;
    type I<'a>: Iterator<&'a A>;
    fn iter<'a>(&'a self) -> I<'a>;
}

In this case, I is parameterized by a lifetime, but in other cases (like map) an associated type needs to be parameterized by a type.

In general, such higher-kinded types (HKTs) are a much-requested feature for Rust, and they would extend the reach of associated types. But the design and implementation of higher-kinded types is, by itself, a significant investment. The point of view of this RFC is that associated items bring the most important changes needed to stabilize our existing traits (and add a few key others), while HKTs will allow us to define important traits in the future but are not necessary for 1.0.

Encoding higher-kinded types

That said, it’s worth pointing out that variants of higher-kinded types can be encoded in the system being proposed here.

For example, the Iterable example above can be written in the following somewhat contorted style:

trait IterableOwned {
    type A;
    type I: Iterator<A>;
    fn iter_owned(self) -> I;
}

trait Iterable {
    fn iter<'a>(&'a self) -> <&'a Self>::I where &'a Self: IterableOwned {
        IterableOwned::iter_owned(self)
    }
}

The idea here is to define a trait that takes, as input type/lifetimes parameters, the parameters to any HKTs. In this case, the trait is implemented on the type &'a Self, which includes the lifetime parameter.
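
For instance, a vector could implement the trait on its borrowed form roughly as follows (a sketch, using the Items iterator type from today's libraries):

impl<'a, T> IterableOwned for &'a Vec<T> {
    type A = &'a T;
    type I = Items<'a, T>;
    fn iter_owned(self) -> Items<'a, T> {
        self.iter()
    }
}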

We can in fact generalize this technique to encode arbitrary HKTs:

// The kind * -> *
trait TypeToType<Input> {
    type Output;
}
type Apply<Name, Elt> where Name: TypeToType<Elt> = Name::Output;

struct Vec_;
struct DList_;

impl<T> TypeToType<T> for Vec_ {
    type Output = Vec<T>;
}

impl<T> TypeToType<T> for DList_ {
    type Output = DList<T>;
}

trait Mappable
{
    type E;
    type HKT where Apply<HKT, E> = Self;

    fn map<F>(self, f: |E| -> F) -> Apply<HKT, F>;
}

While the above demonstrates the versatility of associated types and where clauses, it is probably too much of a hack to be viable for use in libstd.

Associated consts in generic code

If the value of an associated const depends on a type parameter (including Self), it cannot be used in a constant expression. This restriction will almost certainly be lifted in the future, but this raises questions outside the scope of this RFC.
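
For example (a sketch, with a hypothetical Dimension trait):

trait Dimension {
    const N: uint;
}

fn dim<D: Dimension>() -> uint {
    D::N // allowed: the const is used as an ordinary runtime value
}

// NOT allowed under this RFC: the value of D::N depends on the type
// parameter D, so it cannot appear in a constant expression such as
// an array length:
// fn zeros<D: Dimension>() -> [u8, ..D::N] { ... }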

Staging

Associated lifetimes are probably not necessary for the 1.0 timeframe. While we currently have a few traits that are parameterized by lifetimes, most of these can go away once DST lands.

On the other hand, associated lifetimes are probably trivial to implement once associated types have been implemented.

Other interactions

Interaction with implied bounds

As part of the implied bounds idea, it may be desirable for this:

fn pick_node<G>(g: &G) -> &<G as Graph>::N

to be sugar for this:

fn pick_node<G: Graph>(g: &G) -> &<G as Graph>::N

But this feature can easily be added later, as part of a general implied bounds RFC.

Future-proofing: specialization of impls

In the future, we may wish to relax the “overlapping instances” rule so that one can provide “blanket” trait implementations and then “specialize” them for particular types. For example:

trait Sliceable {
    type Slice;
    // note: not using &self here to avoid need for HKT
    fn as_slice(self) -> Slice;
}

impl<'a, T> Sliceable for &'a T {
    type Slice = &'a T;
    fn as_slice(self) -> &'a T { self }
}

impl<'a, T> Sliceable for &'a Vec<T> {
    type Slice = &'a [T];
    fn as_slice(self) -> &'a [T] { self.as_slice() }
}

But then there’s a difficult question:

fn dice<A>(a: &A) -> &A::Slice where &A: Sliceable {
    a // is this allowed?
}

Here, the blanket and specialized implementations provide incompatible associated types. When working with the trait generically, what can we assume about the associated type? If we assume it is the blanket one, the type may change during monomorphization (when specialization takes effect)!

The RFC does allow generic code to “see” associated types provided by blanket implementations, so this is a potential problem.

Our suggested strategy is the following. If at some later point we wish to add specialization, traits would have to opt in explicitly. For such traits, we would not allow generic code to “see” associated types for blanket implementations; instead, output types would only be visible when all input types were concretely known. This approach is backwards-compatible with the RFC, and is probably a good idea in any case.

Alternatives

Multidispatch through tuple types

This RFC clarifies trait matching by making trait type parameters inputs to matching, and associated types outputs.

A more radical alternative would be to remove type parameters from traits, and instead support multiple input types through a separate multidispatch mechanism.

In this design, the Add trait would be written and implemented as follows:

// Lhs and Rhs are *inputs*
trait Add for (Lhs, Rhs) {
    type Sum; // Sum is an *output*
    fn add(&Lhs, &Rhs) -> Sum;
}

impl Add for (int, int) {
    type Sum = int;
    fn add(left: &int, right: &int) -> int { ... }
}

impl Add for (int, Complex) {
    type Sum = Complex;
    fn add(left: &int, right: &Complex) -> Complex { ... }
}

The for syntax in the trait definition is used for multidispatch traits, here saying that impls must be for pairs of types which are bound to Lhs and Rhs respectively. The add function can then be invoked in UFCS style by writing

Add::add(some_int, some_complex)

Advantages of the tuple approach:

  • It does not force a distinction between Self and other input types, which in some cases (including binary operators like Add) can be artificial.

  • Makes it possible to specify input types without specifying the trait: <(A, B)>::Sum rather than <A as Add<B>>::Sum.

Disadvantages of the tuple approach:

  • It’s more painful when you do want a method rather than a function.

  • Requires where clauses when used in bounds: where (A, B): Trait rather than A: Trait<B>.

  • It gives two ways to write single dispatch: either without for, or using for with a single-element tuple.

  • There’s a somewhat jarring distinction between single/multiple dispatch traits, making the latter feel “bolted on”.

  • The tuple syntax is unusual in acting as a binder of its types, as opposed to the Trait<A, B> syntax.

  • Relatedly, the generics syntax for traits is immediately understandable (a family of traits) based on other uses of generics in the language, while the tuple notation stands alone.

  • Less clear story for trait objects (although the fact that Self is the only erased input type in this RFC may seem somewhat arbitrary).

On balance, the generics-based approach seems like a better fit for the language design, especially in its interaction with methods and the object system.

A backwards-compatible version

Yet another alternative would be to allow trait type parameters to be either inputs or outputs, marking the inputs with a keyword in:

trait Add<in Rhs, Sum> {
    fn add(&self, &Rhs) -> Sum;
}

This would provide a way of adding multidispatch now, and then adding associated items later on without breakage. If, in addition, output types had to come after all input types, it might even be possible to migrate output type parameters like Sum above into associated types later.

This is perhaps a reasonable fallback, but it seems better to introduce a clean design with both multidispatch and associated items together.

Unresolved questions

Super traits

This RFC largely ignores super traits.

Currently, the implementation of super traits treats them identically to a where clause that bounds Self, and this RFC does not propose to change that. However, a follow-up RFC should clarify that this is the intended semantics for super traits.
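
In other words, the intended equivalence is roughly the following (a sketch):

trait Super {
    fn super_method(&self);
}

// `trait Sub : Super { ... }` is treated identically to:
trait Sub where Self: Super {
    ...
}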

Note that this treatment of super traits is, in particular, consistent with the proposed scoping rules, which do not bring items from super traits into scope in the body of a subtrait; they must be accessed via Self::item_name.

Equality constraints in where clauses

This RFC allows equality constraints on types for associated types, but does not propose a similar feature for where clauses. That will be the subject of a follow-up RFC.

Multiple trait object bounds for the same trait

The design here makes it possible to write bounds or trait objects that mention the same trait, multiple times, with different inputs:

fn multi_add<T: Add<int> + Add<Complex>>(t: T) -> T { ... }
fn multi_add_obj(t: Box<Add<int> + Add<Complex>>) -> Box<Add<int> + Add<Complex>> { ... }

This seems like a potentially useful feature, and should be unproblematic for bounds, but may have implications for vtables that make it problematic for trait objects. Whether or not such trait combinations are allowed will likely depend on implementation concerns, which are not yet clear.

Generic associated consts in match patterns

It seems desirable to allow constants that depend on type parameters in match patterns, but it’s not clear how to do so while still checking exhaustiveness and reachability of the match arms. Most likely this requires new forms of where clause, to constrain associated constant values.

For now, we simply defer the question.

Generic associated consts in array sizes

It would be useful to be able to use trait-associated constants in generic code.

// Shouldn't this be OK?
const ALIAS_N: usize = <T>::N;
let x: [u8; <T>::N] = [0u8; ALIAS_N];
// Or...
let x: [u8; T::N + 1] = [0u8; T::N + 1];

However, this causes some problems. What should we do with the following case in type checking, where we need to prove that a generic is valid for any T?

let x: [u8; T::N + T::N] = [0u8; 2 * T::N];

We would like to handle at least some obvious cases (e.g. proving that T::N == T::N), but without trying to prove arbitrary statements about arithmetic. The question of how to do this is deferred.

Summary

This RFC adds overloaded slice notation:

  • foo[] for foo.as_slice()
  • foo[n..m] for foo.slice(n, m)
  • foo[n..] for foo.slice_from(n)
  • foo[..m] for foo.slice_to(m)
  • mut variants of all the above

via two new traits, Slice and SliceMut.

It also changes the notation for range match patterns to ..., to signify that they are inclusive whereas .. in slices are exclusive.

Motivation

There are two primary motivations for introducing this feature.

Ergonomics

Slicing operations, especially as_slice, are a very common and basic thing to do with vectors, and potentially many other kinds of containers. We already have notation for indexing via the Index trait, and this RFC is essentially a continuation of that effort.

The as_slice operator is particularly important. Since we’ve moved away from auto-slicing in coercions, explicit as_slice calls have become extremely common, and are one of the leading ergonomic/first impression problems with the language. There are a few other approaches to address this particular problem, but these alternatives have downsides that are discussed below (see “Alternatives”).

Error handling conventions

We are gradually moving toward a Python-like world where notation like foo[n] calls fail! when n is out of bounds, while corresponding methods like get return Option values rather than failing. By providing similar notation for slicing, we open the door to following the same convention throughout vector-like APIs.

Detailed design

The design is a straightforward continuation of the Index trait design. We introduce two new traits, for immutable and mutable slicing:

trait Slice<Idx, S> {
    fn as_slice<'a>(&'a self) -> &'a S;
    fn slice_from<'a>(&'a self, from: Idx) -> &'a S;
    fn slice_to<'a>(&'a self, to: Idx) -> &'a S;
    fn slice<'a>(&'a self, from: Idx, to: Idx) -> &'a S;
}

trait SliceMut<Idx, S> {
    fn as_mut_slice<'a>(&'a mut self) -> &'a mut S;
    fn slice_from_mut<'a>(&'a mut self, from: Idx) -> &'a mut S;
    fn slice_to_mut<'a>(&'a mut self, to: Idx) -> &'a mut S;
    fn slice_mut<'a>(&'a mut self, from: Idx, to: Idx) -> &'a mut S;
}

(Note, the mutable names here are part of likely changes to naming conventions that will be described in a separate RFC).

These traits will be used when interpreting the following notation:

Immutable slicing

  • foo[] for foo.as_slice()
  • foo[n..m] for foo.slice(n, m)
  • foo[n..] for foo.slice_from(n)
  • foo[..m] for foo.slice_to(m)

Mutable slicing

  • foo[mut] for foo.as_mut_slice()
  • foo[mut n..m] for foo.slice_mut(n, m)
  • foo[mut n..] for foo.slice_from_mut(n)
  • foo[mut ..m] for foo.slice_to_mut(m)

Like Index, uses of this notation will auto-deref just as if they were method invocations. So if T implements Slice<uint, [U]>, and s: Smaht<T>, then s[] compiles and has type &[U].
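
Putting the notation together, expected usage looks roughly like this (a sketch):

let v = vec!(0u8, 1, 2, 3);
let all  = v[];       // v.as_slice(): &[u8]
let mid  = v[1..3];   // v.slice(1, 3): elements 1 and 2
let tail = v[1..];    // v.slice_from(1)
let init = v[..3];    // v.slice_to(3)

let mut w = vec!(0u8, 1, 2, 3);
let m = w[mut 1..3];  // w.slice_mut(1, 3): &mut [u8]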

Note that slicing is “exclusive” (so [n..m] is the interval n <= x < m), while .. in match patterns is “inclusive”. To avoid confusion, we propose to change the match notation to ... to reflect the distinction. The reason to change the notation, rather than the interpretation, is that the exclusive (respectively inclusive) interpretation is the right default for slicing (respectively matching).

Rationale for the notation

The choice of square brackets for slicing is straightforward: it matches our indexing notation, and slicing and indexing are closely related.

Some other languages (like Python and Go – and Fortran) use : rather than .. in slice notation. The choice of .. here is influenced by its use elsewhere in Rust, for example for fixed-length array types [T, ..n]. The .. for slicing has precedent in Perl and D.

See Wikipedia for more on the history of slice notation in programming languages.

The mut qualifier

It may be surprising that mut is used as a qualifier in the proposed slice notation, but not for the indexing notation. The reason is that indexing includes an implicit dereference. If v: Vec<Foo> then v[n] has type Foo, not &Foo or &mut Foo. So if you want to get a mutable reference via indexing, you write &mut v[n]. More generally, this allows us to do resolution/typechecking prior to resolving the mutability.

This treatment of Index matches the C tradition, and allows us to write things like v[0] = foo instead of *v[0] = foo.

On the other hand, this approach is problematic for slicing, since in general it would yield an unsized type (under DST) – and of course, slicing is meant to give you a fat pointer indicating the size of the slice, which we don’t want to immediately deref. But the consequence is that we need to know the mutability of the slice up front, when we take it, since it determines the type of the expression.
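
Concretely (a sketch):

let mut v = vec!(1u, 2, 3);
v[0] = 42;          // indexing includes the deref: no `*v[0] = 42` needed

// Slicing must fix the mutability up front, since it determines the
// type of the resulting fat pointer:
let s = v[mut ..];  // s: &mut [uint]
s[1] = 7;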

Drawbacks

The main drawback is the increase in complexity of the language syntax. This seems minor, especially since the notation here is essentially “finishing” what was started with the Index trait.

Limitations in the design

Like the Index trait, this forces the result to be a reference via &, which may rule out some generalizations of slicing.

One way of solving this problem is for the slice methods to take self (by value) rather than &self, and in turn to implement the trait on &T rather than T. Whether this approach is viable in the long run will depend on the final rules for method resolution and auto-ref.

In general, the trait system works best when traits can be applied to types T rather than borrowed types &T. Ultimately, if Rust gains higher-kinded types (HKT), we could change the slice type S in the trait to be higher-kinded, so that it is a family of types indexed by lifetime. Then we could replace the &'a S in the return value with S<'a>. It should be possible to transition from the current Index and Slice trait designs to an HKT version in the future without breaking backwards compatibility by using blanket implementations of the new traits (say, IndexHKT) for types that implement the old ones.

Alternatives

For improving the ergonomics of as_slice, there are two main alternatives.

Coercions: auto-slicing

One possibility would be re-introducing some kind of coercion that automatically slices. We used to have a coercion from (in today’s terms) Vec<T> to &[T]. Since we no longer coerce owned to borrowed values, we’d probably want a coercion from &Vec<T> to &[T] now:

fn use_slice(t: &[u8]) { ... }

let v = vec!(0u8, 1, 2);
use_slice(&v)           // automatically coerce here
use_slice(v.as_slice()) // equivalent

Unfortunately, adding such a coercion requires choosing between the following:

  • Tie the coercion to Vec and String. This would reintroduce special treatment of these otherwise purely library types, and would mean that other library types that support slicing would not benefit (defeating some of the purpose of DST).

  • Make the coercion extensible, via a trait. This is opening Pandora’s box, however: the mechanism could likely be (ab)used to run arbitrary code during coercion, so that any invocation foo(a, b, c) might involve running code to pre-process each of the arguments. While we may eventually want such user-extensible coercions, it is a big step to take with a lot of potential downside when reasoning about code, so we should pursue more conservative solutions first.

Deref

Another possibility would be to make String implement Deref<str> and Vec<T> implement Deref<[T]>, once DST lands. Doing so would allow explicit coercions like:

fn use_slice(t: &[u8]) { ... }

let v = vec!(0u8, 1, 2);
use_slice(&*v)          // take advantage of deref
use_slice(v.as_slice()) // equivalent

There are at least two downsides to doing so, however:

  • It is not clear how the method resolution rules will ultimately interact with Deref. In particular, a leading proposal is that for a smart pointer s: Smaht<T> when you invoke s.m(...) only inherent methods m are considered for Smaht<T>; trait methods are only considered for the maximally-derefed value *s.

    With such a resolution strategy, implementing Deref for Vec would make it impossible to use trait methods on the Vec type except through UFCS, severely limiting the ability of programmers to usefully implement new traits for Vec.

  • The idea of Vec as a smart pointer around a slice, and the use of &*v as above, is somewhat counterintuitive, especially for such a basic type.

Ultimately, notation for slicing seems desirable on its own merits anyway, and if it can eliminate the need to implement Deref for Vec and String, all the better.

Summary

This is a conventions RFC for settling naming conventions when there are by value, by reference, and by mutable reference variants of an operation.

Motivation

Currently the libraries are not terribly consistent about how to signal mut variants of functions; sometimes it is by a mut_ prefix, sometimes a _mut suffix, and occasionally with _mut_ appearing in the middle. These inconsistencies make APIs difficult to remember.

While there are arguments in favor of each of the positions, we stand to gain a lot by standardizing, and to some degree we just need to make a choice.

Detailed design

Functions often come in multiple variants: immutably borrowed, mutably borrowed, and owned.

The canonical example is iterator methods:

  • iter works with immutably borrowed data
  • mut_iter works with mutably borrowed data
  • move_iter works with owned data

For iterators, the “default” (unmarked) variant is immutably borrowed. In other cases, the default is owned.

The proposed rules depend on which variant is the default, but use suffixes to mark variants in all cases.

The rules

Immutably borrowed by default

If foo uses/produces an immutable borrow by default, use:

  • The _mut suffix (e.g. foo_mut) for the mutably borrowed variant.
  • The _move suffix (e.g. foo_move) for the owned variant.

However, in the case of iterators, the moving variant can also be understood as an into conversion, into_iter, and for x in v.into_iter() reads arguably better than for x in v.iter_move(), so the convention is into_iter.
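
Under the proposed rules, the iterator family for a vector would thus read (a sketch):

let mut v = vec!(1u, 2, 3);
for x in v.iter() { ... }      // default: immutable borrows (&uint)
for x in v.iter_mut() { ... }  // _mut suffix: mutable borrows (&mut uint)
for x in v.into_iter() { ... } // owned: named as an `into` conversion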

NOTE: This convention covers only the method names for iterators, not the names of the iterator types. That will be the subject of a follow up RFC.

Owned by default

If foo uses/produces owned data by default, use:

  • The _ref suffix (e.g. foo_ref) for the immutably borrowed variant.
  • The _mut suffix (e.g. foo_mut) for the mutably borrowed variant.

Exceptions

For mutably borrowed variants, if the mut qualifier is part of a type name (e.g. as_mut_slice), it should appear as it would appear in the type.

References to type names

In some places in the current libraries, we say things like as_ref and as_mut, and in others we say get_ref and get_mut_ref.

Proposal: generally standardize on mut as a shortening of mut_ref.

The rationale

Why suffixes?

Using a suffix makes it easier to visually group variants together, especially when sorted alphabetically. It puts the emphasis on the functionality, rather than the qualifier.

Why move?

Historically, Rust has used move as a way to signal ownership transfer and to connect to C++ terminology. The main disadvantage is that it does not emphasize ownership, which is our current narrative. On the other hand, in Rust all data is owned, so using _owned as a qualifier is a bit strange.

The Copy trait poses a problem for any terminology about ownership transfer. The proposed mental model is that with Copy data you are “moving a copy”.

See Alternatives for more discussion.

Why mut rather than mut_ref?

It’s shorter, and pairs like as_ref and as_mut have a pleasant harmony that doesn’t place emphasis on one kind of reference over the other.

Alternatives

Prefix or mixed qualifiers

Using prefixes for variants is another possibility, but there seems to be little upside.

It’s possible to rationalize our current mix of prefixes and suffixes via grammatical distinctions, but this seems overly subtle and complex, and requires a strong command of English grammar to work well.

No suffix exception

The rules here make an exception when mut is part of a type name, as in as_mut_slice, but we could instead always place the qualifier as a suffix: as_slice_mut. This would make APIs more consistent in some ways, less in others: conversion functions would no longer consistently use a transcription of their type name.

This is perhaps not so bad, though, because as it is we often abbreviate type names. In any case, we need a convention (separate RFC) for how to refer to type names in methods.

owned instead of move

The overall narrative about Rust has been evolving to focus on ownership as the essential concept, with borrowing giving various lesser forms of ownership, so _owned would be a reasonable alternative to _move.

On the other hand, the ref variants do not say “borrowed”, so in some sense this choice is inconsistent. In addition, the terminology is less familiar to those coming from C++.

val instead of owned

Another option would be val or value instead of owned. This suggestion plays into the “by reference” and “by value” distinction, and so is even more congruent with ref than move is. On the other hand, it’s less clear/evocative than either move or owned.

Summary

This RFC improves interoperation between APIs with different error types. It proposes to:

  • Increase the flexibility of the try! macro for clients of multiple libraries with disparate error types.

  • Standardize on basic functionality that any error type should have by introducing an Error trait.

  • Support easy error chaining when crossing abstraction boundaries.

The proposed changes are all library changes; no language changes are needed – except that this proposal depends on multidispatch happening.

Motivation

Typically, a module (or crate) will define a custom error type encompassing the possible error outcomes for the operations it provides, along with a custom Result instance baking in this type. For example, we have io::IoError and io::IoResult<T> = Result<T, io::IoError>, and similarly for other libraries. Together with the try! macro, the story for interacting with errors for a single library is reasonably good.

However, we lack infrastructure when consuming or building on errors from multiple APIs, or abstracting over errors.

Consuming multiple error types

Our current infrastructure for error handling does not cope well with mixed notions of error.

Abstractly, as described by this issue, we cannot do the following:

fn func() -> Result<T, Error> {
    try!(may_return_error_type_A());
    try!(may_return_error_type_B());
}

Concretely, imagine a CLI application that interacts both with files and HTTP servers, using std::io and an imaginary http crate:

fn download() -> Result<(), CLIError> {
    let contents = try!(http::get(some_url));
    let file = try!(File::create(some_path));
    try!(file.write_str(contents));
    Ok(())
}

The download function can encounter both io and http errors, and wants to report them both under the common notion of CLIError. But the try! macro only works for a single error type at a time.

There are roughly two scenarios where multiple library error types need to be coalesced into a common type, each with different needs: application error reporting and library error reporting.

Application error reporting: presenting errors to a user

An application is generally the “last stop” for error handling: it’s the point at which remaining errors are presented to the user in some form, when they cannot be handled programmatically.

As such, the data needed for application-level errors is usually related to human interaction. For a CLI application, a short text description and longer verbose description are usually all that’s needed. For GUI applications, richer data is sometimes required, but usually not a full enum describing the full range of errors.

Concretely, then, for something like the download function above, for a CLI application, one might want CLIError to roughly be:

struct CLIError<'a> {
    description: &'a str,
    detail: Option<String>,
    ... // possibly more fields here; see detailed design
}

Ideally, one could use the try! macro as in the download example to coalesce a variety of error types into this single, simple struct.

Library error reporting: abstraction boundaries

When one library builds on others, it needs to translate from their error types to its own. For example, a web server framework may build on a library for accessing a SQL database, and needs some way to “lift” SQL errors to its own notion of error.

In general, a library may not want to reveal the upstream libraries it relies on – these are implementation details which may change over time. Thus, it is critical that the error types of upstream libraries not leak, and “lifting” an error from one library to another is a way of imposing an abstraction boundary.

In some cases, the right way to lift a given error will depend on the operation and context. In other cases, though, there will be a general way to embed one kind of error in another (usually via a “cause chain”). Both scenarios should be supported by Rust’s error handling infrastructure.

Abstracting over errors

Finally, libraries sometimes need to work with errors in a generic way. For example, the serialize::Encoder type is generic over an arbitrary error type E. At the moment, such types are completely arbitrary: there is no Error trait giving common functionality expected of all errors. Consequently, error-generic code cannot meaningfully interact with errors.

(See this issue for a concrete case where a bound would be useful; note, however, that the design below does not cover this use-case, as explained in Alternatives.)

Languages that provide exceptions often have standard exception classes or interfaces that guarantee some basic functionality, including short and detailed descriptions and “causes”. We should begin developing similar functionality in libstd to ensure that we have an agreed-upon baseline error API.

Detailed design

We can address all of the problems laid out in the Motivation section by adding some simple library code to libstd, so this RFC will actually give a full implementation.

Note, however, that this implementation relies on the multidispatch proposal currently under consideration.

The proposal consists of two pieces: a standardized Error trait and extensions to the try! macro.

The Error trait

The standard Error trait follows the very widespread pattern found in Exception base classes in many languages:

pub trait Error: Send + Any {
    fn description(&self) -> &str;

    fn detail(&self) -> Option<&str> { None }
    fn cause(&self) -> Option<&Error> { None }
}

Every concrete error type should provide at least a description. By making this a slice-returning method, it is possible to define lightweight enum error types and then implement this method as returning static string slices depending on the variant.

The cause method allows for cause-chaining when an error crosses abstraction boundaries. The cause is recorded as a trait object implementing Error, which makes it possible to read off a kind of abstract backtrace (often more immediately helpful than a full backtrace).
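
For example, a library might chain an underlying I/O failure like so (a sketch with a hypothetical HttpError type, assuming IoError itself implements Error):

use std::io::IoError;

struct HttpError {
    description: &'static str,
    cause: Option<IoError>,
}

impl Error for HttpError {
    fn description(&self) -> &str { self.description }

    fn cause(&self) -> Option<&Error> {
        // expose the underlying I/O failure, if any, as the cause
        self.cause.as_ref().map(|c| c as &Error)
    }
}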

The Any bound is needed to allow downcasting of errors. This RFC stipulates that it must be possible to downcast errors in the style of the Any trait, but leaves unspecified the exact implementation strategy. (If trait object upcasting were available, one could simply upcast to Any; otherwise, we will likely need to duplicate the downcast APIs as blanket impls on Error objects.)

It’s worth comparing the Error trait to the most widespread error type in libstd, IoError:

pub struct IoError {
    pub kind: IoErrorKind,
    pub desc: &'static str,
    pub detail: Option<String>,
}

Code that returns or asks for an IoError explicitly will be able to access the kind field and thus react differently to different kinds of errors. But code that works with a generic Error (e.g., application code) sees only the human-consumable parts of the error. In particular, application code will often employ Box<Error> as the error type when reporting errors to the user. The try! macro support, explained below, makes doing so ergonomic.

An extended try! macro

The other piece to the proposal is a way for try! to automatically convert between different types of errors.

The idea is to introduce a trait FromError<E> that says how to convert from some lower-level error type E to Self. The try! macro then passes the error it is given through this conversion before returning:

// E here is an "input" for dispatch, so conversions from multiple error
// types can be provided
pub trait FromError<E> {
    fn from_err(err: E) -> Self;
}

impl<E> FromError<E> for E {
    fn from_err(err: E) -> E {
        err
    }
}

impl<E: Error> FromError<E> for Box<Error> {
    fn from_err(err: E) -> Box<Error> {
        box err as Box<Error>
    }
}

macro_rules! try (
    ($expr:expr) => ({
        use error;
        match $expr {
            Ok(val) => val,
            Err(err) => return Err(error::FromError::from_err(err))
        }
    })
)

This code depends on multidispatch, because the conversion depends on both the source and target error types. (In today’s Rust, the two implementations of FromError given above would be considered overlapping.)

Given the blanket impl of FromError<E> for E, all existing uses of try! would continue to work as-is.

With this infrastructure in place, application code can generally use Box<Error> as its error type, and try! will take care of the rest:

fn download() -> Result<(), Box<Error>> {
    let contents = try!(http::get(some_url));
    let file = try!(File::create(some_path));
    try!(file.write_str(contents));
    Ok(())
}

Library code that defines its own error type can define custom FromError implementations for lifting lower-level errors (where the lifting should also perform cause chaining) – at least when the lifting is uniform across the library. The effect is that the mapping from one error type into another only has to be written once, rather than at every use of try!:

impl FromError<ErrorA> for MyError { ... }
impl FromError<ErrorB> for MyError { ... }

fn my_lib_func() -> Result<T, MyError> {
    try!(may_return_error_type_A());
    try!(may_return_error_type_B());
    Ok(...)
}
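As a sketch, one such lifting impl might record the cause as it converts, assuming MyError carries a description string and an optional cause (both hypothetical):

struct MyError {
    description: String,
    cause: Option<Box<Error>>,
}

impl FromError<ErrorA> for MyError {
    fn from_err(err: ErrorA) -> MyError {
        MyError {
            description: "failure in lower-level module A".to_string(),
            // Chain the underlying error so callers can walk the causes.
            cause: Some(box err as Box<Error>),
        }
    }
}

(The Error impl for MyError itself is omitted here.)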

Drawbacks

The main drawback is that the try! macro is a bit more complicated.

Unresolved questions

Conventions

This RFC does not define any particular conventions around cause chaining or concrete error types. It will likely take some time and experience using the proposed infrastructure before we can settle these conventions.

Extensions

The functionality in the Error trait is quite minimal, and should probably grow over time. Some additional functionality might include:

Features on the Error trait

  • Generic creation of Errors. It might be useful for the Error trait to expose an associated constructor. See this issue for an example where this functionality would be useful.

  • Mutation of Errors. The Error trait could be expanded to provide setters as well as getters.

The main reason not to include the above two features is so that Error can be used with extremely minimal data structures, e.g. simple enums. For such data structures, it’s possible to produce fixed descriptions, but not mutate descriptions or other error properties. Allowing generic creation of any Error-bounded type would also require these enums to include something like a GenericError variant, which is unfortunate. So for now, the design sticks to the least common denominator.

Concrete error types

On the other hand, for code that doesn’t care about the footprint of its error types, it may be useful to provide something like the following generic error type:

pub struct WrappedError<E> {
    pub kind: E,
    pub description: String,
    pub detail: Option<String>,
    pub cause: Option<Box<Error>>
}

impl<E: Show> WrappedError<E> {
    pub fn new(err: E) -> WrappedError<E> {
        WrappedError {
            description: err.to_string(),
            kind: err,
            detail: None,
            cause: None
        }
    }
}

impl<E> Error for WrappedError<E> {
    fn description(&self) -> &str {
        self.description.as_slice()
    }
    fn detail(&self) -> Option<&str> {
        self.detail.as_ref().map(|s| s.as_slice())
    }
    fn cause(&self) -> Option<&Error> {
        self.cause.as_ref().map(|c| &**c)
    }
}

This type can easily be added later, so again this RFC sticks to the minimal functionality for now.

Summary

Change the syntax of subslice matching from ..xs to xs.., to be more consistent with the rest of the language and to allow future backwards-compatible improvements.

Small example:

match slice {
    [xs.., _] => xs,
    [] => fail!()
}

This is basically a heavily stripped-down version of RFC 101.

Motivation

In Rust, a symbol after the .. token usually describes a number of things, as in the [T, ..N] type or the [e, ..N] expression. But in the pattern [_, ..xs], xs does not describe a number at all, but rather the whole subslice.

I propose moving the dots to the right for several reasons (including the one mentioned above):

  1. Looks more natural (but that might be subjective).
  2. Consistent with the rest of the language.
  3. C++ uses args... in variadic templates.
  4. It allows extending slice pattern matching as described in RFC 101.

Detailed design

The slice matching grammar would change to (assuming trailing commas; grammar syntax as in the Rust manual):

slice_pattern : "[" [[pattern | subslice_pattern] ","]* "]" ;
subslice_pattern : ["mut"? ident]? ".." ["@" slice_pattern]? ;

To compare, currently it looks like:

slice_pattern : "[" [[pattern | subslice_pattern] ","]* "]" ;
subslice_pattern : ".." ["mut"? ident ["@" slice_pattern]?]? ;

Drawbacks

Backward incompatible.

Alternatives

Don’t do it at all.

Unresolved questions

Whether subslice matching combined with @ should be written as xs.. @[1, 2] or maybe in another way: xs @[1, 2]...

Summary

Restore the integer inference fallback that was removed. Integer literals whose type is unconstrained will default to i32, unlike the previous fallback to int. Floating point literals will default to f64.

Motivation

History lesson

Rust has had a long history with integer and floating-point literals. Initial versions of Rust required all literals to be explicitly annotated with a suffix (if no suffix was provided, then int or float was used; note that the float type has since been removed). This meant that, for example, if one wanted to count up all the numbers in a list, one would write 0u and 1u so as to employ unsigned integers:

let mut count = 0u; // let `count` be an unsigned integer
while cond() {
    ...
    count += 1u;    // `1u` must be used as well
}

This was particularly troublesome with arrays of integer literals, which could be quite hard to read:

let byte_array = [0u8, 33u8, 50u8, ...];

It also meant that code which was very consciously using 32-bit or 64-bit numbers was hard to read.

Therefore, we introduced integer inference: unlabeled integer literals are not given any particular integral type, but rather a fresh “integral type variable” (floating point literals work in an analogous way). The idea is that the vast majority of literals will eventually interact with an actual typed variable at some point, and hence we can infer what type they ought to have. For those cases where the type cannot be automatically selected, we decided to fall back to our older behavior, and have integer/float literals be typed as int/float (this is also what Haskell does). Some time later, we did various measurements and found that in real world code this fallback was rarely used. Therefore, we decided to remove the fallback.

Experience with lack of fallback

Unfortunately, when doing the measurements that led us to decide to remove the int fallback, we neglected to consider coding “in the small” (specifically, we did not include tests in the measurements). It turns out that when writing small programs, which includes not only “hello world” sorts of things but also tests, the lack of integer inference fallback is quite annoying. This is particularly troublesome since small programs are often people’s first exposure to Rust. The problems most commonly occur when integers are “consumed” by printing them out to the screen or by asserting equality, both of which are very common in small programs and testing.

There are at least three common scenarios where fallback would be beneficial:

Accumulator loops. Here a counter is initialized to 0 and then incremented by 1. Eventually it is printed or compared against a known value.

let mut c = 0;
loop {
    ...;
    c += 1;
}
println!("{}", c); // Does not constrain type of `c`
assert_eq!(c, 22);

Calls to range with constant arguments. Here a call to range like range(0, 10) is used to execute something 10 times. It is important that the actual counter is either unused or only used in a print out or comparison against another literal:

for _ in range(0, 10) {
}

Large constants. In small tests it is convenient to make dummy test data. This frequently takes the form of a vector or map of ints.

let mut m = HashMap::new();
m.insert(1, 2);
m.insert(3, 4);
assert_eq!(m.find(&3).map(|&i| i).unwrap(), 4);

Lack of bugs

To our knowledge, there has not been a single bug exposed by removing the fallback to the int type. Moreover, such bugs seem to be extremely unlikely.

The primary reason for this is that, in production code, the i32 fallback is very rarely used. In a sense, the same measurements that were used to justify removing the int fallback also justify keeping it. As the measurements showed, the vast, vast majority of integer literals wind up with a constrained type, unless they are only used to print out and do assertions with. Specifically, any integer that is passed as a parameter, returned from a function, or stored in a struct or array, must wind up with a specific type.

Rationale for the choice of defaulting to i32

In contrast to the first revision of the RFC, the fallback type suggested is i32. This is justified by a case analysis which showed that there does not exist a compelling reason for having a signed pointer-sized integer type as the default.

There are reasons for using i32 instead: It’s familiar to programmers from the C programming language (where the default int type is 32-bit in the major calling conventions), it’s faster than 64-bit integers in arithmetic today, and is superior in memory usage while still providing a reasonable range of possible values.

To expand on the performance argument: i32 obviously uses half of the memory of i64 meaning half the memory bandwidth used, half as much cache consumption and twice as much vectorization – additionally arithmetic (like multiplication and division) is faster on some of the modern CPUs.

Case analysis

This is an analysis of cases where int inference might be thought of as useful:

Indexing into an array with unconstrained integer literal:

let array = [0u8, 1, 2, 3];
let index = 3;
array[index]

In this case, index is already automatically inferred to be a uint.

Using a default integer for tests, tutorials, etc.: Examples of this include “The Guide”, the Rust API docs and the Rust standard library unit tests. This is better served by a smaller, faster and platform independent type as default.

Using an integer for an upper bound or for simply printing it: This is also served very well by i32.

Counting of loop iterations: This is a part where int is as badly suited as i32, so at least the move to i32 doesn’t create new hazards (note that the number of elements of a vector might not necessarily fit into an int).

In addition to all the points above, having a platform-independent default obviously results in fewer differences between platforms in code where the programmer “doesn’t care” about the integer type being used.

Future-proofing for overloaded literals

It is possible that, in the future, we will wish to allow vector and strings literals to be overloaded so that they can be resolved to user-defined types. In that case, for backwards compatibility, it will be necessary for those literals to have some sort of fallback type. (This is a relatively weak consideration.)

Detailed design

Integral literals are currently type-checked by creating a special class of type variable. These variables are subject to unification as normal, but can only unify with integral types. This RFC proposes that, at the end of type inference, when all constraints are known, we will identify all integral type variables that have not yet been bound to anything and bind them to i32. Similarly, floating point literals will fallback to f64.
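A sketch of the resulting behavior:

let a = 22;       // unconstrained: falls back to `i32`
let f = 2.5;      // unconstrained: falls back to `f64`
let b = 22u8;     // suffixed: `u8`, as today
let c: i64 = 22;  // constrained by the annotation: `i64`, as today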

For those who wish to be very careful about which integral types they employ, a new lint (unconstrained_literal) will be added which defaults to allow. This lint is triggered whenever the type of an integer or floating point literal is unconstrained.
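A sketch of opting in to the lint (using the lint name proposed above):

#[deny(unconstrained_literal)]
fn count_up() {
    let mut c = 0;      // reported: nothing constrains `c`,
    c += 1;             // so the literals fall back to `i32`
    println!("{}", c);
}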

Downsides

Although there seems to be little motivation for int to be the default, there might be use cases where int is a more correct fallback than i32.

Additionally, it might seem weird to some that i32 is the default, when int looks like the default from other languages. The name of int, however, is not in the scope of this RFC.

Alternatives

  • No fallback. Status quo.

  • Fallback to something else. We could potentially fallback to int like the original RFC suggested or some other integral type rather than i32.

  • Fallback in a more narrow range of cases. We could attempt to identify integers that are “only printed” or “only compared”. There is no concrete proposal in this direction and it seems to lead to an overly complicated design.

  • Default type parameters influencing inference. There is a separate, follow-up proposal being prepared that uses default type parameters to influence inference. This would allow some examples, like range(0, 10) to work even without integral fallback, because the range function itself could specify a fallback type. However, this does not help with many other examples.

History

2014-11-07: Changed the suggested fallback from int to i32, add rationale.

Summary

Rust currently includes feature-gated support for type parameters that specify a default value. This feature is not well-specified. The aim of this RFC is to fully specify the behavior of defaulted type parameters:

  1. Type parameters in any position can specify a default.
  2. Within fn bodies, defaulted type parameters are used to drive inference.
  3. Outside of fn bodies, defaulted type parameters supply fixed defaults.
  4. _ can be used to omit the values of type parameters and apply a suitable default:
    • In a fn body, any type parameter can be omitted in this way, and a suitable type variable will be used.
    • Outside of a fn body, only defaulted type parameters can be omitted, and the specified default is then used.

Points 2 and 4 extend the current behavior of type parameter defaults, aiming to address some shortcomings of the current implementation.

This RFC would remove the feature gate on defaulted type parameters.

Motivation

Why defaulted type parameters

Defaulted type parameters are very useful in two main scenarios:

  1. Extending a type without breaking existing clients.
  2. Allowing customization in ways that many or most users do not care about.

Often, these two scenarios occur at the same time. A classic historical example is the HashMap type from Rust’s standard library. This type now supports the ability to specify custom hashers. For most clients, this is not particularly important, and the initial versions of the HashMap type were not customizable in this regard. But there are some cases where having the ability to use a custom hasher can make a huge difference. Having the ability to specify defaults for type parameters allowed the HashMap type to add a new type parameter H representing the hasher type without breaking any existing clients and also without forcing all clients to specify what hasher to use.

However, customization occurs in places other than types. Consider the function range(). In early versions of Rust, there was a distinct range function for each integral type (e.g. uint::range, int::range, etc). These functions were eventually consolidated into a single range() function that is defined generically over all “enumerable” types:

trait Enumerable : Add<Self,Self> + PartialOrd + Clone + One;
pub fn range<A:Enumerable>(start: A, stop: A) -> Range<A> {
    Range{state: start, stop: stop, one: One::one()}
}

This version is often more convenient to use, particularly in a generic context.

However, the generic version does have the downside that when the bounds of the range are integral, inference sometimes lacks enough information to select a proper type:

// ERROR -- Type argument unconstrained, what integral type did you want?
for x in range(0, 10) { ... }

Thus users are forced to write:

for x in range(0u, 10u) { ... }

This RFC describes how to integrate default type parameters with inference such that the type parameter on range can specify a default (uint, for example):

pub fn range<A:Enumerable=uint>(start: A, stop: A) -> Range<A> {
    Range{state: start, stop: stop, one: One::one()}
}

Using this definition, a call like range(0, 10) is perfectly legal. If it turns out that the type argument is not otherwise constrained, uint will be used instead.

Extending types without breaking clients.

Without defaults, once a library is released to “the wild”, it is not possible to add type parameters to a type without breaking all existing clients. However, it frequently happens that one wants to take an existing type and make it more flexible than it used to be. This often entails adding a new type parameter so that some type which was hard-coded before can now be customized. Defaults provide a means to do this while having older clients transparently fall back to the older behavior.

Historical example: Extending HashMap to support various hash algorithms.

Detailed Design

Remove feature gate

This RFC would remove the feature gate on defaulted type parameters.

Type parameters with defaults

Defaults can be placed on any type parameter, whether it is declared on a type definition (struct, enum), type alias (type), trait definition (trait), trait implementation (impl), or a function or method (fn).

Once a given type parameter declares a default value, all subsequent type parameters in the list must declare default values as well:

// OK. All defaulted type parameters come at the end.
fn foo<A,B=uint,C=uint>() { .. }

// ERROR. B has a default, but C does not.
fn foo<A,B=uint,C>() { .. }

The default value of a type parameter X may refer to other type parameters declared on the same item. However, it may only refer to type parameters declared before X in the list of type parameters:

// OK. Default value of `B` refers to `A`, which is not defaulted.
fn foo<A,B=A>() { .. }

// OK. Default value of `C` refers to `B`, which comes before
// `C` in the list of parameters.
fn foo<A,B=uint,C=B>() { .. }

// ERROR. Default value of `B` refers to `C`, which comes AFTER
// `B` in the list of parameters.
fn foo<A,B=C,C=uint>() { .. }

Instantiating defaults

This section specifies how to interpret a reference to a generic type. Rather than writing out a rather tedious (and hard to understand) description of the algorithm, the rules are instead specified by a series of examples. The high-level idea of the rules is as follows:

  • Users must always provide some value for non-defaulted type parameters. Defaulted type parameters may be omitted.
  • The _ notation can always be used to explicitly omit the value of a type parameter:
    • Inside a fn body, any type parameter may be omitted. Inference is used.
    • Outside a fn body, only defaulted type parameters may be omitted. The default value is used.
    • Motivation: This is consistent with Rust tradition, which generally requires explicit types or a mechanical defaulting process outside of fn bodies.

References to generic types

We begin with examples of references to the generic type Foo:

struct Foo<A,B,C=DefaultHasher,D=C> { ... }

Foo defines four type parameters, the final two of which are defaulted. First, let us consider what happens outside of a fn body. It is mandatory to supply explicit values for all non-defaulted type parameters:

// ERROR: 2 parameters required, 0 provided.
fn f(_: &Foo) { ... }

Defaulted type parameters are filled in based on the defaults given:

// Legal: Equivalent to `Foo<int,uint,DefaultHasher,DefaultHasher>`
fn f(_: &Foo<int,uint>) { ... }

Naturally it is legal to specify explicit values for the defaulted type parameters if desired:

// Legal: Equivalent to `Foo<int,uint,char,u8>`
fn f(_: &Foo<int,uint,char,u8>) { ... }

It is also legal to provide just one of the defaulted type parameters and not the other:

// Legal: Equivalent to `Foo<int,uint,char,char>`
fn f(_: &Foo<int,uint,char>) { ... }

If the user wishes to supply the value of the type parameter D explicitly, but not C, then _ can be used to request the default:

// Legal: Equivalent to `Foo<int,uint,DefaultHasher,uint>`
fn f(_: &Foo<int,uint,_,uint>) { ... }

Note that, outside of a fn body, _ can only be used with defaulted type parameters:

// ERROR: outside of a fn body, `_` cannot be
// used for a non-defaulted type parameter
fn f(_: &Foo<int,_>) { ... }

Inside a fn body, the rules are much the same, except that _ is legal everywhere. Every reference to _ creates a fresh type variable $n. If the type parameter whose value is omitted has an associated default, that default is used as the fallback for $n (see the section “Type variables with fallbacks” for more information). Here are some examples:

fn f() {
    // Error: `Foo` requires at least 2 type parameters, 0 supplied.
    let x: Foo = ...;

    // All three of these examples are OK and equivalent. Each
    // results in a type `Foo<$0,$1,$2,$3>` where `$0`-`$3` are type
    // variables. `$2` has a fallback of `DefaultHasher` and `$3`
    // has a fallback of `$2`.
    let x: Foo<_,_> = ...;
    let x: Foo<_,_,_> = ...;
    let x: Foo<_,_,_,_> = ...;

    // Results in a type `Foo<int,uint,$0,char>` where `$0`
    // has a fallback of `DefaultHasher`.
    let x: Foo<int,uint,_,char> = ...;
}

References to generic traits

The rules for traits are the same as the rules for types. Consider a trait Foo:

trait Foo<A,B,C=uint,D=C> { ... }

References to this trait can omit values for C and D in precisely the same way as was shown for types:

// All equivalent to Foo<i8,u8,uint,uint>:
fn foo<T:Foo<i8,u8>>() { ... }
fn foo<T:Foo<i8,u8,_>>() { ... }
fn foo<T:Foo<i8,u8,_,_>>() { ... }

// Equivalent to Foo<i8,u8,char,char>:
fn foo<T:Foo<i8,u8,char,_>>() { ... }

References to generic functions

The rules for referencing generic functions are the same as for types, except that it is legal to omit values for all type parameters if desired. In that case, the behavior is the same as it would be if _ were used as the value for every type parameter. Note that functions can only be referenced from within a fn body.

References to generic impls

Users never explicitly “reference” an impl. Rather, the trait matching system implicitly instantiates impls as part of trait matching. This implies that all type parameters are always instantiated with type variables. These type variables are assigned fallbacks according to the defaults given.

Type variables with fallbacks

We extend the inference system so that when a type variable is created, it can optionally have a fallback value, which is another type.

In the type checker, whenever we create a fresh type variable to represent a type parameter with an associated default, we will use that default as the fallback value for this type variable.

Example:

fn foo<A,B=A>(a: A, b: B) { ... }

fn bar() {
    // Here, the values of the type parameters are given explicitly.
    let f: fn(uint, uint) = foo::<uint, uint>;

    // Here the value of the first type parameter is given explicitly,
    // but not the second. Because the second specifies a default, this
    // is permitted. The type checker will create a fresh variable `$0`
    // and attempt to infer the value of this defaulted type parameter.
    let g: fn(uint, $0) = foo::<uint>;

    // Here, the values of the type parameters are not given explicitly,
    // and hence the type checker will create fresh variables
    // `$1` and `$2` for both of them.
    let h: fn($1, $2) = foo;
}

In this snippet, there are three references to the generic function foo, each of which specifies progressively fewer types. As a result, the type checker winds up creating three type variables, which are referred to in the example as $0, $1, and $2 (note that this $ notation is just for explanatory purposes and is not actual Rust syntax).

The fallback values of $0, $1, and $2 are as follows:

  • $0 was created to represent the type parameter B defined on foo. This means that $0 will have a fallback value of uint, since the type parameter A was specified to be uint in the expression that created $0.
  • $1 was created to represent the type parameter A, which has no default. Therefore $1 has no fallback.
  • $2 was created to represent the type parameter B. It will have the fallback value of $1, which was the value of A within the expression where $2 was created.

Trait resolution, fallbacking, and inference

Prior to this RFC, type-checking a function body proceeds roughly as follows:

  1. The function body is analyzed. This results in an accumulated set of type variables, constraints, and trait obligations.
  2. Those trait obligations are then resolved until a fixed point is reached.
  3. If any trait obligations remain unresolved, an error is reported.
  4. If any type variables were never bound to a concrete value, an error is reported.

To accommodate fallback, the new procedure is somewhat different:

  1. The function body is analyzed. This results in an accumulated set of type variables, constraints, and trait obligations.
  2. Execute in a loop:
    1. Run trait resolution until a fixed point is reached.
    2. Create an (initially empty) set UB of unbound type and integral/float variables. This set represents the set of variables for which fallbacks should be applied.
      • Add all unbound integral and float variables to the set UB.
      • For each type variable X:
        • If X has no fallback defined, skip.
        • If X is not bound, add X to UB.
        • If X is bound to an unbound integral variable I, add X to UB and remove I from UB (if present).
        • If X is bound to an unbound float variable F, add X to UB and remove F from UB (if present).
    3. If UB is the empty set, break out of the loop.
    4. For each member of UB:
      • If the member is an integral type variable I, set I to int.
      • If the member is a float variable F, set F to f64.
      • Otherwise, the member must be a variable X with a defined fallback. Set X to its fallback.
        • Note that this “set” operation can fail, which indicates conflicting defaults. A suitable error message should be given.
  3. If any type variables still have no value assigned to them, report an error.
  4. If any trait obligations could not be resolved, report an error.

There are some subtle points to this algorithm:

When defaults are to be applied, we first gather up the set of variables that have applicable defaults (step 2.2) and then later unconditionally apply those defaults (step 2.4). In particular, we do not loop over each type variable, check whether it is unbound, and apply the default only if it is unbound. The reason for this is that it can happen that there are contradictory defaults and we want to ensure that this results in an error:

fn foo<F:Default=uint>() -> F { Default::default() }
fn bar<B=int>(b: B) { }
fn baz() {
    // Here, F is instantiated with $0=uint
    let x: $0 = foo();

    // Here, B is instantiated with $1=uint, and constraint $0 <: $1 is added.
    bar(x);
}

In this example, two type variables are created. $0 is the value of F in the call to foo() and $1 is the value of B in the call to bar(). The fact that x, which has type $0, is passed as an argument to bar() will add the constraint that $0 <: $1, but at no point are any concrete types given. Therefore, once type checking is complete, we will apply defaults. Using the algorithm given above, we will determine that both $0 and $1 are unbound and have suitable defaults. We will then unify $0 with uint. This will succeed and, because $0 <: $1, cause $1 to be unified with uint. Next, we will try to unify $1 with its default, int. This will lead to an error. If we combined the checking of whether $1 was unbound with the unification with the default, we would have first unified $0 and then decided that $1 did not require unification.

In the general case, a loop is required to continue resolving traits and applying defaults in sequence. Resolving traits can lead to unifications, so it is clear that we must resolve all traits that we can before we apply any defaults. However, it is also true that adding defaults can create new trait obligations that must be resolved.

Here is an example where processing trait obligations creates defaults, and processing defaults creates trait obligations:

trait Foo { }
trait Bar { }

impl<T:Bar=uint> Foo for Vec<T> { } // Impl 1
impl Bar for uint { } // Impl 2

fn takes_foo<F:Foo>(f: F) { }

fn main() {
    let x = Vec::new(); // x: Vec<$0>
    takes_foo(x); // adds oblig Vec<$0> : Foo
}

When we finish type checking main, we are left with a variable $0 and a trait obligation Vec<$0> : Foo. Processing the trait obligation selects the impl 1 as the way to fulfill this trait obligation. This results in:

  1. a new type variable $1, which represents the parameter T on the impl. $1 has a default, uint.
  2. the constraint that $0=$1.
  3. a new trait obligation $1 : Bar.

We cannot process the new trait obligation yet because the type variable $1 is still unbound. (We know that it is equated with $0, but we do not have any concrete types yet, just variables.) After trait resolution reaches a fixed point, defaults are applied. $1 is equated with uint which in turn propagates to $0. At this point, there is still an outstanding trait obligation uint : Bar. This trait obligation can be resolved to impl 2.

The previous example consisted of “1.5” iterations of the loop. That is, although trait resolution runs twice, defaults are only needed one time:

  1. Trait resolution executed to resolve Vec<$0> : Foo.
  2. Defaults were applied to unify $1 = $0 = uint.
  3. Trait resolution executed to resolve uint : Bar
  4. No more defaults to apply, done.

The next example does 2 full iterations of the loop.

trait Foo { }
trait Bar<U> { }
trait Baz { }

impl<U,T:Bar<U>=Vec<U>> Foo for Vec<T> { } // Impl 1
impl<V=uint> Bar<V> for Vec<V> { } // Impl 2

fn takes_foo<F:Foo>(f: F) { }

fn main() {
    let x = Vec::new(); // x: Vec<$0>
    takes_foo(x); // adds oblig Vec<$0> : Foo
}

Here the process is as follows:

  1. Trait resolution executed to resolve Vec<$0> : Foo. The result is two fresh variables, $1 (for U) and $2=Vec<$1> (for T), the constraint that $0=$2, and the obligation $2 : Bar<$1>.
  2. Defaults are applied to unify $2 = $0 = Vec<$1>.
  3. Trait resolution executed to resolve $2 : Bar<$1>. The result is a fresh variable $3=uint (for V) and the constraint that $1=$3.
  4. Defaults are applied to unify $3 = $1 = uint.

It should be clear that one can create examples in this vein so as to require any number of loops.

Interaction with integer/float literal fallback. This RFC gives defaulted type parameters precedence over integer/float literal fallback. This seems preferable because such types can be more specific. Below are some examples. See also the alternatives section.

// Here the type of the integer literal 22 is inferred
// to `int` using literal fallback.
fn foo<T>(t: T) { ... }
foo(22)

// Here the type of the integer literal 22 is inferred
// to `uint` because the default on `T` overrides the
// standard integer literal fallback.
fn foo<T=uint>(t: T) { ... }
foo(22)

// Here the type of the integer literal 22 is inferred
// to `char`, leading to an error. This can be resolved
// by using an explicit suffix like `22i`.
fn foo<T=char>(t: T) { ... }
foo(22)

Termination. Any time that there is a loop, one must inquire after termination. In principle, the loop above could execute indefinitely. This is because trait resolution is not guaranteed to terminate – basically there might be a cycle between impls such that we continue creating new type variables and new obligations forever. The trait matching system already defends against this with a recursion counter. That same recursion counter is sufficient to guarantee termination even when the default mechanism is added to the mix. This is because the default mechanism can never itself create new trait obligations: it can only cause previous ambiguous trait obligations to now be matchable (because unbound variables become bound). But the actual need to iterate through the loop is still caused by trait matching generating recursive obligations, which have an associated depth limit.

Compatibility analysis

One of the major design goals of defaulted type parameters is to permit new parameters to be added to existing types or methods in a backwards compatible way. This remains possible under the current design.

Note though that adding a default to an existing type parameter can lead to type errors in clients. This can occur if clients were already relying on an inference fallback from some other source and there is now an ambiguity. Naturally clients can always fix this error by specifying the value of the type parameter in question manually.

Downsides and alternatives

Avoid inference

Rather than adding the notion of fallbacks to type variables, defaults could be mechanically added, even within fn bodies, as they are today. But this is disappointing because it means that examples like range(0,10), where defaults could inform inference, still require explicit annotation. Without the notion of fallbacks, it is also difficult to say what defaulted type parameters in methods or impls should mean.

More advanced interaction with integer literal inference

There were some other proposals to have a more advanced interaction between custom fallbacks and literal inference. For example, it is possible to imagine that we allow literal inference to take precedence over type default fallbacks, unless the fallback is itself integral. The problem is that this is both complicated and possibly not forwards compatible if we opt to allow a more general notion of literal inference in the future (in other words, if integer literals may be mapped to more than just the built-in integral types). Furthermore, these rules would create strictly fewer errors, and hence can be added in the future if desired.

Notation

Allowing the _ notation outside of fn bodies means that its meaning changes somewhat depending on context. However, this is consistent with the meaning of omitted lifetimes, which also changes in the same way (mechanical default outside of fn body, inference within).

An alternative design is to use the K=V notation proposed in the associated items RFC for specifying the values of defaulted type parameters. However, this is somewhat odd, because defaulted type parameters appear in a positional list, and thus it is surprising that values for the non-defaulted parameters are given positionally, but values for the defaulted type parameters are given with labels.

Another alternative would be to simply prohibit users from specifying the value of a defaulted type parameter unless values are given for all previous defaulted type parameters. But this is clearly annoying in those cases where defaulted type parameters represent distinct axes of customization.

Hat Tip

eddyb introduced defaulted type parameters and also opened the first pull request that used them to inform inference.

Summary

Introduce a new while let PAT = EXPR { BODY } construct. This allows for using a refutable pattern match (with optional variable binding) as the condition of a loop.

Motivation

Just as if let was inspired by Swift, it turns out Swift supports while let as well. This was not discovered until much too late to include it in the if let RFC. It turns out that this sort of looping is actually useful on occasion. For example, the desugaring of the for loop is actually a variant on this; if while let existed, it could have been implemented to map for PAT in EXPR { BODY } to

// the match here is so `for` can accept an rvalue for the iterator,
// and was used in the "real" desugaring version.
match &mut EXPR {
    i => {
        while let Some(PAT) = i.next() {
            BODY
        }
    }
}

(note that the non-desugared form of for is no longer equivalent).

More generally, this construct can be used any time looping + pattern-matching is desired.
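For example, a minimal sketch of draining a vector by repeatedly popping it:

let mut stack = vec![1i, 2, 3];
while let Some(top) = stack.pop() {
    println!("{}", top); // prints 3, 2, 1
}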

This also makes the language a bit more consistent; right now, any condition that can be used with if can be used with while. The new if let adds a form of if that doesn’t map to while. Supporting while let restores the equivalence of these two control-flow constructs.

Detailed design

while let operates similarly to if let, in that it desugars to existing syntax. Specifically, the syntax

['ident:] while let PAT = EXPR {
    BODY
}

desugars to

['ident:] loop {
    match EXPR {
        PAT => BODY,
        _ => break
    }
}

Just as with if let, an irrefutable pattern given to while let is considered an error. This is largely an artifact of the fact that the desugared match ends up with an unreachable pattern, and is not actually a goal of this syntax. The error may be suppressed in the future, which would be a backwards-compatible change.
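For example (compute here stands in for any expression):

// ERROR: irrefutable pattern. The desugared `_ => break` arm
// could never be reached, so the compiler rejects this.
while let x = compute() {
    ...
}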

Just as with if let, while let will be introduced under a feature gate (named while_let).

Drawbacks

Yet another addition to the grammar. Unlike if let, it’s not obvious how useful this syntax will be.

Alternatives

As with if let, this could plausibly be done with a macro, but it would be ugly and produce bad error messages.

while let could be extended to support alternative patterns, just as match arms do. This is not part of the main proposal for the same reason it was left out of if let, which is that a) it looks weird, and b) it’s a bit of an odd coupling with the let keyword as alternatives like this aren’t going to be introducing variable bindings. However, it would make while let more general and able to replace more instances of loop { match { ... } } than is possible with the main design.

Unresolved questions

None.

  • Start Date: 2014-08-28
  • RFC PR: https://github.com/rust-lang/rfcs/pull/216
  • Rust Issue: https://github.com/rust-lang/rust/issues/17320

Summary

Add additional iterator-like Entry objects to collections. Entries provide a composable mechanism for in-place observation and mutation of a single element in the collection, without having to “re-find” the element multiple times. This deprecates several “internal mutation” methods like hashmap’s find_or_insert_with.

Motivation

As we approach 1.0, we’d like to normalize the standard APIs to be consistent, composable, and simple. However, this currently stands in opposition to manipulating the collections in an efficient manner. For instance, if one wishes to build an accumulating map on top of one of the concrete maps, they need to distinguish between the case when the element they’re inserting is already in the map, and when it’s not. One way to do this is the following:

if map.contains_key(&key) {
    *map.find_mut(&key).unwrap() += 1;
} else {
    map.insert(key, 1);
}

However, this searches for key twice on every operation. The second search can be squeezed out of the update case by matching on the result of find_mut, but the insert case will always require a re-search.

To solve this problem, Rust currently has an ad-hoc mix of “internal mutation” methods which take multiple values or closures for the collection to use contextually. Hashmap in particular has the following methods:

fn find_or_insert<'a>(&'a mut self, k: K, v: V) -> &'a mut V
fn find_or_insert_with<'a>(&'a mut self, k: K, f: |&K| -> V) -> &'a mut V
fn insert_or_update_with<'a>(&'a mut self, k: K, v: V, f: |&K, &mut V|) -> &'a mut V
fn find_with_or_insert_with<'a, A>(&'a mut self, k: K, a: A, found: |&K, &mut V, A|, not_found: |&K, A| -> V) -> &'a mut V

Not only are these methods fairly complex to use, but they’re over-engineered and combinatorially explosive. They all seem to return a mutable reference to the region accessed “just in case”, and find_with_or_insert_with takes a magic argument a to try to work around the fact that the two closures it requires can’t both close over the same value (even though only one will ever be called). find_with_or_insert_with is also actually performing the role of insert_with_or_update_with, suggesting that these aren’t well understood.

Rust has been in this position before: internal iteration. Internal iteration was (author’s note: I’m told) confusing and complicated. However the solution was simple: external iteration. You get all the benefits of internal iteration, but with a much simpler interface, and greater composability. Thus, this RFC proposes the same solution to the internal mutation problem.

Detailed design

A fully tested “proof of concept” draft of this design has been implemented on top of hashmap, as it seems to be the worst offender, while still being easy to work with. It sits as a pull request here.

All the internal mutation methods are replaced with a single method on a collection: entry. The signature of entry will depend on the specific collection, but generally it will be similar to the signature for searching in that structure. entry will in turn return an Entry object, which captures the state of a completed search, and allows mutation of the area.

For convenience, we will use the hashmap draft as an example.

/// Get an Entry for where the given key would be inserted in the map
pub fn entry<'a>(&'a mut self, key: K) -> Entry<'a, K, V>;

/// A view into a single occupied location in a HashMap
pub struct OccupiedEntry<'a, K, V>{ ... }

/// A view into a single empty location in a HashMap
pub struct VacantEntry<'a, K, V>{ ... }

/// A view into a single location in a HashMap
pub enum Entry<'a, K, V> {
    /// An occupied Entry
    Occupied(OccupiedEntry<'a, K, V>),
    /// A vacant Entry
    Vacant(VacantEntry<'a, K, V>),
}

Of course, the real meat of the API is in the Entry’s interface (impl details removed):

impl<'a, K, V> OccupiedEntry<'a, K, V> {
    /// Gets a reference to the value of this Entry
    pub fn get(&self) -> &V;

    /// Gets a mutable reference to the value of this Entry
    pub fn get_mut(&mut self) -> &mut V;

    /// Converts the entry into a mutable reference to its value
    pub fn into_mut(self) -> &'a mut V;

    /// Sets the value stored in this Entry
    pub fn set(&mut self, value: V) -> V;

    /// Takes the value stored in this Entry
    pub fn take(self) -> V;
}

impl<'a, K, V> VacantEntry<'a, K, V> {
    /// Set the value stored in this Entry, and returns a reference to it
    pub fn set(self, value: V) -> &'a mut V;
}

There are definitely some strange things here, so let’s discuss the reasoning!

First, entry takes a key by value, because this is the observed behaviour of the internal mutation methods. Further, taking the key up-front allows implementations to avoid validating provided keys if they require an owned key later for insertion. This key is effectively a guarantor of the entry.

Taking the key by-value might change once collections reform lands, and Borrow and ToOwned are available. For now, it’s an acceptable solution, because in particular, the primary use case of this functionality is when you’re not sure if you need to insert, in which case you should be prepared to insert. Otherwise, find_mut is likely sufficient.

The result is actually an enum that will be either Occupied or Vacant. These two variants correspond to concrete types for when the key matched something in the map, and when the key didn’t, respectively.

If there isn’t a match, the user has exactly one option: insert a value using set, which will also insert the guarantor, and destroy the Entry. This is to avoid the costs of maintaining the structure, which otherwise isn’t particularly interesting anymore.

If there is a match, a more robust set of options is provided. get and get_mut provide access to the value found in the location. set behaves as the vacant variant, but without destroying the entry. It also yields the old value. take simply removes the found value, and destroys the entry for similar reasons as set.

Let’s look at how one now writes insert_or_update:

There are two options. We can either do the following:

// cleaner, and more flexible if logic is more complex
let val = match map.entry(key) {
    Vacant(entry) => entry.set(0),
    Occupied(entry) => entry.into_mut(),
};
*val += 1;

or

// closer to the original, and more compact
match map.entry(key) {
    Vacant(entry) => { entry.set(1); },
    Occupied(mut entry) => { *entry.get_mut() += 1; },
}

Either way, one can now write something equivalent to the “intuitive” inefficient code, but it is now as efficient as the complex insert_or_update methods. In fact, this matches the inefficient manipulation so closely that users could reasonably ignore Entries until performance becomes an issue, at which point it’s an almost trivial migration. Closures also aren’t needed to dance around the fact that one may want to avoid generating some values unless they have to, because that falls naturally out of normal control flow.
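Other manipulations compose just as naturally. For example, a sketch of removing a key while observing the old value, still with only a single search:

match map.entry(key) {
    Occupied(entry) => { println!("removed: {}", entry.take()); },
    Vacant(..) => { println!("key was not present"); },
}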

If you look at the actual patch that does this, you’ll see that Entry itself is exceptionally simple to implement. Most of the logic is trivial. The biggest amount of work was just capturing the search state correctly, and even that was mostly a cut-and-paste job.

With Entries, the gate is also opened for… adaptors! Really want insert_or_update back? That can be written on top of this generically with ease. However, such discussion is out-of-scope for this RFC. Adaptors can be tackled in a back-compat manner after this has landed, and usage is observed. Also, this proposal does not provide any generic trait for Entries, preferring concrete implementations for the time-being.

Drawbacks

  • More structs, and more methods in the short-term

  • More collection manipulation “modes” for the user to think about

  • insert_or_update_with is kind of convenient for avoiding the kind of boiler-plate found in the examples

Alternatives

  • Just put our foot down, say “no efficient complex manipulations”, and drop all the internal mutation stuff without a replacement.

  • Try to build out saner/standard internal manipulation methods.

  • Try to make this functionality a subset of Cursors, which would be effectively a bi-directional mut_iter where the returned references borrow the cursor preventing aliasing/safety issues, so that mutation can be performed at the location of the cursor. However, preventing invalidation would be more expensive, and it’s not clear that cursor semantics would make sense on e.g. a HashMap, as you can’t insert any key in any location.

  • This RFC originally proposed a design without enums that was substantially more complex (https://github.com/Gankro/rust/commit/6d6804a6d16b13d07934f0a217a3562384e55612). However it had some interesting ideas about Key manipulation, so we mention it here for historical purposes.

Unresolved questions

Naming bikesheds!

Summary

When a struct type S has no fields (a so-called “empty struct”), allow it to be defined via either struct S; or struct S {}. When defined via struct S;, allow instances of it to be constructed and pattern-matched via either S or S {}. When defined via struct S {}, require instances to be constructed and pattern-matched solely via S {}.

Motivation

Today, when writing code, one must treat an empty struct as a special case, distinct from structs that include fields. That is, one must write code like this:

struct S2 { x1: int, x2: int }
struct S0; // kind of different from the above.

let s2 = S2 { x1: 1, x2: 2 };
let s0 = S0; // kind of different from the above.

match (s2, s0) {
    (S2 { x1: y1, x2: y2 },
     S0) // you can see my pattern here
     => { println!("Hello from S2({}, {}) and S0", y1, y2); }
}

While this yields code that is relatively free of extraneous curly-braces, this special case handling of empty structs presents problems for two cases of interest: automatic code generators (including, but not limited to, Rust macros) and conditionalized code (i.e. code with cfg attributes; see the CFG problem appendix). The heart of the code-generator argument is: why burden every to-be-written code generator and macro with special-case handling of the empty struct case (in terms of whether or not to include the surrounding braces), especially since that special case is likely to be forgotten (yielding a latent bug in the code generator)?

The special case handling of empty structs is also a problem for programmers who actively add and remove fields from structs during development; such changes cause a struct to switch between being empty and non-empty, and the associated revisions of removing and adding curly braces are aggravating (both in the effort of revising the code, and in the extra noise introduced into commit histories).

This RFC proposes an approach similar to the one we used circa February 2013, when both S0 and S0 { } were accepted syntaxes for an empty struct. The parsing ambiguity that motivated removing support for S0 { } is no longer present (see the Ancient History appendix). Supporting empty braces in the syntax for empty structs is easy to do in the language now.

Detailed design

There are two kinds of empty structs: Braced empty structs and flexible empty structs. Flexible empty structs are a slight generalization of the structs that we have today.

Flexible empty structs are defined via the syntax struct S; (as today).

Braced empty structs are defined via the syntax struct S { } (“new”).

Both braced and flexible empty structs can be constructed via the expression syntax S { } (“new”). Flexible empty structs, as today, can also be constructed via the expression syntax S.

Both braced and flexible empty structs can be pattern-matched via the pattern syntax S { } (“new”). Flexible empty structs, as today, can also be pattern-matched via the pattern syntax S.

Braced empty struct definitions solely affect the type namespace, just like normal non-empty structs. Flexible empty structs affect both the type and value namespaces.
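A sketch of the combined rules:

struct F;      // flexible empty struct
struct B {}    // braced empty struct ("new")

let f1 = F;    // ok, as today
let f2 = F {}; // ok ("new")
let b1 = B {}; // ok ("new")
// let b2 = B; // ERROR: `B` puts nothing in the value namespace

match f1 { F => {} }     // ok, as today
match b1 { B {} => {} }  // ok ("new")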

As a matter of style, using braceless syntax is preferred for constructing and pattern-matching flexible empty structs. For example, pretty-printer tools are encouraged to emit braceless forms if they know that the corresponding struct is a flexible empty struct. (Note that pretty printers that handle incomplete fragments may not have such information available.)

There is no ambiguity introduced by this change, because we have already introduced a restriction to the Rust grammar to force the use of parentheses to disambiguate struct literals in such contexts. (See Rust RFC 25).

The expectation is that when migrating code from a flexible empty struct to a non-empty struct, it can start by first migrating to a braced empty struct (and then have a tool indicate all of the locations where braces need to be added); after that step has been completed, one can then take the next step of adding the actual field.

Drawbacks

Some people like “There is only one way to do it.” But, there is precedent in Rust for violating “one way to do it” in favor of syntactic convenience or regularity; see the Precedent for flexible syntax in Rust appendix. Also, see the Always Require Braces alternative below.

I have attempted to summarize the previous discussion from RFC PR 147 in the Recent History appendix; some of the points there include drawbacks to this approach and to the Always Require Braces alternative.

Alternatives

Always Require Braces

Alternative 1: “Always Require Braces”. Specifically, require empty curly braces on empty structs. People who like the current syntax of curly-brace free structs can encode them this way: enum S0 { S0 } This would address all of the same issues outlined above. (Also, the author (pnkfelix) would be happy to take this tack.)

The main reason not to take this tack is that some people may like writing empty structs without braces, but do not want to switch to the unary enum version described in the previous paragraph. See “I wouldn’t want to force noisier syntax …” in the Recent History appendix.

Status quo

Alternative 2: Status quo. Macros and code-generators in general will need to handle empty structs as a special case. We may continue hitting bugs like CFG parse bug. Some users will be annoyed but most will probably cope.

Synonymous in all contexts

Alternative 3: An earlier version of this RFC proposed having struct S; be entirely synonymous with struct S { }, and the expression S { } be synonymous with S.

This was deemed problematic, since it would mean that S { } would put an entry into both the type and value namespaces, while S { x: int } would only put an entry into the type namespace. Thus the current draft of the RFC proposes the “flexible” versus “braced” distinction for empty structs.

Never synonymous

Alternative 4: Treat struct S; as requiring S at the expression and pattern sites, and struct S { } as requiring S { } at the expression and pattern sites.

This in some ways follows a principle of least surprise, but it also is really hard to justify having both syntaxes available for empty structs with no flexibility about how they are used. (Note again that one would have the option of choosing between enum S { S }, struct S;, or struct S { }, each with their own idiosyncrasies about whether you have to write S or S { }.) I would rather adopt “Always Require Braces” than “Never Synonymous”.

Empty Tuple Structs

One might say “why are you including support for curly braces, but not parentheses?” Or in other words, “what about empty tuple structs?”

The code-generation argument could be applied to tuple-structs as well, to claim that we should allow the syntax S0(). I am less inclined to add a special case for that; I think tuple-structs are less frequently used (especially with many fields); they are largely for ad-hoc data such as newtype wrappers, not for code generators.

Note that we should not attempt to generalize this RFC as proposed to include tuple structs, i.e. so that given struct T0(), the expressions T0, T0 {}, and T0() would be synonymous. The reason is that given a tuple struct struct T2(int, int), the identifier T2 is already bound to a constructor function:

fn main() {
    #[deriving(Show)]
    struct T2(int, int);

    fn foo<S:std::fmt::Show>(f: |int, int| -> S) {
        println!("Hello from {} and {}", f(2,3), f(4,5));
    }
    foo(T2);
}

So if we were to attempt to generalize the leniency of this RFC to tuple structs, we would be in the unfortunate situation given struct T0(); of trying to treat T0 simultaneously as an instance of the struct and as a constructor function. So, the handling of empty structs proposed by this RFC does not generalize to tuple structs.

(Note that if we adopt alternative 1, Always Require Braces, then the issue of how tuple structs are handled is totally orthogonal – we could add support for struct T0() as a distinct type from struct S0 {}, if we so wished, or leave it aside.)

Unresolved questions

None

Appendices

The CFG problem

A program like this works today:

fn main() {
    #[deriving(Show)]
    struct Svaries {
        x: int,
        y: int,

        #[cfg(zed)]
        z: int,
    }

    let s = match () {
        #[cfg(zed)]      _ => Svaries { x: 3, y: 4, z: 5 },
        #[cfg(not(zed))] _ => Svaries { x: 3, y: 4 },
    };
    println!("Hello from {}", s)
}

Observe what happens when one modifies the above just a bit:

    struct Svaries {
        #[cfg(eks)]
        x: int,
        #[cfg(why)]
        y: int,

        #[cfg(zed)]
        z: int,
    }

Now, certain cfg settings yield an empty struct, even though it is surrounded by braces. Today this leads to a CFG parse bug when one attempts to actually construct such a struct.

If we want to support situations like this properly, we will probably need to further extend the cfg attribute so that it can be placed before individual fields in a struct constructor, like this:

// You cannot do this today,
// but maybe in the future (after a different RFC)
let s = Svaries {
    #[cfg(eks)] x: 3,
    #[cfg(why)] y: 4,
    #[cfg(zed)] z: 5,
};

Supporting such a syntax consistently in the future should start today with allowing empty braces as legal code. (Strictly speaking, it is not necessary that we add support for empty braces at the parsing level to support this feature at the semantic level. But supporting empty-braces in the syntax still seems like the most consistent path to me.)

Ancient History

A parsing ambiguity was the original motivation for disallowing the syntax S {} in favor of S for constructing an instance of an empty struct. The ambiguity and various options for dealing with it were well documented on the rust-dev thread. Both syntaxes were simultaneously supported at the time.

In particular, at the time that mailing list thread was created, the code match x {} ... would be parsed as match (x {}) ..., not as (match x {}) ... (see Rust PR 5137); likewise, if x {} would be parsed as an if-expression whose test component is the struct literal x {}. Thus, at the time of Rust PR 5137, if the input to a match or if was an identifier expression, one had to put parentheses around the identifier to force it to be interpreted as input to the match/if, and not as a struct constructor.

Of the options for resolving this discussed on the mailing list thread, the one selected (removing S {} construction expressions) was chosen as the most expedient option.

At that time, the option of “Place a parser restriction on those contexts where { terminates the expression and say that struct literals cannot appear there unless they are in parentheses.” was explicitly not chosen, in favor of continuing to use the disambiguation rule in use at the time, namely that the presence of a label (e.g. S { a_label: ... }) was the way to distinguish a struct constructor from an identifier followed by a control block, and thus, “there must be one label.”

Naturally, if the construction syntax were to be disallowed, it made sense to also remove the struct S {} declaration syntax.

Things have changed since the time of that mailing list thread; namely, we have now adopted the aforementioned parser restriction Rust RFC 25. (The text of RFC 25 does not explicitly address match, but we have effectively expanded it to include a curly-brace delimited block of match-arms in the definition of “block”.) Today, one uses parentheses around struct literals in some contexts (such as for e in (S {x: 3}) { ... } or match (S {x: 3}) { ... }).

Note that there was never an ambiguity for uses of struct S0 { } in item position. The issue was solely about expression position prior to the adoption of Rust RFC 25.

Precedent for flexible syntax in Rust

There is precedent in Rust for violating “one way to do it” in favor of syntactic convenience or regularity.

For example, one can often include an optional trailing comma, as in: let x : &[int] = [3, 2, 1, ];.

One can also include redundant curly braces or parentheses, for example in:

println!("hi: {}", { if { x.len() > 2 } { ("whoa") } else { ("there") } });

One can even mix the two together when delimiting match arms:

    let z: int = match x {
        [3, 2] => { 3 }
        [3, 2, 1] => 2,
        _ => { 1 },
    };

We do have lints for some style violations (though none catch the cases above), but lints are different from fundamental language restrictions.

Recent history

There was a previous RFC PR that was effectively the same in spirit as this one. It was closed because it was not sufficiently well fleshed out for further consideration by the core team. However, to save people the effort of reviewing the comments on that PR (and hopefully stave off potential bikeshedding on this PR), I here summarize the various viewpoints put forward on the comment thread there, and note for each one whether that viewpoint would be addressed by this RFC (accept both syntaxes), by Always Require Braces, or by Status Quo.

Note that this list of comments is just meant to summarize the list of views; it does not attempt to reflect the number of commenters who agreed or disagreed with a particular point. (But since the RFC process is not a democracy, the number of commenters should not matter anyway.)

  • “+1” ==> Favors: This RFC (or potentially Always Require Braces; I think the content of RFC PR 147 shifted over time, so it is hard to interpret the “+1” comments now).
  • “I find let s = S0; jarring, think its an enum initially.” ==> Favors: Always Require Braces
  • “Frequently start out with an empty struct and add fields as I need them.” ==> Favors: This RFC or Always Require Braces
  • “Foo{} suggests it is constructing something that it’s not; all uses of the value Foo are indistinguishable from each other” ==> Favors: Status Quo
  • “I find it strange anyone would prefer let x = Foo{}; over let x = Foo;” ==> Favors Status Quo; strongly opposes Always Require Braces.
  • “I agree that ‘instantiation-should-follow-declaration’, that is, structs declared ;, (), {} should only be instantiated [via] ;, (), { } respectively” ==> Opposes the leniency of this RFC, in that it allows an expression to include or omit {} on an empty struct regardless of the declaration form, and vice versa.
  • “The code generation argument is reasonable, but I wouldn’t want to force noisier syntax on all ‘normal’ code just to make macros work better.” ==> Favors: This RFC

Summary

Rename “task failure” to “task panic”, and fail! to panic!.

Motivation

The current terminology of “task failure” often causes problems when writing or speaking about code. You often want to talk about the possibility of an operation that returns a Result “failing”, but cannot because of the ambiguity with task failure. Instead, you have to speak of “the failing case” or “when the operation does not succeed” or other circumlocutions.

Likewise, we use a “Failure” header in rustdoc to describe when operations may fail the task, but it would often be helpful to separate out a section describing the “Err-producing” case.

We have been steadily moving away from task failure and toward Result as an error-handling mechanism, so we should optimize our terminology accordingly: Result-producing functions should be easy to describe.

Detailed design

Not much more to say here than is in the summary: rename “task failure” to “task panic” in documentation, and fail! to panic! in code.

The choice of panic emerged from a discuss thread and workweek discussion. It has precedent in a language setting in Go, and of course goes back to kernel panics.

With this choice, we can use “failure” to refer to an operation that produces Err or None, “panic” for unwinding at the task level, and “abort” for aborting the entire process.
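
For illustration, here is how the terminology plays out in today's Rust, where this rename landed: the parse operation below fails by returning Err, while unwrap panics on that failure.

fn main() {
    // "Failure": the operation produces an Err value; no unwinding.
    let failed = "not a number".parse::<i32>();
    assert!(failed.is_err());

    // "Panic": unwrap on an Err unwinds the thread (then "task").
    let n: i32 = "42".parse().unwrap(); // would panic! on a parse failure
    println!("parsed: {}", n);
}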

The connotations of panic seem fairly accurate: the process is not immediately ending, but it is rapidly fleeing from some problematic circumstance (by killing off tasks) until a recovery point.

Drawbacks

The term “panic” is a bit informal, which some consider a drawback.

Making this change is likely to be a lot of work.

Alternatives

Other choices include:

  • throw! or unwind!. These options reasonably describe the current behavior of task failure, but “throw” suggests general exception handling, and both place the emphasis on the mechanism rather than the policy. We also are considering eventually adding a flag that allows fail! to abort the process, which would make these terms misleading.

  • abort!. Ambiguous with process abort.

  • die!. A reasonable choice, but it’s not immediately obvious what is being killed.

Summary

This RFC proposes to remove the runtime system that is currently part of the standard library, which currently allows the standard library to support both native and green threading. In particular:

  • The libgreen crate and associated support will be moved out of tree, into a separate Cargo package.

  • The librustrt (the runtime) crate will be removed entirely.

  • The std::io implementation will be directly welded to native threads and system calls.

  • The std::io module will remain completely cross-platform, though separate platform-specific modules may be added at a later time.

Motivation

Background: thread/task models and I/O

Many languages/libraries offer some notion of “task” as a unit of concurrent execution, possibly distinct from native OS threads. The characteristics of tasks vary along several important dimensions:

  • 1:1 vs M:N. The most fundamental question is whether a “task” always corresponds to an OS-level thread (the 1:1 model), or whether there is some userspace scheduler that maps tasks onto worker threads (the M:N model). Some kernels – notably, Windows – support a 1:1 model where the scheduling is performed in userspace, which combines some of the advantages of the two models.

    In the M:N model, there are various choices about whether and when blocked tasks can migrate between worker threads. One basic downside of the model, however, is that if a task takes a page fault, the entire worker thread is essentially blocked until the fault is serviced. Choosing the optimal number of worker threads is difficult, and some frameworks attempt to do so dynamically, which has costs of its own.

  • Stack management. In the 1:1 model, tasks are threads and therefore must be equipped with their own stacks. In M:N models, tasks may or may not need their own stack, but there are important tradeoffs:

    • Techniques like segmented stacks allow stack size to grow over time, meaning that tasks can be equipped with their own stack but still be lightweight. Unfortunately, segmented stacks come with a significant performance and complexity cost.

    • On the other hand, if tasks are not equipped with their own stack, they either cannot be migrated between underlying worker threads (the case for frameworks like Java’s fork/join), or else must be implemented using continuation-passing style (CPS), where each blocking operation takes a closure representing the work left to do. (CPS essentially moves the needed parts of the stack into the continuation closure.) The upside is that such tasks can be extremely lightweight – essentially just the size of a closure.

  • Blocking and I/O support. In the 1:1 model, a task can block freely without any risk for other tasks, since each task is an OS thread. In the M:N model, however, blocking in the OS sense means blocking the worker thread. (The same applies to long-running loops or page faults.)

    M:N models can deal with blocking in a couple of ways. The approach taken in Java’s fork/join framework, for example, is to dynamically spin up/down worker threads. Alternatively, special task-aware blocking operations (including I/O) can be provided, which are mapped under the hood to nonblocking operations, allowing the worker thread to continue. Unfortunately, this latter approach helps only with explicit blocking; it does nothing for loops, page faults and the like.

Where Rust is now

Rust has gradually migrated from a “green” threading model toward a native threading model:

  • In Rust’s green threading, tasks are scheduled M:N and are equipped with their own stack. Initially, Rust used segmented stacks to allow growth over time, but that was removed in favor of pre-allocated stacks, which means Rust’s green threads are not “lightweight”. The treatment of blocking is described below.

  • In Rust’s native threading model, tasks are 1:1 with OS threads.

Initially, Rust supported only the green threading model. Later, native threading was added and ultimately became the default.

In today’s Rust, there is a single I/O API – std::io – that provides blocking operations only and works with both threading models. Rust is somewhat unusual in allowing programs to mix native and green threading, and furthermore allowing some degree of interoperation between the two. This feat is achieved through the runtime system – librustrt – which exposes:

  • The Runtime trait, which abstracts over the scheduler (via methods like deschedule and spawn_sibling) as well as the entire I/O API (via local_io).

  • The rtio module, which provides a number of traits that define the standard I/O abstraction.

  • The Task struct, which includes a Runtime trait object as the dynamic entry point into the runtime.

In this setup, libstd works directly against the runtime interface. When invoking an I/O or scheduling operation, it first finds the current Task, and then extracts the Runtime trait object to actually perform the operation.

On native tasks, blocking operations simply block. On green tasks, blocking operations are routed through the green scheduler and/or underlying event loop and nonblocking I/O.

The actual scheduler and I/O implementations – libgreen and libnative – then live as crates “above” libstd.

The problems

While the situation described above may sound good in principle, there are several problems in practice.

Forced co-evolution. With today’s design, the green and native threading models must provide the same I/O API at all times. But there is functionality that is only appropriate or efficient in one of the threading models.

For example, the lightest-weight M:N task models are essentially just collections of closures, and do not provide any special I/O support. This style of lightweight tasks is used in Servo, but also shows up in java.util.concurrent’s executors and Haskell’s par monad, among many others. These lighter weight models do not fit into the current runtime system.

On the other hand, green threading systems designed explicitly to support I/O may also want to provide low-level access to the underlying event loop – an API surface that doesn’t make sense for the native threading model.

Under the native model we want to provide direct non-blocking and/or asynchronous I/O support – as a systems language, Rust should be able to work directly with what the OS provides without imposing global abstraction costs. These APIs may involve some platform-specific abstractions (epoll, kqueue, IOCP) for maximal performance. But integrating them cleanly with a green threading model may be difficult or impossible – and at the very least, makes it difficult to add them quickly and seamlessly to the current I/O system.

In short, the current design couples threading and I/O models together, and thus forces the green and native models to supply a common I/O interface – despite the fact that they are pulling in different directions.

Overhead. The current Rust model allows runtime mixtures of the green and native models. The implementation achieves this flexibility by using trait objects to model the entire I/O API. Unfortunately, this flexibility has several downsides:

  • Binary sizes. A significant overhead caused by the trait object design is that the entire I/O system is included in any binary that statically links to libstd. See this comment for more details.

  • Task-local storage. The current implementation of task-local storage is designed to work seamlessly across native and green threads, and its performance suffers substantially as a result. While it is feasible to provide a more efficient form of “hybrid” TLS that works across models, doing so is far more difficult than simply using native thread-local storage.

  • Allocation and dynamic dispatch. With the current design, any invocation of I/O involves at least dynamic dispatch, and in many cases allocation, due to the use of trait objects. However, in most cases these costs are trivial when compared to the cost of actually doing the I/O (or even simply making a syscall), so they are not strong arguments against the current design.
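
To make the dispatch point above concrete, here is a self-contained toy in current Rust syntax. All names here are invented for illustration and are not the real librustrt API; RtIo stands in for the runtime’s I/O trait objects.

// Hypothetical stand-in for the runtime's I/O abstraction.
trait RtIo {
    fn write(&mut self, data: &[u8]) -> usize;
}

struct NativeIo;

impl RtIo for NativeIo {
    fn write(&mut self, data: &[u8]) -> usize {
        data.len() // pretend we wrote everything
    }
}

// Today: every call goes through a trait object, so the whole I/O
// surface must be compiled into any binary that links libstd.
fn write_dyn(io: &mut dyn RtIo, data: &[u8]) -> usize {
    io.write(data)
}

// Proposed: a direct call into the native implementation, resolved
// (and potentially inlined) at compile time.
fn write_native(io: &mut NativeIo, data: &[u8]) -> usize {
    io.write(data)
}

fn main() {
    let mut io = NativeIo;
    assert_eq!(write_dyn(&mut io, b"hi"), 2);
    assert_eq!(write_native(&mut io, b"hi"), 2);
}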

Problematic I/O interactions. As the documentation for libgreen explains, only some I/O and synchronization methods work seamlessly across native and green tasks. For example, any invocation of native code that calls blocking I/O has the potential to block the worker thread running the green scheduler. In particular, std::io objects created on a native task cannot safely be used within a green task. Thus, even though std::io presents a unified I/O API for green and native tasks, it is not fully interoperable.

Embedding Rust. When embedding Rust code into other contexts – whether calling from C code or embedding in high-level languages – there is a fair amount of setup needed to provide the “runtime” infrastructure that libstd relies on. If libstd were instead bound to the native threading and I/O system, the embedding setup would be much simpler.

Maintenance burden. Finally, libstd is made somewhat more complex by providing such a flexible threading model. As this RFC will explain, moving to a strictly native threading model will allow substantial simplification and reorganization of the structure of Rust’s libraries.

Detailed design

To mitigate the above problems, this RFC proposes to tie std::io directly to the native threading model, while moving libgreen and its supporting infrastructure into an external Cargo package with its own I/O API.

The near-term plan

std::io and native threading

The plan is to entirely remove librustrt, including all of the traits. The abstraction layers will then become:

  • Highest level: libstd, providing cross-platform, high-level I/O and scheduling abstractions. The crate will depend on libnative (the opposite of today’s situation).

  • Mid-level: libnative, providing a cross-platform Rust interface for I/O and scheduling. The API will be relatively low-level, compared to libstd. The crate will depend on libsys.

  • Low-level: libsys (renamed from liblibc), providing platform-specific Rust bindings to system C APIs.

In this scheme, the actual API of libstd will not change significantly. But its implementation will invoke functions in libnative directly, rather than going through a trait object.

A goal of this work is to minimize the complexity of embedding Rust code in other contexts. It is not yet clear what the final embedding API will look like.

Green threading

Despite tying libstd to native threading, however, libgreen will still be supported – at least initially. The infrastructure in libgreen and friends will move into its own Cargo package.

Initially, the green threading package will support essentially the same interface it does today; there are no immediate plans to change its API, since the focus will be on first improving the native threading API. Note, however, that the I/O API will be exposed separately within libgreen, as opposed to the current exposure through std::io.

The long-term plan

Ultimately, a large motivation for the proposed refactoring is to allow the APIs for native I/O to grow.

In particular, over time we should expose more of the underlying system capabilities under the native threading model. Whenever possible, these capabilities should be provided at the libstd level – the highest level of cross-platform abstraction. However, an important goal is also to provide nonblocking and/or asynchronous I/O, for which system APIs differ greatly. It may be necessary to provide additional, platform-specific crates to expose this functionality. Ideally, these crates would interoperate smoothly with libstd, so that, for example, a libposix crate would allow using a poll operation directly against a std::io::fs::File value.

We also wish to expose “lowering” operations in libstd – APIs that allow you to get at the file descriptor underlying a std::io::fs::File, for example.

On the other hand, we very much want to explore and support truly lightweight M:N task models (that do not require per-task stacks) – supporting efficient data parallelism with work stealing for CPU-bound computations. These lightweight models will not provide any special support for I/O. But they may benefit from a notion of “task-local storage” and interfacing with the task scheduler when explicitly synchronizing between tasks (via channels, for example).

All of the above long-term plans will require substantial new design and implementation work, and the specifics are out of scope for this RFC. The main point, though, is that the refactoring proposed by this RFC will make it much more plausible to carry out such work.

Finally, a guiding principle for the above work is uncompromising support for native system APIs, in terms of both functionality and performance. For example, it must be possible to use thread-local storage without significant overhead, which is very much not the case today. Any abstractions to support M:N threading models – including the now-external libgreen package – must respect this constraint.

Drawbacks

The main drawback of this proposal is that green I/O will be provided by a forked interface of std::io. This change makes green threading “second class”, and means there’s more to learn when using both models together.

This setup also somewhat increases the risk of invoking native blocking I/O on a green thread – though of course that risk is very much present today. One way of mitigating this risk in general is the Java executor approach, where the native “worker” threads that are executing the green thread scheduler are monitored for blocking, and new worker threads are spun up as needed.

Unresolved questions

There are many unresolved questions about the exact details of the refactoring, but these are considered implementation details, since the libstd interface itself will not substantially change as part of this RFC.

Summary

The || unboxed closure form should be split into two forms—|| for nonescaping closures and move || for escaping closures—and the capture clauses and self type specifiers should be removed.

Motivation

Having to specify ref and the capture mode for each unboxed closure is inconvenient (see Rust PR rust-lang/rust#16610). It would be more convenient for the programmer if the type of the closure and the modes of the upvars could be inferred. This also eliminates the “line-noise” syntaxes like |&:|, which are arguably unsightly.

Not all knobs can be removed, however—the programmer must manually specify whether each closure is escaping or nonescaping. To see this, observe that no sensible default for the closure || (*x).clone() exists: if the function is nonescaping, it’s a closure that returns a copy of x every time but does not move x into it; if the function is escaping, it’s a closure that returns a copy of x and takes ownership of x.

Therefore, we need two forms: one for nonescaping closures and one for escaping closures. Nonescaping closures are the commonest, so they get the || syntax that we have today, and a new move || syntax will be introduced for escaping closures.

Detailed design

For unboxed closures specified with ||, the capture modes of the free variables are determined as follows:

  1. Any variable which is closed over and borrowed mutably is by-reference and mutably borrowed.

  2. Any variable of a type that does not implement Copy which is moved within the closure is captured by value.

  3. Any other variable which is closed over is by-reference and immutably borrowed.

The trait that the unboxed closure implements is FnOnce if any variables were moved out of the closure; otherwise FnMut if there are any variables that are closed over and mutably borrowed; otherwise Fn.

The ref prefix for unboxed closures is removed, since it is now essentially implied.

We introduce a new grammar production, move ||. The value returned by a move || implements FnOnce, FnMut, or Fn, as determined above; thus, for example, move |x: int, y| x + y produces an unboxed closure that implements the Fn(int, int) -> int trait (and thus the FnOnce(int, int) -> int trait by inheritance). Free variables referenced by a move || closure are always captured by value.
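
As a concrete illustration (in current Rust, where this design was ultimately adopted), the same closure body captures differently with and without move:

fn main() {
    let x = vec![1, 2, 3];

    // Nonescaping: `x` is captured by reference (rule 3 above),
    // so it remains usable after the closure.
    let nonescaping = || x.len();
    assert_eq!(nonescaping(), 3);
    assert_eq!(x.len(), 3);

    // Escaping: `move` captures `x` by value; the closure owns it
    // and could be returned or sent to another thread.
    let escaping = move || x.len();
    assert_eq!(escaping(), 3);
    // `x` is moved and no longer usable here.
}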

In the trait reference grammar, we will change the |&:| sugar to Fn(), the |&mut:| sugar to FnMut(), and the |:| sugar to FnOnce(). Thus what was before written fn foo<F:|&mut: int, int| -> int>() will be fn foo<F:FnMut(int, int) -> int>().

It is important to note that the trait reference syntax and closure construction syntax are purposefully distinct. This is because either the || form or the move || form can construct any of FnOnce, FnMut, or Fn closures.

Drawbacks

  1. Having two syntaxes for closures could be seen as unfortunate.

  2. move becomes a keyword.

Alternatives

  1. Keep the status quo: |:|/|&mut:|/|&:| are the only ways to create unboxed closures, and ref must be used to get by-reference upvars.

  2. Use some syntax other than move || for escaping closures.

  3. Keep the |:|/|&mut:|/|&:| syntax only for trait reference sugar.

  4. Use fn() syntax for trait reference sugar.

Unresolved questions

There may be unforeseen complications in doing the inference.

  • Start Date: 2014-09-16
  • RFC PR #: https://github.com/rust-lang/rfcs/pull/234
  • Rust Issue #: https://github.com/rust-lang/rust/issues/17323

Summary

Make enum variants part of both the type and value namespaces.

Motivation

We might, post-1.0, want to allow using enum variants as types. This would be backwards incompatible, because if a module already has a value with the same name as the variant in scope, then there will be a name clash.

Detailed design

Enum variants would always be part of both the type and value namespaces. Variants would not, however, be usable as types - we might want to allow this later, but it is out of scope for this RFC.

Data

Occurrences of name clashes in the Rust repo:

  • Key in rustrt::local_data

  • InAddr in native::io::net

  • Ast in regex::parse

  • Class in regex::parse

  • Native in regex::re

  • Dynamic in regex::re

  • Zero in num::bigint

  • String in term::terminfo::parm

  • String in serialize::json

  • List in serialize::json

  • Object in serialize::json

  • Argument in fmt_macros

  • Metadata in rustc_llvm

  • ObjectFile in rustc_llvm

  • ItemDecorator in syntax::ext::base

  • ItemModifier in syntax::ext::base

  • FunctionDebugContext in rustc::middle::trans::debuginfo

  • AutoDerefRef in rustc::middle::ty

  • MethodParam in rustc::middle::typeck

  • MethodObject in rustc::middle::typeck

That’s a total of 20 in the compiler and libraries.

Drawbacks

Prevents the common-ish idiom of having a struct with the same name as a variant and then having a value of that struct be the variant’s data.
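
A sketch of that idiom, with invented names. Note that this compiles in today’s Rust because variants ended up namespaced under their enum; at the time of this RFC, variants were injected into the enclosing module, so under this proposal the two Keys below would have clashed.

// A struct holding a variant's data, sharing the variant's name.
struct Key {
    code: u32,
}

enum Input {
    Key(Key), // variant named after the struct it wraps
    Click,
}

fn main() {
    let input = Input::Key(Key { code: 13 });
    match input {
        Input::Key(k) => println!("key with code {}", k.code),
        Input::Click => println!("click"),
    }
}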

Alternatives

Don’t do it. That would prevent us making changes to the typed-ness of enums in the future. If we accept this RFC, but at some point we decide we never want to do anything with enum variants and types, we could always roll back this change backwards compatibly.

Unresolved questions

N/A

Summary

This is a combined conventions and library stabilization RFC. The goal is to establish a set of naming and signature conventions for std::collections.

The major components of the RFC include:

  • Removing most of the traits in collections.

  • A general proposal for solving the “equiv” problem, as well as improving MaybeOwned.

  • Patterns for overloading on by-need values and predicates.

  • Initial, forwards-compatible steps toward Iterable.

  • A coherent set of API conventions across the full variety of collections.

A big thank-you to @Gankro, who helped collect API information and worked through an initial pass of some of the proposals here.

Motivation

This RFC aims to improve the design of the std::collections module in preparation for API stabilization. There are a number of problems that need to be addressed, as spelled out in the subsections below.

Collection traits

The collections module defines several traits:

  • Collection
  • Mutable
  • MutableSeq
  • Deque
  • Map, MutableMap
  • Set, MutableSet

There are several problems with the current trait design:

  • Most important: the traits do not provide iterator methods like iter. It is not possible to do so in a clean way without higher-kinded types, as the RFC explains in more detail below.

  • The split between mutable and immutable traits is not well-motivated by any of the existing collections.

  • The methods defined in these traits are somewhat anemic compared to the suite of methods provided on the concrete collections that implement them.

Divergent APIs

Despite the current collection traits, the APIs of the various concrete collections have diverged; there is not a globally coherent design, and there are many inconsistencies.

One problem in particular is the lack of clear guiding principles for the API design. This RFC proposes a few along the way.

Providing slice APIs on Vec and String

The String and Vec types each provide a limited subset of the methods provided on string and vector slices, but there is not a clear reason to limit the API in this way. Today, one has to write things like some_str.as_slice().contains(...), which is not ergonomic or intuitive.

The Equiv problem

There is a more subtle problem related to slices. It’s common to use a HashMap with owned String keys, but then the natural API for things like lookup is not very usable:

fn find(&self, k: &K) -> Option<&V>

The problem is that, since K will be String, the find function requests a &String value – whereas one typically wants to work with the more flexible &str slices. In particular, using find with a literal string requires something like:

map.find(&"some literal".to_string())

which is unergonomic and requires an extra allocation just to get a borrow that, in some sense, was already available.

The current HashMap API works around this problem by providing an additional set of methods that uses a generic notion of “equivalence” of values that have different types:

pub trait Equiv<T> {
    fn equiv(&self, other: &T) -> bool;
}

impl Equiv<str> for String {
    fn equiv(&self, other: &str) -> bool {
        self.as_slice() == other
    }
}

fn find_equiv<Q: Hash<S> + Equiv<K>>(&self, k: &Q) -> Option<&V>

There are a few downsides to this approach:

  • It requires a duplicated _equiv variant of each method taking a reference to the key. (This downside could likely be mitigated using multidispatch.)

  • Its correctness depends on equivalent values producing the same hash, which is not checked.

  • String-keyed hash maps are very common, so newcomers are likely to run headlong into the problem. First, find will fail to work in the expected way. But the signature of find_equiv is more difficult to understand than find, and it’s not immediately obvious that it solves the problem.

  • It is the right API for HashMap, but not helpful for e.g. TreeMap, which would want an analog for Ord.

The TreeMap API currently deals with this problem in an entirely different way:

/// Returns the value for which f(key) returns Equal.
/// f is invoked with current key and guides tree navigation.
/// That means f should be aware of natural ordering of the tree.
fn find_with(&self, f: |&K| -> Ordering) -> Option<&V>

Besides being less convenient – you cannot write map.find_with("some literal") – this function navigates the tree according to an ordering that may have no relationship to the actual ordering of the tree.

MaybeOwned

Sometimes a function does not know in advance whether it will need or produce an owned copy of some data, or whether a borrow suffices. A typical example is the from_utf8_lossy function:

fn from_utf8_lossy<'a>(v: &'a [u8]) -> MaybeOwned<'a>

This function will return a string slice if the input was correctly utf8 encoded – without any allocation. But if the input has invalid utf8 characters, the function allocates a new String and inserts utf8 “replacement characters” instead. Hence, the return type is an enum:

pub enum MaybeOwned<'a> {
    Slice(&'a str),
    Owned(String),
}

This interface makes it possible to allocate only when necessary, but the MaybeOwned type (and connected machinery) are somewhat ad hoc – and specialized to String/str. It would be somewhat more palatable if there were a single “maybe owned” abstraction usable across a wide range of types.

Iterable

A frequently-requested feature for the collections module is an Iterable trait for “values that can be iterated over”. There are two main motivations:

  • Abstraction. Today, you can write a function that takes a single Iterator, but you cannot write a function that takes a container and then iterates over it multiple times (perhaps with differing mutability levels). An Iterable trait could allow that.

  • Ergonomics. You’d be able to write

    for v in some_vec { ... }

    rather than

    for v in some_vec.iter() { ... }

    and consume_iter(some_vec) rather than consume_iter(some_vec.iter()).

Detailed design

The collections today

The concrete collections currently available in std fall into roughly three categories:

  • Sequences

    • Vec
    • String
    • Slices
    • Bitv
    • DList
    • RingBuf
    • PriorityQueue
  • Sets

    • HashSet
    • TreeSet
    • TrieSet
    • EnumSet
    • BitvSet
  • Maps

    • HashMap
    • TreeMap
    • TrieMap
    • LruCache
    • SmallIntMap

The primary goal of this RFC is to establish clean and consistent APIs that apply across each group of collections.

Before diving into the details, there is one high-level change that should be made to these collections. The PriorityQueue collection should be renamed to BinaryHeap, following the convention that concrete collections are named according to their implementation strategy, not the abstract semantics they implement. We may eventually want PriorityQueue to be a trait that’s implemented by multiple concrete collections.

The LruCache could be renamed for a similar reason (it uses a HashMap in its implementation). However, the implementation is actually generic with respect to this underlying map, and so in the long run (with HKT and other language changes) LruCache should probably add a type parameter for the underlying map, defaulted to HashMap.

Design principles

  • Centering on Iterators. The Iterator trait is a strength of Rust’s collections library. Because so many APIs can produce iterators, adding an API that consumes one is very powerful – and conversely as well. Moreover, iterators are highly efficient, since you can chain several layers of modification without having to materialize intermediate results. Thus, whenever possible, collection APIs should strive to work with iterators.

    In particular, some existing convenience methods avoid iterators for either performance or ergonomic reasons. We should instead improve the ergonomics and performance of iterators, so that these extra convenience methods are not necessary and so that all collections can benefit.

  • Minimizing method variants. One problem with some of the current collection APIs is the proliferation of method variants. For example, HashMap includes seven methods that begin with the name find! While each method has a motivation, the API as a whole can be bewildering, especially to newcomers.

    When possible, we should leverage the trait system, or find other abstractions, to reduce the need for method variants while retaining their ergonomics and power.

  • Conservatism. It is easier to add APIs than to take them away. This RFC takes a fairly conservative stance on what should be included in the collections APIs. In general, APIs should be very clearly motivated by a wide variety of use cases, either for expressiveness, performance, or ergonomics.

Removing the traits

This RFC proposes a somewhat radical step for the collections traits: rather than reform them, we should eliminate them altogether – for now.

Unlike inherent methods, which can easily be added and deprecated over time, a trait is “forever”: there are very few backwards-compatible modifications to traits. Thus, for something as fundamental as collections, it is prudent to take our time to get the traits right.

Lack of iterator methods

In particular, there is one way in which the current traits are clearly wrong: they do not provide standard methods like iter, despite these being fundamental to working with collections in Rust. Sadly, this gap is due to inexpressiveness in the language, which makes directly defining iterator methods in a trait impossible:

trait Iter {
    type A;
    type I: Iterator<&'a A>;    // what is the lifetime here?
    fn iter<'a>(&'a self) -> I; // and how to connect it to self?
}

The problem is that, when implementing this trait, the return type I of iter should depend on the lifetime of self. For example, the corresponding method in Vec looks like the following:

impl<T> Vec<T> {
    fn iter<'a>(&'a self) -> Items<'a, T> { ... }
}

This means that, given a Vec<T>, there isn’t a single type Items<T> for iteration – rather, there is a family of types, one for each input lifetime. In other words, the associated type I in the Iter needs to be “higher-kinded”: not just a single type, but rather a family:

trait Iter {
    type A;
    type I<'a>: Iterator<&'a A>;
    fn iter<'a>(&'a self) -> I<'a>;
}

In this case, I is parameterized by a lifetime, but in other cases (like map) an associated type needs to be parameterized by a type.

In general, such higher-kinded types (HKTs) are a much-requested feature for Rust. But the design and implementation of higher-kinded types is, by itself, a significant investment.

HKT would also allow for parameterization over smart pointer types, which has many potential use cases in the context of collections.

Thus, the goal in this RFC is to do the best we can without HKT for now, while allowing a graceful migration if or when HKT is added.

Persistent/immutable collections

Another problem with the current collection traits is the split between immutable and mutable versions. In the long run, we will probably want to provide persistent collections (which allow non-destructive “updates” that create new collections that share most of their data with the old ones).

However, persistent collection APIs have not been thoroughly explored in Rust; it would be hasty to standardize on a set of traits until we have more experience.

Downsides of removal

There are two main downsides to removing the traits without a replacement:

  1. It becomes impossible to write code using generics over a “kind” of collection (like Map).

  2. It becomes more difficult to ensure that the collections share a common API.

For point (1), first, if the APIs are sufficiently consistent it should be possible to transition code from e.g. a TreeMap to a HashMap by changing very few lines of code. Second, generic programming is currently quite limited, given the inability to iterate. Finally, generic programming over collections is a large design space (with much precedent in C++, for example), and we should take our time and gain more experience with a variety of concrete collections before settling on a design.

For point (2), first, the current traits have failed to keep the APIs in line, as we will see below. Second, this RFC is the antidote: we establish a clear set of conventions and APIs for concrete collections up front, and stabilize on those, which should make it easy to add traits later on.

Why not leave the traits as “experimental”?

An alternative to removal would be to leave the traits intact, but marked as experimental, with the intent to radically change them later.

Such a strategy doesn’t buy much relative to removal (given the arguments above), but risks the traits becoming “de facto” stable if people begin using them en masse.

Solving the _equiv and MaybeOwned problems

The basic problem that leads to _equiv methods is that:

  • &String and &str are not the same type.
  • The &str type is more flexible and hence more widely used.
  • Code written for a generic type T that takes a reference &T will therefore not be suitable when T is instantiated with String.

A similar story plays out for &Vec<T> and &[T], and with DST and custom slice types the same problem will arise elsewhere.

The Borrow trait

This RFC proposes to use a trait, Borrow, to connect borrowed and owned data in a generic fashion:

/// A trait for borrowing.
trait Borrow<Sized? B> {
    /// Immutably borrow from an owned value.
    fn borrow(&self) -> &B;

    /// Mutably borrow from an owned value.
    fn borrow_mut(&mut self) -> &mut B;
}

// The Sized bound means that this impl does not overlap with the impls below.
impl<T: Sized> Borrow<T> for T {
    fn borrow(&self) -> &T {
        self
    }
    fn borrow_mut(&mut self) -> &mut T {
        self
    }
}

impl Borrow<str> for String {
    fn borrow(&self) -> &str {
        self.as_slice()
    }
    fn borrow_mut(&mut self) -> &mut str {
        self.as_mut_slice()
    }
}

impl<T> Borrow<[T]> for Vec<T> {
    fn borrow(&self) -> &[T] {
        self.as_slice()
    }
    fn borrow_mut(&mut self) -> &mut [T] {
        self.as_mut_slice()
    }
}

(Note: thanks to @epdtry for suggesting this variation! The original proposal is listed in the Alternatives.)

A primary goal of the design is allowing a blanket impl for non-sliceable types (the first impl above). This blanket impl ensures that all new sized, cloneable types are automatically borrowable; new impls are required only for new unsized types, which are rare. The Sized bound on the initial impl means that we can freely add impls for unsized types (like str and [T]) without running afoul of coherence.

Because of the blanket impl, the Borrow trait can largely be ignored except when it is actually used – which we describe next.

Using Borrow to replace _equiv methods

With the Borrow trait in place, we can eliminate the _equiv method variants by asking map keys to be Borrow:

impl<K,V> HashMap<K,V> where K: Hash + Eq {
    fn find<Sized? Q>(&self, k: &Q) -> Option<&V> where K: Borrow<Q>, Q: Hash + Eq { ... }
    fn contains_key<Sized? Q>(&self, k: &Q) -> bool where K: Borrow<Q>, Q: Hash + Eq { ... }
    fn insert(&mut self, k: K, v: V) -> Option<V> { ... }

    ...
}
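
For concreteness, this is the usage the Borrow-based signatures enable; today’s std adopted exactly this design, with find renamed to get per the conventions later in this RFC:

use std::collections::HashMap;

fn main() {
    let mut map: HashMap<String, i32> = HashMap::new();
    map.insert("some literal".to_string(), 1);

    // A &str argument works directly: String: Borrow<str> and
    // str: Hash + Eq, so no allocation is needed for the lookup.
    assert_eq!(map.get("some literal"), Some(&1));
}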

The benefits of this approach over _equiv are:

  • The Borrow trait captures the borrowing relationship between an owned data structure and both references to it and slices from it – once and for all. This means that it can be used anywhere we need to program generically over “borrowed” data. In particular, the single trait works for both HashMap and TreeMap, and should work for other kinds of data structures as well. It also helps generalize MaybeOwned, for similar reasons (see below.)

    A very important consequence is that the map methods using Borrow can potentially be put into a common Map trait that’s implemented by HashMap, TreeMap, and others. While we do not propose to do so now, we definitely want to do so later on.

  • When using a HashMap<String, T>, all of the basic methods like find, contains_key and insert “just work”, without forcing you to think about &String vs &str.

  • We don’t need separate _equiv variants of methods. (However, this could probably be addressed with multidispatch by providing a blanket Equiv implementation.)

On the other hand, this approach retains some of the downsides of _equiv:

  • The signature for methods like find and contains_key is more complex than their current signatures. There are two counterpoints. First, over time the Borrow trait is likely to become a well-known concept, so the signature will not appear completely alien. Second, what is perhaps more important than the signature is that, when using find on HashMap<String, T>, various method arguments just work as expected.

  • The API does not guarantee “coherence”: the Hash and Eq (or Ord, for TreeMap) implementations for the owned and borrowed keys might differ, breaking key invariants of the data structure. This is already the case with _equiv.

The Alternatives section includes a variant of Borrow that doesn’t suffer from these downsides, but has some downsides of its own.

Clone-on-write (Cow) pointers

A side-benefit of the Borrow trait is that we can provide a more general version of MaybeOwned as a “clone-on-write” smart pointer:

/// A generalization of Clone.
trait FromBorrow<Sized? B>: Borrow<B> {
    fn from_borrow(b: &B) -> Self;
}

/// A clone-on-write smart pointer
pub enum Cow<'a, T, B> where T: FromBorrow<B> {
    Shared(&'a B),
    Owned(T)
}

impl<'a, T, B> Cow<'a, T, B> where T: FromBorrow<B> {
    pub fn new(shared: &'a B) -> Cow<'a, T, B> {
        Shared(shared)
    }

    pub fn new_owned(owned: T) -> Cow<'static, T, B> {
        Owned(owned)
    }

    pub fn is_owned(&self) -> bool {
        match *self {
            Owned(_) => true,
            Shared(_) => false
        }
    }

    pub fn to_owned_mut(&mut self) -> &mut T {
        match *self {
            Shared(shared) => {
                *self = Owned(FromBorrow::from_borrow(shared));
                self.to_owned_mut()
            }
            Owned(ref mut owned) => owned
        }
    }

    pub fn into_owned(self) -> T {
        match self {
            Shared(shared) => FromBorrow::from_borrow(shared),
            Owned(owned) => owned
        }
    }
}

impl<'a, T, B> Deref<B> for Cow<'a, T, B> where T: FromBorrow<B>  {
    fn deref(&self) -> &B {
        match *self {
            Shared(shared) => shared,
            Owned(ref owned) => owned.borrow()
        }
    }
}

impl<'a, T, B> DerefMut<B> for Cow<'a, T, B> where T: FromBorrow<B> {
    fn deref_mut(&mut self) -> &mut B {
        self.to_owned_mut().borrow_mut()
    }
}

The type Cow<'a, String, str> is roughly equivalent to today’s MaybeOwned<'a> (and Cow<'a, Vec<T>, [T]> to MaybeOwnedVector<'a, T>).

By implementing Deref and DerefMut, the Cow type acts as a smart pointer – but in particular, the mut variant actually clones if the pointed-to value is not currently owned. Hence “clone on write”.

One slight gotcha with the design is that &mut str is not very useful, while &mut String is (since it allows extending the string, for example). On the other hand, Deref and DerefMut must deref to the same underlying type, and for Deref to not require cloning, it must yield a &str value.

Thus, the Cow pointer offers a separate to_owned_mut method that yields a mutable reference to the owned version of the type.

Note that, by not using into_owned, the Cow pointer itself may be owned by some other data structure (perhaps as part of a collection) and will internally track whether an owned copy is available.
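
As a usage sketch, here is the clone-on-write pattern with the Cow type that eventually landed in std::borrow (the two-parameter Cow<'a, T, B> proposed here was later simplified to a single-parameter Cow<'a, B> via a ToOwned bound):

use std::borrow::Cow;

// Allocate a new String only when a replacement actually occurs.
fn replace_spaces(input: &str) -> Cow<'_, str> {
    if input.contains(' ') {
        Cow::Owned(input.replace(' ', "_"))
    } else {
        Cow::Borrowed(input)
    }
}

fn main() {
    assert!(matches!(replace_spaces("no_spaces"), Cow::Borrowed(_)));
    assert!(matches!(replace_spaces("has spaces"), Cow::Owned(_)));
}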

Altogether, this RFC proposes to introduce Borrow and Cow as above, and to deprecate MaybeOwned and MaybeOwnedVector. The API changes for the collections are discussed below.

IntoIterator (and Iterable)

As discussed earlier, some form of an Iterable trait is desirable for both expressiveness and ergonomics. Unfortunately, a full treatment of Iterable requires HKT for reasons similar to those given for the collection traits. However, it’s possible to get some of the way there in a forwards-compatible fashion.

In particular, the following two traits work fine (with associated items):

trait Iterator {
    type A;
    fn next(&mut self) -> Option<A>;
    ...
}

trait IntoIterator {
    type A;
    type I: Iterator<A = A>;

    fn into_iter(self) -> I;
}

Because IntoIterator consumes self, lifetimes are not an issue.

It’s tempting to also define a trait like:

trait Iterable<'a> {
    type A;
    type I: Iterator<&'a A>;

    fn iter(&'a self) -> I;
}

(along the lines of those proposed by an earlier RFC).

The problem with Iterable as defined above is that it’s locked to a particular lifetime up front. But in many cases, the needed lifetime is not even nameable in advance:

fn iter_through_rc<I>(c: Rc<I>) where I: Iterable<?> {
    // the lifetime of the borrow is established here,
    // so cannot even be named in the function signature
    for x in c.iter() {
        // ...
    }
}

To make this kind of example work, you’d need to be able to say something like:

where <'a> I: Iterable<'a>

that is, that I implements Iterable for every lifetime 'a. While such a feature is feasible to add to where clauses, the HKT solution is undoubtedly cleaner.

Fortunately, we can have our cake and eat it too. This RFC proposes the IntoIterator trait above, together with the following blanket impl:

impl<I: Iterator> IntoIterator for I {
    type A = I::A;
    type I = I;
    fn into_iter(self) -> I {
        self
    }
}

which means that taking IntoIterator is strictly more flexible than taking Iterator. Note that in other languages (like Java), iterators are not iterable because the latter implies an unlimited number of iterations. But because IntoIterator consumes self, it yields only a single iteration, so all is good.
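
A quick demonstration of that point, valid in today’s Rust, where for loops do consume IntoIterator:

fn main() {
    // `map` yields an Iterator, not a collection; the blanket impl
    // lets it appear directly where an IntoIterator is expected.
    let doubled = (0..3).map(|x| x * 2);
    for v in doubled {
        println!("{}", v); // prints 0, 2, 4
    }
}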

For individual collections, one can then implement IntoIterator on both the collection and borrows of it:

impl<T> IntoIterator for Vec<T> {
    type A = T;
    type I = MoveItems<T>;
    fn into_iter(self) -> MoveItems<T> { ... }
}

impl<'a, T> IntoIterator for &'a Vec<T> {
    type A = &'a T;
    type I = Items<'a, T>;
    fn into_iter(self) -> Items<'a, T> { ... }
}

impl<'a, T> IntoIterator for &'a mut Vec<T> {
    type A = &'a mut T;
    type I = ItemsMut<'a, T>;
    fn into_iter(self) -> ItemsMut<'a, T> { ... }
}

If/when HKT is added later on, we can add an Iterable trait and a blanket impl like the following:

// the HKT version
trait Iterable {
    type A;
    type I<'a>: Iterator<&'a A>;
    fn iter<'a>(&'a self) -> I<'a>;
}

impl<'a, C: Iterable> IntoIterator for &'a C {
    type A = &'a C::A;
    type I = C::I<'a>;
    fn into_iter(self) -> I {
        self.iter()
    }
}

This gives a clean migration path: once Vec implements Iterable, it can drop the IntoIterator impls for borrowed vectors, since they will be covered by the blanket implementation. No code should break.

Likewise, if we add a feature like the “universal” where clause mentioned above, it can be used to deal with embedded lifetimes as in the iter_through_rc example; and if the HKT version of Iterable is later added, thanks to the suggested blanket impl for IntoIterator that where clause could be changed to use Iterable instead, again without breakage.

Benefits of IntoIterator

What do we gain by incorporating IntoIterator today?

This RFC proposes that for loops should use IntoIterator rather than Iterator. With the blanket impl of IntoIterator for any Iterator, this is not a breaking change. However, given the IntoIterator impls for Vec above, we would be able to write:

let v: Vec<Foo> = ...

for x in &v { ... }     // iterate over &Foo
for x in &mut v { ... } // iterate over &mut Foo
for x in v { ... }      // iterate over Foo

Similarly, methods that currently take slices or iterators can be changed to take IntoIterator instead, immediately becoming more general and more ergonomic.

In general, IntoIterator will allow us to move toward more Iterator-centric APIs today, in a way that’s compatible with HKT tomorrow.

Additional methods

Another typical desire for an Iterable trait is to offer defaulted versions of methods that basically re-export iterator methods on containers (see the earlier RFC). Usually these methods would go through a reference iterator (i.e. the iter method) rather than a moving iterator.

It is possible to add such methods using the design proposed above, but there are some drawbacks. For example, should Vec::map produce an iterator, or a new vector? It would be possible to do the latter generically, but only with HKT. (See this discussion.)

This RFC only proposes to add the following method via IntoIterator, as a convenience for a common pattern:

trait IterCloned {
    type A;
    type I: Iterator<A>;
    fn iter_cloned(self) -> I;
}

impl<'a, T, I: IntoIterator> IterCloned for I where I::A = &'a T {
    type A = T;
    type I = ClonedItems<I>;
    fn iter_cloned(self) -> I { ... }
}

(The iter_cloned method will help reduce the number of method variants in general for collections, as we will see below).

We will leave to later RFCs the incorporation of additional methods. Notice, in particular, that such methods can wait until we introduce an Iterable trait via HKT without breaking backwards compatibility.

Minimizing variants: ByNeed and Predicate traits

There are several kinds of methods that, in their most general form take closures, but for which convenience variants taking simpler data are common:

  • Taking values by need. For example, consider the unwrap_or and unwrap_or_else methods in Option:

    fn unwrap_or(self, def: T) -> T
    fn unwrap_or_else(self, f: || -> T) -> T

    The unwrap_or_else method is the most general: it invokes the closure to compute a default value only when self is None. When the default value is expensive to compute, this by-need approach helps. But often the default value is cheap, and closures are somewhat annoying to write, so unwrap_or provides a convenience wrapper.

  • Taking predicates. For example, a method like contains often shows up (inconsistently!) in two variants:

    fn contains(&self, elem: &T) -> bool; // where T: PartialEq
    fn contains_fn(&self, pred: |&T| -> bool) -> bool;

    Again, the contains_fn version is the more general, but it’s convenient to provide a specialized variant when the element type can be compared for equality, to avoid writing explicit closures.
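
For reference, here is the by-need distinction as it stands in today’s Rust, before any such consolidation:

fn expensive_default() -> i32 {
    5 // stand-in for a costly computation
}

fn main() {
    let missing: Option<i32> = None;

    // Eager: the argument is evaluated whether or not it is needed.
    assert_eq!(missing.unwrap_or(5), 5);

    // By-need: the closure runs only in the None case.
    assert_eq!(missing.unwrap_or_else(expensive_default), 5);
}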

As it turns out, with multidispatch it is possible to use a trait to express these variants through overloading:

trait ByNeed<T> {
    fn compute(self) -> T;
}

impl<T> ByNeed<T> for T {
    fn compute(self) -> T {
        self
    }
}

// Due to multidispatch, this impl does NOT overlap with the above one
impl<T> ByNeed<T> for || -> T {
    fn compute(self) -> T {
        self()
    }
}

impl<T> Option<T> {
    fn unwrap_or<U>(self, def: U) -> T where U: ByNeed<T> { ... }
    ...
}

trait Predicate<T> {
    fn check(&self, &T) -> bool;
}

impl<T: Eq> Predicate<T> for &T {
    fn check(&self, t: &T) -> bool {
        *self == t
    }
}

impl<T> Predicate<T> for |&T| -> bool {
    fn check(&self, t: &T) -> bool {
        (*self)(t)
    }
}

impl<T> Vec<T> {
    fn contains<P>(&self, pred: P) -> bool where P: Predicate<T> { ... }
    ...
}

Since these two patterns are particularly common throughout std, this RFC proposes adding both of the above traits, and using them to cut down on the number of method variants.

In particular, some methods on string slices currently work with CharEq, which is similar to Predicate<char>:

pub trait CharEq {
    fn matches(&mut self, char) -> bool;
    fn only_ascii(&self) -> bool;
}

The difference is the only_ascii method, which is used to optimize certain operations when the predicate only holds for characters in the ASCII range.

To keep these optimizations intact while connecting to Predicate, this RFC proposes the following restructuring of CharEq:

pub trait CharPredicate: Predicate<char> {
    fn only_ascii(&self) -> bool {
        false
    }
}

Why not leverage unboxed closures?

A natural question is: why not use the traits for unboxed closures to achieve a similar effect? For example, you could imagine writing a blanket impl for Fn(&T) -> bool for any T: PartialEq, which would allow PartialEq values to be used anywhere a predicate-like closure was requested.

The problem is that these blanket impls will often conflict. In particular, any type T could implement Fn() -> T, and that single blanket impl would preclude any others (at least, assuming that unboxed closure traits treat the argument and return types as associated (output) types).

In addition, the explicit use of traits like Predicate makes the intended semantics more clear, and the overloading less surprising.

The APIs

Now we’ll delve into the detailed APIs for the various concrete collections. These APIs will often be given in tabular form, grouping together common APIs across multiple collections. When writing these function signatures:

  • We will assume a type parameter T for Vec, BinaryHeap, DList and RingBuf; we will also use this parameter for APIs on String, where it should be understood as char.

  • We will assume type parameters K: Borrow and V for HashMap and TreeMap; for TrieMap and SmallIntMap the K is assumed to be uint.

  • We will assume a type parameter K: Borrow for HashSet and TreeSet; for BitvSet it is assumed to be uint.

We will begin by outlining the most widespread APIs in tables, making it easy to compare names and signatures across different kinds of collections. Then we will focus on some APIs specific to particular classes of collections – e.g. sets and maps. Finally, we will briefly discuss APIs that are specific to a single concrete collection.

Construction

All of the collections should support a static function:

fn new() -> Self

that creates an empty version of the collection; the constructor may take arguments needed to set up the collection, e.g. the capacity for LruCache.

Several collections also support separate constructors for providing capacities in advance; these are discussed below.

The FromIterator trait

All of the collections should implement the FromIterator trait:

pub trait FromIterator {
    type A;
    fn from_iter<T>(T) -> Self where T: IntoIterator<A = A>;
}

Note that this varies from today’s FromIterator by consuming an IntoIterator rather than Iterator. As explained above, this choice is strictly more general and will not break any existing code.

This constructor initializes the collection with the contents of the iterator. For maps, the iterator is over key/value pairs, and the semantics is equivalent to inserting those pairs in order; if keys are repeated, the last value is the one left in the map.
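
In practice, FromIterator is usually reached through collect; a sketch using today’s spelling:

use std::collections::HashMap;

fn main() {
    let map: HashMap<&str, i32> =
        vec![("a", 1), ("b", 2), ("b", 3)].into_iter().collect();

    // The key "b" was repeated: the last value is the one kept.
    assert_eq!(map["b"], 3);
    assert_eq!(map.len(), 2);
}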

Insertion

The table below gives methods for inserting items into various concrete collections:

Operation                                         Collections
fn push(&mut self, T)                             Vec, BinaryHeap, String
fn push_front(&mut self, T)                       DList, RingBuf
fn push_back(&mut self, T)                        DList, RingBuf
fn insert(&mut self, uint, T)                     Vec, RingBuf, String
fn insert(&mut self, K::Owned) -> bool            HashSet, TreeSet, TrieSet, BitvSet
fn insert(&mut self, K::Owned, V) -> Option<V>    HashMap, TreeMap, TrieMap, SmallIntMap
fn append(&mut self, Self)                        DList
fn prepend(&mut self, Self)                       DList

There are a few changes here from the current state of affairs:

  • The DList and RingBuf data structures no longer provide push, but rather push_front and push_back. This change is based on (1) viewing them as deques and (2) not giving priority to the “front” or the “back”.

  • The insert method on maps returns the value previously associated with the key, if any. Previously, this functionality was provided by a swap method, which has been dropped (consolidating needless method variants.)

Aside from these changes, a number of insertion methods will be deprecated (e.g. the append and append_one methods on Vec). These are discussed further in the section on “specialized operations” below.

The Extend trait (was: Extendable)

In addition to the standard insertion operations above, all collections will implement the Extend trait. This trait was previously called Extendable, but in general we prefer to avoid -able suffixes and instead name the trait using a verb (or, especially, the key method offered by the trait.)

The Extend trait allows data from an arbitrary iterator to be inserted into a collection, and will be defined as follows:

pub trait Extend: FromIterator {
    fn extend<T>(&mut self, T) where T: IntoIterator<A = Self::A>;
}

As with FromIterator, this trait has been modified to take an IntoIterator value.

Deletion

The table below gives methods for removing items from various concrete collections:

Operation | Collections
--------- | -----------
fn clear(&mut self) | all
fn pop(&mut self) -> Option<T> | Vec, BinaryHeap, String
fn pop_front(&mut self) -> Option<T> | DList, RingBuf
fn pop_back(&mut self) -> Option<T> | DList, RingBuf
fn remove(&mut self, uint) -> Option<T> | Vec, RingBuf, String
fn remove(&mut self, &K) -> bool | HashSet, TreeSet, TrieSet, BitvSet
fn remove(&mut self, &K) -> Option<V> | HashMap, TreeMap, TrieMap, SmallIntMap
fn truncate(&mut self, len: uint) | Vec, String, Bitv, DList, RingBuf
fn retain<P>(&mut self, f: P) where P: Predicate<T> | Vec, DList, RingBuf
fn dedup(&mut self) where T: PartialEq | Vec, DList, RingBuf

As with the insertion methods, there are some differences from today’s API:

  • The DList and RingBuf data structures no longer provide pop, but rather pop_front and pop_back – similarly to the push methods.

  • The remove method on maps returns the value previously associated with the key, if any. Previously, this functionality was provided by a separate pop method, which has been dropped (consolidating needless method variants.)

  • The retain method takes a Predicate.

  • The truncate, retain and dedup methods are offered more widely.

Again, some of the more specialized methods are not discussed here; see “specialized operations” below.

Inspection/mutation

The next table gives methods for inspection and mutation of existing items in collections:

Operation | Collections
--------- | -----------
fn len(&self) -> uint | all
fn is_empty(&self) -> bool | all
fn get(&self, uint) -> Option<&T> | [T], Vec, RingBuf
fn get_mut(&mut self, uint) -> Option<&mut T> | [T], Vec, RingBuf
fn get(&self, &K) -> Option<&V> | HashMap, TreeMap, TrieMap, SmallIntMap
fn get_mut(&mut self, &K) -> Option<&mut V> | HashMap, TreeMap, TrieMap, SmallIntMap
fn contains<P>(&self, P) -> bool where P: Predicate<T> | [T], str, Vec, String, DList, RingBuf, BinaryHeap
fn contains(&self, &K) -> bool | HashSet, TreeSet, TrieSet, EnumSet
fn contains_key(&self, &K) -> bool | HashMap, TreeMap, TrieMap, SmallIntMap

The biggest changes from the current APIs are:

  • The find and find_mut methods have been renamed to get and get_mut. Further, all get methods return Option values and do not invoke fail!. This is part of a general convention described in the next section (on the Index traits).

  • The contains method is offered more widely.

  • There is no longer an equivalent of find_copy (which should be called find_clone). Instead, we propose to add the following method to the Option<&'a T> type where T: Clone:

    fn cloned(self) -> Option<T> {
        self.map(|x| x.clone())
    }

    so that some_map.find_copy(key) will instead be written some_map.find(key).cloned(). This method chain is slightly longer, but is more clear and allows us to drop the _copy variants. Moreover, all users of Option benefit from the new convenience method.
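    A quick sketch of the proposed cloned method in use (the values are illustrative):

    let x = 5;
    let borrowed: Option<&i32> = Some(&x);
    let owned: Option<i32> = borrowed.cloned(); // clones the referenced value
    assert_eq!(owned, Some(5));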

The Index trait

The Index and IndexMut traits provide indexing notation like v[0]:

pub trait Index {
    type Index;
    type Result;
    fn index<'a>(&'a self, index: &Self::Index) -> &'a Self::Result;
}

pub trait IndexMut {
    type Index;
    type Result;
    fn index_mut<'a>(&'a mut self, index: &Self::Index) -> &'a mut Self::Result;
}

These traits will be implemented for: [T], Vec, RingBuf, HashMap, TreeMap, TrieMap, SmallIntMap.

As a general convention, implementation of the Index traits will fail the task if the index is invalid (out of bounds or key not found); they will therefore return direct references to values. Any collection implementing Index (resp. IndexMut) should also provide a get method (resp. get_mut) as a non-failing variant that returns an Option value.

This allows us to keep indexing notation maximally concise, while still providing convenient non-failing variants (which can be used to provide a check for index validity).
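For example, under this convention (a sketch with a Vec of illustrative values):

let v = vec![10, 20, 30];
assert_eq!(v[1], 20);         // indexing fails the task on an invalid index
assert_eq!(v.get(10), None);  // get is the non-failing variant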

Iteration

Every collection should provide the standard trio of iteration methods:

fn iter(&'a self) -> Items<'a>;
fn iter_mut(&'a mut self) -> ItemsMut<'a>;
fn into_iter(self) -> ItemsMove;

and in particular implement the IntoIterator trait on both the collection type and on (mutable) references to it.
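Concretely, implementing IntoIterator on both the collection and on references to it lets for loops select the iteration mode by how the collection is passed; a sketch with Vec:

let mut v = vec![1, 2, 3];
for x in &v { println!("{}", x); }  // by reference, as with v.iter()
for x in &mut v { *x += 1; }        // by mutable reference, as with v.iter_mut()
for x in v { println!("{}", x); }   // by value, as with v.into_iter(); consumes v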

Capacity management

Many of the collections have some notion of “capacity”, which may be fixed, grow explicitly, or grow implicitly:

  • No capacity/fixed capacity: DList, TreeMap, TreeSet, TrieMap, TrieSet, slices, EnumSet
  • Explicit growth: LruCache
  • Implicit growth: Vec, RingBuf, HashMap, HashSet, BitvSet, BinaryHeap

Growable collections provide functions for capacity management, as follows.

Explicit growth

For explicitly-grown collections, the normal constructor (new) takes a capacity argument. Capacity can later be inspected or updated as follows:

fn capacity(&self) -> uint
fn set_capacity(&mut self, capacity: uint)

(Note: this renames LruCache::change_capacity to set_capacity, following the prevailing style for setter methods.)

Implicit growth

For implicitly-grown collections, the normal constructor (new) does not take a capacity, but there is an explicit with_capacity constructor, along with other functions to work with the capacity later on:

fn with_capacity(uint) -> Self
fn capacity(&self) -> uint
fn reserve(&mut self, additional: uint)
fn reserve_exact(&mut self, additional: uint)
fn shrink_to_fit(&mut self)

There are some important changes from the current APIs:

  • The reserve and reserve_exact methods now take as an argument the extra space to reserve, rather than the final desired capacity, as this usage is vastly more common. The reserve function may grow the capacity by a larger amount than requested, to ensure amortization, while reserve_exact will reserve exactly the requested additional capacity. The reserve_additional methods are deprecated.

  • The with_capacity constructor does not take any additional arguments, for uniformity with new. This change affects Bitv in particular.
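To make the new reserve semantics described above concrete, here is a small sketch (the numbers are arbitrary):

let mut v: Vec<u8> = Vec::with_capacity(4);
v.extend(0..4);                // len() == 4, capacity() >= 4
v.reserve(16);                 // ensure room for at least 16 more elements
assert!(v.capacity() >= 20);   // may over-allocate to amortize growth
v.shrink_to_fit();             // give back excess capacity where possible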

Bounded iterators

Some of the maps (e.g. TreeMap) currently offer specialized iterators over their entries starting at a given key (called lower_bound) and above a given key (called upper_bound), along with _mut variants. While the functionality is worthwhile, the names are not very clear, so this RFC proposes the following replacement API (thanks to @Gankro for the suggestion):

pub enum Bound<T> {
    /// An inclusive bound
    Included(T),

    /// An exclusive bound
    Excluded(T),

    Unbounded,
}

/// Creates a double-ended iterator over a sub-range of the collection's items,
/// starting at min, and ending at max. If min is `Unbounded`, then it will
/// be treated as "negative infinity", and if max is `Unbounded`, then it will
/// be treated as "positive infinity". Thus range(Unbounded, Unbounded) will yield
/// the whole collection.
fn range(&'a self, min: Bound<&T>, max: Bound<&T>) -> RangedItems<'a, T>;

fn range_mut(&'a mut self, min: Bound<&T>, max: Bound<&T>) -> RangedItemsMut<'a, T>;

These iterators should be provided for any maps over ordered keys (TreeMap, TrieMap and SmallIntMap).

In addition, analogous methods should be provided for sets over ordered keys (TreeSet, TrieSet, BitvSet).
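A hypothetical sketch of the proposed API in use, on a map with ordered keys:

// All entries with keys k such that 2 <= k < 8:
for (k, v) in map.range(Included(&2), Excluded(&8)) { ... }

// The whole collection:
for (k, v) in map.range(Unbounded, Unbounded) { ... }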

Set operations

Comparisons

All sets should offer the following methods, as they do today:

fn is_disjoint(&self, other: &Self) -> bool;
fn is_subset(&self, other: &Self) -> bool;
fn is_superset(&self, other: &Self) -> bool;

Combinations

Sets can also be combined using the standard operations – union, intersection, difference and symmetric difference (exclusive or). Today’s APIs for doing so look like this:

fn union<'a>(&'a self, other: &'a Self) -> I;
fn intersection<'a>(&'a self, other: &'a Self) -> I;
fn difference<'a>(&'a self, other: &'a Self) -> I;
fn symmetric_difference<'a>(&'a self, other: &'a Self) -> I;

where the I type is an iterator over keys that varies by concrete set. Working with these iterators avoids materializing intermediate sets when they’re not needed; the collect method can be used to create sets when they are. This RFC proposes to keep these names intact, following the RFC on iterator conventions.

Sets should also implement the BitOr, BitAnd, BitXor and Sub traits from std::ops, allowing the overloaded notation |, &, ^ and - to be used with sets. These are equivalent to invoking the corresponding iterator method (union, intersection, etc.) and then calling collect, but for some sets (notably BitvSet) a more efficient direct implementation is possible.
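A sketch of the intended notation, shown with by-reference operands so the original sets are not consumed (HashSet is used for concreteness):

use std::collections::HashSet;

let a: HashSet<u32> = [1, 2, 3].iter().cloned().collect();
let b: HashSet<u32> = [2, 3, 4].iter().cloned().collect();

let union: HashSet<u32> = &a | &b;        // {1, 2, 3, 4}
let intersection: HashSet<u32> = &a & &b; // {2, 3}
let difference: HashSet<u32> = &a - &b;   // {1}
let sym_diff: HashSet<u32> = &a ^ &b;     // {1, 4}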

Unfortunately, we do not yet have a set of traits corresponding to operations |=, &=, etc, but again in some cases doing the update in place may be more efficient. Right now, BitvSet is the only concrete set offering such operations:

fn union_with(&mut self, other: &BitvSet)
fn intersect_with(&mut self, other: &BitvSet)
fn difference_with(&mut self, other: &BitvSet)
fn symmetric_difference_with(&mut self, other: &BitvSet)

This RFC punts on the question of naming here: it does not propose a new set of names. Ideally, we would add operations like |= in a separate RFC, and use those conventionally for sets. If not, we will choose fallback names during the stabilization of BitvSet.

Map operations

Combined methods

The HashMap type currently provides a somewhat bewildering set of find/insert variants:

fn find_or_insert(&mut self, k: K, v: V) -> &mut V
fn find_or_insert_with<'a>(&'a mut self, k: K, f: |&K| -> V) -> &'a mut V
fn insert_or_update_with<'a>(&'a mut self, k: K, v: V, f: |&K, &mut V|) -> &'a mut V
fn find_with_or_insert_with<'a, A>(&'a mut self, k: K, a: A, found: |&K, &mut V, A|, not_found: |&K, A| -> V) -> &'a mut V

These methods are used to couple together lookup and insertion/update operations, thereby avoiding an extra lookup step. However, the current set of method variants seems overly complex.

There is another RFC already in the queue addressing this problem in a very nice way, and this RFC defers to that one.

Key and value iterators

In addition to the standard iterators, maps should provide by-reference convenience iterators over keys and values:

fn keys(&'a self) -> Keys<'a, K>
fn values(&'a self) -> Values<'a, V>

While these iterators are easy to define in terms of the main iter method, they are used often enough to warrant including convenience methods.
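Conceptually, both are thin projections of iter (a sketch, eliding the concrete iterator types):

// keys is just a projection of the main iterator:
self.iter().map(|(k, _)| k)

// ... and values likewise:
self.iter().map(|(_, v)| v)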

Specialized operations

Many concrete collections offer specialized operations beyond the ones given above. These will largely be addressed through the API stabilization process (which focuses on local API issues, as opposed to general conventions), but a few broad points are addressed below.

Relating Vec and String to slices

One goal of this RFC is to make all of the methods on (mutable) slices available on Vec and String. There are a few ways to achieve this, so concretely the proposal is for Vec<T> to implement Deref<[T]> and DerefMut<[T]>, and String to implement Deref<str>. This will automatically allow all slice methods to be invoked from vectors and strings, and will allow writing &*v rather than v.as_slice().

In this scheme, Vec and String are really “smart pointers” around the corresponding slice types. While counterintuitive at first, this perspective actually makes a fair amount of sense, especially with DST.

(Initially, it was unclear whether this strategy would play well with method resolution, but the planned resolution rules should work fine.)

String API

One of the key difficulties with the String API is that strings use utf8 encoding, and some operations are only efficient when working at the byte level (and thus taking this encoding into account).

As a general principle, we will move the API toward the following convention: index-related operations always work in terms of bytes, other operations deal with chars by default (but can have suffixed variants for working at other granularities when appropriate.)

DList

The DList type offers a number of specialized methods:

swap_remove, insert_when, insert_ordered, merge, rotate_forward and rotate_backward

Prior to stabilizing the DList API, we will attempt to simplify its API surface, possibly by using ideas from the collection views RFC.

Minimizing method variants via iterators

Partitioning via FromIterator

One place we can move toward iterators is functions like partition and partitioned on vectors and slices:

// on Vec<T>
fn partition(self, f: |&T| -> bool) -> (Vec<T>, Vec<T>);

// on [T] where T: Clone
fn partitioned(&self, f: |&T| -> bool) -> (Vec<T>, Vec<T>);

These two functions transform a vector/slice into a pair of vectors, based on a “partitioning” function that says which of the two vectors to place elements into. The partition variant works by moving elements of the vector, while partitioned clones elements.

There are a few unfortunate aspects of an API like this one:

  • It’s specific to vectors/slices, although in principle both the source and target containers could be more general.

  • The fact that two variants have to be exposed, for moved versus cloned elements, is somewhat unfortunate.

This RFC proposes the following alternative design:

pub enum Either<T, U> {
    Left(T),
    Right(U),
}

impl<A, B> FromIterator for (A, B) where A: Extend, B: Extend {
    type A = Either<A::A, B::A>;

    fn from_iter<I>(iter: I) -> (A, B) where I: IntoIterator<A = Either<A::A, B::A>> {
        let mut left: A = FromIterator::from_iter(None::<A::A>);
        let mut right: B = FromIterator::from_iter(None::<B::A>);

        for item in iter {
            match item {
                Left(t) => left.extend(Some(t)),
                Right(u) => right.extend(Some(u)),
            }
        }

        (left, right)
    }
}

trait Iterator {
    ...
    fn partition(self, |&A| -> bool) -> Partitioned<A> { ... }
}

// where Partitioned<A>: Iterator<A = Either<A, A>>

This design drastically generalizes the partitioning functionality, allowing it to be used with arbitrary collections and iterators, while removing the by-reference and by-value distinction.

Using this design, you have:

// The following two lines are equivalent:
let (u, w) = v.partition(f);
let (u, w): (Vec<T>, Vec<T>) = v.into_iter().partition(f).collect();

// The following two lines are equivalent:
let (u, w) = v.as_slice().partitioned(f);
let (u, w): (Vec<T>, Vec<T>) = v.iter_cloned().partition(f).collect();

There is some extra verbosity, mainly due to the type annotations for collect, but the API is much more flexible, since the partitioned data can now be collected into other collections (or even differing collections). In addition, partitioning is supported for any iterator.

Removing methods like from_elem, from_fn, grow, and grow_fn

Vectors and some other collections offer constructors and growth functions like the following:

fn from_elem(length: uint, value: T) -> Vec<T>
fn from_fn(length: uint, op: |uint| -> T) -> Vec<T>
fn grow(&mut self, n: uint, value: &T)
fn grow_fn(&mut self, n: uint, f: |uint| -> T)

These extra variants can easily be dropped in favor of iterators, and this RFC proposes to do so.

The iter module already contains a Repeat iterator; this RFC proposes to add a free function repeat to iter as a convenience for iter::Repeat::new.

With that in place, we have:

// Equivalent:
let v = Vec::from_elem(n, a);
let v = Vec::from_iter(repeat(a).take(n));

// Equivalent:
let v = Vec::from_fn(n, f);
let v = Vec::from_iter(range(0, n).map(f));

// Equivalent:
v.grow(n, a);
v.extend(repeat(a).take(n));

// Equivalent:
v.grow_fn(n, f);
v.extend(range(0, n).map(f));

While these replacements are slightly longer, an important aspect of ergonomics is memorability: by placing greater emphasis on iterators, programmers will quickly learn the iterator APIs and have those at their fingertips, while remembering ad hoc method variants like grow_fn is more difficult.

Long-term: removing push_all and push_all_move

The push_all and push_all_move methods on vectors are yet more API variants that could, in principle, go through iterators:

// The following are *semantically* equivalent
v.push_all(some_slice);
v.extend(some_slice.iter_cloned());

// The following are *semantically* equivalent
v.push_all_move(some_vec);
v.extend(some_vec);

However, currently the push_all and push_all_move methods can rely on the exact size of the container being pushed, in order to elide bounds checks. We do not currently have a way to “trust” methods like len on iterators to elide bounds checks. A separate RFC will introduce the notion of a “trusted” method which should support such optimization and allow us to deprecate the push_all and push_all_move variants. (This is unlikely to happen before 1.0, so the methods will probably still be included with “experimental” status, and likely with different names.)

Alternatives

Borrow and the Equiv problem

Variants of Borrow

The original version of Borrow was somewhat more subtle:

/// A trait for borrowing.
/// If `T: Borrow` then `&T` represents data borrowed from `T::Owned`.
trait Borrow for Sized? {
    /// The type being borrowed from.
    type Owned;

    /// Immutably borrow from an owned value.
    fn borrow(&Owned) -> &Self;

    /// Mutably borrow from an owned value.
    fn borrow_mut(&mut Owned) -> &mut Self;
}

trait ToOwned: Borrow {
    /// Produce a new owned value, usually by cloning.
    fn to_owned(&self) -> Owned;
}

impl<A: Sized> Borrow for A {
    type Owned = A;
    fn borrow(a: &A) -> &A {
        a
    }
    fn borrow_mut(a: &mut A) -> &mut A {
        a
    }
}

impl<A: Clone> ToOwned for A {
    fn to_owned(&self) -> A {
        self.clone()
    }
}

impl Borrow for str {
    type Owned = String;
    fn borrow(s: &String) -> &str {
        s.as_slice()
    }
    fn borrow_mut(s: &mut String) -> &mut str {
        s.as_mut_slice()
    }
}

impl ToOwned for str {
    fn to_owned(&self) -> String {
        self.to_string()
    }
}

impl<T> Borrow for [T] {
    type Owned = Vec<T>;
    fn borrow(s: &Vec<T>) -> &[T] {
        s.as_slice()
    }
    fn borrow_mut(s: &mut Vec<T>) -> &mut [T] {
        s.as_mut_slice()
    }
}

impl<T> ToOwned for [T] {
    fn to_owned(&self) -> Vec<T> {
        self.to_vec()
    }
}

impl<K,V> HashMap<K,V> where K: Borrow + Hash + Eq {
    fn find(&self, k: &K) -> &V { ... }
    fn insert(&mut self, k: K::Owned, v: V) -> Option<V> { ... }
    ...
}

pub enum Cow<'a, T> where T: ToOwned {
    Shared(&'a T),
    Owned(T::Owned)
}

This approach ties Borrow directly to the borrowed data, and uses an associated type to uniquely determine the corresponding owned data type.

For string keys, we would use HashMap<str, V>. Then, the find method would take an &str key argument, while insert would take an owned String. On the other hand, for some other type Foo a HashMap<Foo, V> would take &Foo for find and Foo for insert. (More discussion on the choice of ownership is given in the alternatives section.)

Benefits of this alternative:

  • Unlike the current _equiv or find_with methods, or the proposal in the RFC, this approach guarantees coherence about hashing or ordering. For example, HashMap above requires that K (the borrowed key type) is Hash, and will produce hashes from owned keys by first borrowing from them.

  • Unlike the proposal in this RFC, the signature of the methods for maps is very simple – essentially the same as the current find, insert, etc.

  • Like the proposal in this RFC, there is only a single Borrow trait, so it would be possible to standardize on a Map trait later on and include these APIs. The trait could be made somewhat simpler with this alternative form of Borrow, but can be provided in either case; see these comments for details.

  • The Cow data type is simpler than in the RFC’s proposal, since it does not need a type parameter for the owned data.

Drawbacks of this alternative:

  • It’s quite subtle that you want to use HashMap<str, T> rather than HashMap<String, T>. That is, if you try to use a map in the “obvious way” you will not be able to use string slices for lookup, which is part of what this RFC is trying to achieve. The same applies to Cow.

  • The design is somewhat less flexible than the one in the RFC, because (1) there is a fixed choice of owned type corresponding to each borrowed type and (2) you cannot use multiple borrow types for lookups at different types (e.g. using &String sometimes and &str other times). On the other hand, these restrictions guarantee coherence of hashing/equality/comparison.

  • This version of Borrow, mapping from borrowed to owned data, is somewhat less intuitive.

On the balance, the approach proposed in the RFC seems better, because using the map APIs in the obvious ways works by default.

The HashMapKey trait and friends

An earlier proposal for solving the _equiv problem was given in the associated items RFC:

trait HashMapKey : Clone + Hash + Eq {
    type Query: Hash = Self;
    fn compare(&self, other: &Query) -> bool { self == other }
    fn query_to_key(q: &Query) -> Self { q.clone() }
}

impl HashMapKey for String {
    type Query = str;
    fn compare(&self, other: &str) -> bool {
        self.as_slice() == other
    }
    fn query_to_key(q: &str) -> String {
        q.into_string()
    }
}

impl<K,V> HashMap<K,V> where K: HashMapKey {
    fn find(&self, q: &K::Query) -> &V { ... }
}

This solution has several drawbacks, however:

  • It requires a separate trait for different kinds of maps – one for HashMap, one for TreeMap, etc.

  • It requires that a trait be implemented on a given key without providing a blanket implementation. Since you also need different traits for different maps, it’s easy to imagine cases where an out-of-crate type you want to use as a key doesn’t implement the key trait, forcing you to newtype.

  • It doesn’t help with the MaybeOwned problem.

Daniel Micay’s hack

@strcat has a PR that makes it possible to, for example, coerce a &str to an &String value.

This provides some help for the _equiv problem, since the _equiv methods could potentially be dropped. However, there are a few downsides:

  • Using a map with string keys is still a bit more verbose:

    map.find("some static string".as_string()) // with the hack
    map.find("some static string")             // with this RFC
  • The solution is specialized to strings and vectors, and does not necessarily support user-defined unsized types or slices.

  • It doesn’t help with the MaybeOwned problem.

  • It exposes some representation interplay between slices and references to owned values, which we may not want to commit to or reveal.

For IntoIterator

Handling of for loops

The fact that for x in v moves elements from v, while for x in v.iter() yields references, may be a bit surprising. On the other hand, moving is the default almost everywhere in Rust, and with the proposed approach you get to use & and &mut to easily select other forms of iteration.

(See @huon’s comment for additional drawbacks.)

Unfortunately, it’s a bit tricky to make for use by-ref iterators instead. The problem is that an iterator is IntoIterator, but it is not Iterable (or whatever we call the by-reference trait). Why? Because IntoIterator gives you an iterator that can be used only once, while Iterable allows you to ask for iterators repeatedly.

If for demanded an Iterable, then for x in v.iter() and for x in v.iter_mut() would cease to work – we’d have to find some other approach. It might be doable, but it’s not obvious how to do it.

Input versus output type parameters

An important aspect of the IntoIterator design is that the element type is an associated type, not an input type.

This is a tradeoff:

  • Making it an associated type means that the for examples work, because the type of Self uniquely determines the element type for iteration, aiding type inference.

  • Making it an input type would forgo those benefits, but would allow some additional flexibility. For example, you could implement IntoIterator<A> for an iterator over &A when A: Clone, implicitly cloning elements as needed to make the ownership work out (and obviating the need for iter_cloned). However, we have generally kept away from this kind of implicit magic, especially when it can involve hidden costs like cloning, so the more explicit design given in this RFC seems best.

Downsides

Design tradeoffs were discussed inline.

Unresolved questions

Unresolved conventions/APIs

As mentioned above, this RFC does not resolve the question of what to call set operations that update the set in place.

It likewise does not settle the APIs that appear in only single concrete collections. These will largely be handled through the API stabilization process, unless radical changes are proposed.

Finally, additional methods provided via the IntoIterator API are left for future consideration.

Coercions

Using the Borrow trait, it might be possible to safely add a coercion for auto-slicing:

  If T: Borrow:
    coerce  &'a T::Owned      to  &'a T
    coerce  &'a mut T::Owned  to  &'a mut T

For sized types, this coercion is forced to be trivial, so the only time it would involve running user code is for unsized values.

A general story about such coercions will be left to a follow-up RFC.

Summary

This is a conventions RFC for formalizing the basic conventions around error handling in Rust libraries.

The high-level overview is:

  • For catastrophic errors, abort the process or fail the task depending on whether any recovery is possible.

  • For contract violations, fail the task. (Recover from programmer errors at a coarse grain.)

  • For obstructions to the operation, use Result (or, less often, Option). (Recover from obstructions at a fine grain.)

  • Prefer liberal function contracts, especially if reporting errors in input values may be useful to a function’s caller.

This RFC follows up on two earlier attempts by giving more leeway in when to fail the task.

Motivation

Rust provides two basic strategies for dealing with errors:

  • Task failure, which unwinds to at least the task boundary, and by default propagates to other tasks through poisoned channels and mutexes. Task failure works well for coarse-grained error handling.

  • The Result type, which allows functions to signal error conditions through the value that they return. Together with a lint and the try! macro, Result works well for fine-grained error handling.

However, while there have been some general trends in the usage of the two handling mechanisms, we need to have formal guidelines in order to ensure consistency as we stabilize library APIs. That is the purpose of this RFC.

For the most part, the RFC proposes guidelines that are already followed today, but it tries to motivate and clarify them.

Detailed design

Errors fall into one of three categories:

  • Catastrophic errors, e.g. out-of-memory.
  • Contract violations, e.g. wrong input encoding, index out of bounds.
  • Obstructions, e.g. file not found, parse error.

The basic principle of the conventions is that:

  • Catastrophic errors and programming errors (bugs) can and should only be recovered at a coarse grain, i.e. a task boundary.
  • Obstructions preventing an operation should be reported at a maximally fine grain – to the immediate invoker of the operation.

Catastrophic errors

An error is catastrophic if there is no meaningful way for the current task to continue after the error occurs.

Catastrophic errors are extremely rare, especially outside of libstd.

Canonical examples: out of memory, stack overflow.

For catastrophic errors, fail the task.

For errors like stack overflow, Rust currently aborts the process, but could in principle fail the task, which (in the best case) would allow reporting and recovery from a supervisory task.

Contract violations

An API may define a contract that goes beyond the type checking enforced by the compiler. For example, slices support an indexing operation, with the contract that the supplied index must be in bounds.

Contracts can be complex and involve more than a single function invocation. For example, the RefCell type requires that borrow_mut not be called until all existing borrows have been relinquished.

For contract violations, fail the task.

A contract violation is always a bug, and for bugs we follow the Erlang philosophy of “let it crash”: we assume that software will have bugs, and we design coarse-grained task boundaries to report, and perhaps recover, from these bugs.

Contract design

One subtle aspect of these guidelines is that the contract for a function is chosen by an API designer – and so the designer also determines what counts as a violation.

This RFC does not attempt to give hard-and-fast rules for designing contracts. However, here are some rough guidelines:

  • Prefer expressing contracts through static types whenever possible.

  • It must be possible to write code that uses the API without violating the contract.

  • Contracts are most justified when violations are inarguably bugs – but this is surprisingly rare.

  • Consider whether the API client could benefit from the contract-checking logic. The checks may be expensive. Or there may be useful programming patterns where the client does not want to check inputs before hand, but would rather attempt the operation and then find out whether the inputs were invalid.

  • When a contract violation is the only kind of error a function may encounter – i.e., there are no obstructions to its success other than “bad” inputs – using Result or Option instead is especially warranted. Clients can then use unwrap to assert that they have passed valid input, or re-use the error checking done by the API for their own purposes.

  • When in doubt, use loose contracts and instead return a Result or Option.
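As a concrete illustration of the last two points, slices offer both styles side by side (a sketch):

let v = [1, 2, 3];
let x = v[1];        // contract: index in bounds; a violation fails the task
let y = v.get(10);   // loose contract: an out-of-bounds index yields None
assert_eq!(x, 2);
assert_eq!(y, None);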

Obstructions

An operation is obstructed if it cannot be completed for some reason, even though the operation’s contract has been satisfied. Obstructed operations may have (documented!) side effects – they are not required to roll back after encountering an obstruction. However, they should leave the data structures in a “coherent” state (satisfying their invariants, continuing to guarantee safety, etc.).

Obstructions may involve external conditions (e.g., I/O), or they may involve aspects of the input that are not covered by the contract.

Canonical examples: file not found, parse error.

For obstructions, use Result

The Result<T,E> type represents either a success (yielding T) or failure (yielding E). By returning a Result, a function allows its clients to discover and react to obstructions in a fine-grained way.
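For example (a minimal sketch using the try! macro mentioned above; the function and error type are illustrative):

use std::num::ParseIntError;

// A malformed input is an obstruction, not a bug: report it via Result.
fn double(input: &str) -> Result<i32, ParseIntError> {
    let n = try!(input.parse::<i32>());
    Ok(n * 2)
}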

What about Option?

The Option type should not be used for “obstructed” operations; it should only be used when a None return value could be considered a “successful” execution of the operation.

This is of course a somewhat subjective question, but a good litmus test is: would a reasonable client ever ignore the result? The Result type provides a lint that ensures the result is actually inspected, while Option does not, and this difference of behavior can help when deciding between the two types.

Another litmus test: can the operation be understood as asking a question (possibly with side effects)? Operations like pop on a vector can be viewed as asking for the contents of the last element, with the side effect of removing it if it exists – making an Option return value a natural fit.

Do not provide both Result and fail! variants.

An API should not provide both Result-producing and failing versions of an operation. It should provide just the Result version, allowing clients to use try! or unwrap instead as needed. This is part of the general pattern of cutting down on redundant variants by instead using method chaining.

There is one exception to this rule, however. Some APIs are strongly oriented around failure, in the sense that their functions/methods are explicitly intended as assertions. If there is no other way to check in advance for the validity of invoking an operation foo, however, the API may provide a foo_catch variant that returns a Result.

The main examples in libstd that currently provide both variants are:

  • Channels, which are the primary point of failure propagation between tasks. As such, calling recv() is an assertion that the other end of the channel is still alive, which will propagate failures from the other end of the channel. On the other hand, since there is no separate way to atomically test whether the other end has hung up, channels provide a recv_opt variant that produces a Result.

    Note: the _opt suffix would be replaced by a _catch suffix if this RFC is accepted.

  • RefCell, which provides a dynamic version of the borrowing rules. Calling the borrow() method is intended as an assertion that the cell is in a borrowable state, and will fail! otherwise. On the other hand, there is no separate way to check the state of the RefCell, so the module provides a try_borrow variant that produces a Result.

    Note: the try_ prefix would be replaced by a _catch suffix if this RFC is accepted.

(Note: it is unclear whether these APIs will continue to provide both variants.)

Drawbacks

The main drawbacks of this proposal are:

  • Task failure remains somewhat of a landmine: one must be sure to document, and be aware of, all relevant function contracts in order to avoid task failure.

  • The choice of what to make part of a function’s contract remains somewhat subjective, so these guidelines cannot be used to decisively resolve disagreements about an API’s design.

The alternatives mentioned below do not suffer from these problems, but have drawbacks of their own.

Alternatives

Two alternative designs have been given in earlier RFCs, both of which take a much harder line on using fail! (or, put differently, do not allow most functions to have contracts).

As was pointed out by @SiegeLord, however, mixing what might be seen as contract violations with obstructions can make it much more difficult to write obstruction-robust code; see the linked comment for more detail.

Naming

There are numerous possible suffixes for a Result-producing variant:

  • _catch, as proposed above. As @lilyball points out, this name connotes exception handling, which could be considered misleading. However, since it effectively prevents further unwinding, catching an exception may indeed be the right analogy.

  • _result, which is straightforward but not as informative/suggestive as some of the other proposed variants.

  • try_ prefix. Also connotes exception handling, but has an unfortunate overlap with the common use of try_ for nonblocking variants (which is in play for recv in particular).

Summary

This is a conventions RFC for settling the location of unsafe APIs relative to the types they work with, as well as the use of raw submodules.

The brief summary is:

  • Unsafe APIs should be made into methods or static functions in the same cases that safe APIs would be.

  • raw submodules should be used only to define explicit low-level representations.

Motivation

Many data structures provide unsafe APIs either for avoiding checks or working directly with their (otherwise private) representation. For example, the string module provides:

  • An as_mut_vec method on String that provides a Vec<u8> view of the string. This method makes it easy to work with the byte-based representation of the string, but thereby also allows violation of the utf8 guarantee.

  • A raw submodule with a number of free functions, like from_parts, that construct String instances from a raw-pointer-based representation, a from_utf8 variant that does not actually check for utf8 validity, and so on. The unifying theme is that all of these functions avoid checking some key invariant.

The problem is that currently, there is no clear/consistent guideline about which of these APIs should live as methods/static functions associated with a type, and which should live in a raw submodule. Both forms appear throughout the standard library.

Detailed design

The proposed convention is:

  • When an unsafe function/method is clearly “about” a certain type (as a way of constructing, destructuring, or modifying values of that type), it should be a method or static function on that type. This is the same as the convention for placement of safe functions/methods. So functions like string::raw::from_parts would become static functions on String.

  • raw submodules should only be used to define low-level types/representations (and methods/functions on them). Methods for converting to/from such low-level types should be available directly on the high-level types. Examples: core::raw, sync::raw.

The benefits are:

  • Ergonomics. You can gain easy access to unsafe APIs merely by having a value of the type (or, for static functions, importing the type).

  • Consistency and simplicity. The rules for placement of unsafe APIs are the same as those for safe APIs.

The perspective here is that marking APIs unsafe is enough to deter their use in ordinary situations; they don’t need to be further distinguished by placement into a separate module.

There are also some naming conventions to go along with unsafe static functions and methods:

  • When an unsafe function/method is an unchecked variant of an otherwise safe API, it should be marked using an _unchecked suffix.

    For example, the string module should provide both from_utf8 and from_utf8_unchecked constructors, where the latter does not actually check the utf8 encoding. The string::raw::slice_bytes and string::raw::slice_unchecked functions should be merged into a single slice_unchecked method on strings that checks neither bounds nor utf8 boundaries.

  • When an unsafe function/method produces or consumes a low-level representation of a data structure, the API should use raw in its name. Specifically, from_raw_parts is the typical name used for constructing a value from e.g. a pointer-based representation.

  • Otherwise, consider using a name that suggests why the API is unsafe. In some cases, like String::as_mut_vec, other stronger conventions apply, and the unsafe qualifier on the signature (together with API documentation) is enough.
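Putting the first convention above to work, a checked constructor and its _unchecked sibling would be used like this (a sketch; the byte values are arbitrary):

let bytes = vec![0x68, 0x69];
let s = String::from_utf8(bytes.clone()).unwrap();     // validates the utf8 invariant
let t = unsafe { String::from_utf8_unchecked(bytes) }; // caller guarantees utf8
assert_eq!(s, t);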

The unsafe methods and static functions for a given type should be placed in their own impl block, at the end of the module defining the type; this will ensure that they are grouped together in rustdoc. (Thanks @lilyball for the suggestion.)

Drawbacks

One potential drawback of these conventions is that the documentation for a module will be cluttered with rarely-used unsafe APIs, whereas the raw submodule approach neatly groups these APIs. But rustdoc could easily be changed to either hide or separate out unsafe APIs by default, and in the meantime the impl block grouping should help.

More specifically, the convention of placing unsafe constructors in raw makes them very easy to find. But the usual from_ convention, together with the naming conventions suggested above, should make it fairly easy to discover such constructors even when they’re supplied directly as static functions.

More generally, these conventions give unsafe APIs more equal status with safe APIs. Whether this is a drawback depends on your philosophy about the status of unsafe programming. But on a technical level, the key point is that the APIs are marked unsafe, so users still have to opt-in to using them. Ed note: from my perspective, low-level/unsafe programming is important to support, and there is no reason to penalize its ergonomics given that it’s opt-in anyway.

Alternatives

There are a few alternatives:

  • Rather than providing unsafe APIs directly as methods/static functions, they could be grouped into a single extension trait. For example, the String type could be accompanied by a StringRaw extension trait providing APIs for working with raw string representations. This would allow a clear grouping of unsafe APIs, while still providing them as methods/static functions and allowing them to easily be imported with e.g. use std::string::StringRaw. On the other hand, it still further penalizes the raw APIs (beyond marking them unsafe), and given that rustdoc could easily provide API grouping, it’s unclear exactly what the benefit is.

  • (Suggested by @lilyball):

    Use raw for functions that construct a value of the type without checking for one or more invariants.

    The advantage is that it’s easy to find such invariant-ignoring functions. The disadvantage is that their ergonomics is worsened, since they must be separately imported or referenced through a lengthy path:

    // Compare the ergonomics:
    string::raw::slice_unchecked(some_string, start, end)
    some_string.slice_unchecked(start, end)
  • Another suggestion by @lilyball is to keep the basic structure of raw submodules, but use associated types to improve the ergonomics. Details (and discussions of pros/cons) are in this comment.

  • Use raw submodules to group together all manipulation of low-level representations. No module in std currently does this; existing modules provide some free functions in raw, and some unsafe methods, without a clear driving principle. The ergonomics of moving everything into free functions in a raw submodule are quite poor.

Unresolved questions

The core::raw module provides structs with public representations equivalent to several built-in and library types (boxes, closures, slices, etc.). It’s not clear whether the name of this module, or the location of its contents, should change as a result of this RFC. The module is a special case, because not all of the types it deals with even have corresponding modules/type declarations – so it probably suffices to leave decisions about it to the API stabilization process.

Summary

Add the following coercions:

  • From &T to &U when T: Deref<U>.
  • From &mut T to &U when T: Deref<U>.
  • From &mut T to &mut U when T: DerefMut<U>.

These coercions eliminate the need for “cross-borrowing” (things like &**v) and calls to as_slice.

Motivation

Rust currently supports a conservative set of implicit coercions that are used when matching the types of arguments against those given for a function’s parameters. For example, if T: Trait then &T is implicitly coerced to &Trait when used as a function argument:

trait MyTrait { ... }
struct MyStruct { ... }
impl MyTrait for MyStruct { ... }

fn use_trait_obj(t: &MyTrait) { ... }
fn use_struct(s: &MyStruct) {
    use_trait_obj(s)    // automatically coerced from &MyStruct to &MyTrait
}

In older incarnations of Rust, in which types like vectors were built in to the language, coercions included things like auto-borrowing (taking T to &T), auto-slicing (taking Vec<T> to &[T]) and “cross-borrowing” (taking Box<T> to &T). As built-in types migrated to the library, these coercions have disappeared: none of them apply today. That means that you have to write code like &**v to convert &Box<T> or Rc<T> to &T, and v.as_slice() to convert Vec<T> to &[T].

The ergonomic regression was coupled with a promise that we’d improve things in a more general way later on.

“Later on” has come! The premise of this RFC is that (1) we have learned some valuable lessons in the interim and (2) there is a quite conservative kind of coercion we can add that dramatically improves today’s ergonomic state of affairs.

Detailed design

Design principles

The centrality of ownership and borrowing

As Rust has evolved, a theme has emerged: ownership and borrowing are the focal point of Rust’s design, and the key enablers of much of Rust’s achievements.

As such, reasoning about ownership/borrowing is a central aspect of programming in Rust.

In the old coercion model, borrowing could be done completely implicitly, so an invocation like:

foo(bar, baz, quux)

might move bar, immutably borrow baz, and mutably borrow quux. To understand the flow of ownership, then, one has to be aware of the details of all function signatures involved – it is not possible to see ownership at a glance.

When auto-borrowing was removed, this reasoning difficulty was cited as a major motivator:

Code readability does not necessarily benefit from autoref on arguments:

let a = ~Foo;
foo(a); // reading this code looks like it moves `a`
fn foo(_: &Foo) {} // ah, nevermind, it doesn't move `a`!

let mut a = ~[ ... ];
sort(a); // not only does this not move `a`, but it mutates it!

Having to include an extra & or &mut for arguments is a slight inconvenience, but it makes it much easier to track ownership at a glance. (Note that ownership is not entirely explicit, due to self and macros; see the appendix.)

This RFC takes as a basic principle: Coercions should never implicitly borrow from owned data.

This is a key difference from the cross-borrowing RFC.

Limit implicit execution of arbitrary code

Another positive aspect of Rust’s current design is that a function call like foo(bar, baz) does not invoke arbitrary code (general implicit coercions, as found in e.g. Scala). It simply executes foo.

The tradeoff here is similar to the ownership tradeoff: allowing arbitrary implicit coercions means that a programmer must understand the types of the arguments given, the types of the parameters, and all applicable coercion code in order to understand what code will be executed. While arbitrary coercions are convenient, they come at a substantial cost in local reasoning about code.

Of course, method dispatch can implicitly execute code via Deref. But Deref is a pretty specialized tool:

  • Each type T can only deref to one other type.

    (Note: this restriction is not currently enforced, but will be enforceable once associated types land.)

  • Deref makes all the methods of the target type visible on the source type.

  • The source and target types are both references, limiting what the deref code can do.

These characteristics combined make Deref suitable for smart pointer-like types and little else. They make Deref implementations relatively rare. And as a consequence, you generally know when you’re working with a type implementing Deref.

This RFC takes as a basic principle: Coercions should narrowly limit the code they execute.

Coercions through Deref are considered narrow enough.

The proposal

The idea is to introduce a coercion corresponding to Deref/DerefMut, but only for already-borrowed values:

  • From &T to &U when T: Deref<U>.
  • From &mut T to &U when T: Deref<U>.
  • From &mut T to &mut U when T: DerefMut<U>.

These coercions are applied recursively, similarly to auto-deref for method dispatch.

Here is a simple pseudocode algorithm for determining the applicability of coercions. Let HasBasicCoercion(T, U) be a procedure for determining whether T can be coerced to U using today’s coercion rules (i.e. without deref). The general HasCoercion(T, U) procedure would work as follows:

HasCoercion(T, U):

  if HasBasicCoercion(T, U) then
      true
  else if T = &V and V: Deref<W> then
      HasCoercion(&W, U)
  else if T = &mut V and V: Deref<W> then
      HasCoercion(&W, U)
  else if T = &mut V and V: DerefMut<W> then
      HasCoercion(&mut W, U)
  else
      false

Essentially, the procedure looks for applicable “basic” coercions at increasing levels of deref from the given argument, just as method resolution searches for applicable methods at increasing levels of deref.

Unlike method resolution, however, this coercion does not automatically borrow.

Benefits of the design

Under this coercion design, we’d see the following ergonomic improvements for “cross-borrowing”:

fn use_ref(t: &T) { ... }
fn use_mut(t: &mut T) { ... }

fn use_rc(t: Rc<T>) {
    use_ref(&*t);  // what you have to write today
    use_ref(&t);   // what you'd be able to write
}

fn use_mut_box(t: &mut Box<T>) {
    use_mut(&mut **t); // what you have to write today
    use_mut(t);       // what you'd be able to write

    use_ref(&**t);    // what you have to write today
    use_ref(t);       // what you'd be able to write
}

fn use_nested(t: &Box<T>) {
    use_ref(&**t);  // what you have to write today
    use_ref(t);     // what you'd be able to write (note: recursive deref)
}

In addition, if Vec<T>: Deref<[T]> (as proposed here), slicing would be automatic:

fn use_slice(s: &[u8]) { ... }

fn use_vec(v: Vec<u8>) {
    use_slice(v.as_slice());    // what you have to write today
    use_slice(&v);              // what you'd be able to write
}

fn use_vec_ref(v: &Vec<u8>) {
    use_slice(v.as_slice());    // what you have to write today
    use_slice(v);               // what you'd be able to write
}

Characteristics of the design

The design satisfies both of the principles laid out in the Motivation:

  • It does not introduce implicit borrows of owned data, since it only applies to already-borrowed data.

  • It only applies to Deref types, which means there is only limited potential for implicitly running unknown code; together with the expectation that programmers are generally aware when they are using Deref types, this should retain the kind of local reasoning Rust programmers can do about function/method invocations today.

There is a conceptual model implicit in the design here: & is a “borrow” operator, and richer coercions are available between borrowed types. This perspective is in opposition to viewing & primarily as adding a layer of indirection – a view that, given compiler optimizations, is often inaccurate anyway.

Drawbacks

As with any mechanism that implicitly invokes code, deref coercions make it more complex to fully understand what a given piece of code is doing. The RFC argued inline that the design conserves local reasoning in practice.

As mentioned above, this coercion design also changes the mental model surrounding &, and in particular somewhat muddies the idea that it creates a pointer. This change could make Rust more difficult to learn (though note that it puts more attention on ownership), though it would make it more convenient to use in the long run.

Alternatives

The main alternative that addresses the same goals as this RFC is the cross-borrowing RFC, which proposes a more aggressive form of deref coercion: it would allow converting e.g. Box<T> to &T and Vec<T> to &[T] directly. The advantage is even greater convenience: in many cases, even & is not necessary. The disadvantage is the change to local reasoning about ownership:

let v = vec![0u8, 1, 2];
foo(v); // is v moved here?
bar(v); // is v still available?

Knowing whether v is moved in the call to foo requires knowing foo’s signature, since the coercion would implicitly borrow from the vector.

Appendix: ownership in Rust today

In today’s Rust, ownership transfer/borrowing is explicit for all function/method arguments. It is implicit only for:

  • self on method invocations. In practice, the name and context of a method invocation is almost always sufficient to infer its move/borrow semantics.

  • Macro invocations. Since macros can expand into arbitrary code, macro invocations can appear to move when they actually borrow.

Summary

Add syntactic sugar for working with the Result type which models common exception handling constructs.

The new constructs are:

  • A ? operator for explicitly propagating “exceptions”.

  • A catch { ... } expression for conveniently catching and handling “exceptions”.

The idea for the ? operator originates from RFC PR 204 by @aturon.

Motivation and overview

Rust currently uses the enum Result type for error handling. This solution is simple, well-behaved, and easy to understand, but often gnarly and inconvenient to work with. We would like to solve the latter problem while retaining the other nice properties and avoiding duplication of functionality.

We can accomplish this by adding constructs which mimic the exception-handling constructs of other languages in both appearance and behavior, while improving upon them in typically Rustic fashion. Their meaning can be specified by a straightforward source-to-source translation into existing language constructs, plus a very simple and obvious new one. (They may also, but need not necessarily, be implemented in this way.)

These constructs are strict additions to the existing language, and apart from the issue of keywords, the legality and behavior of all currently existing Rust programs is entirely unaffected.

The most important additions are a postfix ? operator for propagating “exceptions” and a catch {..} expression for catching them. By an “exception”, for now, we essentially just mean the Err variant of a Result, though the Unresolved Questions includes some discussion of extending to other types.

? operator

The postfix ? operator can be applied to Result values and is equivalent to the current try!() macro. It either returns the Ok value directly, or performs an early exit and propagates the Err value further out. (So given my_result: Result<Foo, Bar>, we have my_result?: Foo.) This allows it to be used for e.g. conveniently chaining method calls which may each “throw an exception”:

foo()?.bar()?.baz()

Naturally, in this case the types of the “exceptions thrown by” foo() and bar() must unify. Like the current try!() macro, the ? operator will also perform an implicit “upcast” on the exception type.

When used outside of a catch block, the ? operator propagates the exception to the caller of the current function, just like the current try! macro does. (If the return type of the function isn’t a Result, then this is a type error.) When used inside a catch block, it propagates the exception up to the innermost catch block, as one would expect.

Requiring an explicit ? operator to propagate exceptions strikes a very pleasing balance between completely automatic exception propagation, which most languages have, and completely manual propagation, which we’d have apart from the try! macro. It means that function calls remain simply function calls which return a result to their caller, with no magic going on behind the scenes; and this also increases flexibility, because one gets to choose between propagation with ? or consuming the returned Result directly.

The ? operator itself is suggestive, syntactically lightweight enough to not be bothersome, and lets the reader determine at a glance where an exception may or may not be thrown. It also means that if the signature of a function changes with respect to exceptions, it will lead to type errors rather than silent behavior changes, which is a good thing. Finally, because exceptions are tracked in the type system, and there is no silent propagation of exceptions, and all points where an exception may be thrown are readily apparent visually, this also means that we do not have to worry very much about “exception safety”.

Exception type upcasting

In a language with checked exceptions and subtyping, it is clear that if a function is declared as throwing a particular type, its body should also be able to throw any of its subtypes. Similarly, in a language with structural sum types (a.k.a. anonymous enums, polymorphic variants), one should be able to throw a type with fewer cases in a function declaring that it may throw a superset of those cases. This is essentially what is achieved by the common Rust practice of declaring a custom error enum with From impls for each of the upstream error types which may be propagated:

enum MyError {
    IoError(io::Error),
    JsonError(json::Error),
    OtherError(...)
}

impl From<io::Error> for MyError { ... }
impl From<json::Error> for MyError { ... }

Here io::Error and json::Error can be thought of as subtypes of MyError, with a clear and direct embedding into the supertype.

The ? operator should therefore perform such an implicit conversion, in the nature of a subtype-to-supertype coercion. The present RFC uses the std::convert::Into trait for this purpose (which has a blanket impl forwarding from From). The precise requirements for a conversion to be “like” a subtyping coercion are an open question; see the “Unresolved questions” section.

catch expressions

This RFC also introduces an expression form catch {..}, which serves to “scope” the ? operator. The catch operator executes its associated block. If no exception is thrown, then the result is Ok(v) where v is the value of the block. Otherwise, if an exception is thrown, then the result is Err(e). Note that unlike other languages, a catch block always catches all errors, and they must all be coercible to a single type, as a Result only has a single Err type. This dramatically simplifies thinking about the behavior of exception-handling code.

Note that catch { foo()? } is essentially equivalent to foo(). catch can be useful if you want to coalesce multiple potential exceptions – catch { foo()?.bar()?.baz()? } – into a single Result, which you wish to then e.g. pass on as-is to another function, rather than analyze yourself. (The last example could also be expressed using a series of and_then calls.)
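A sketch of the two constructs working together (parse_host and parse_port are hypothetical functions returning Results whose error types convert into ConfigError):

let addr: Result<Addr, ConfigError> = catch {
    Addr {
        host: parse_host(input)?,  // an Err here exits the catch block...
        port: parse_port(input)?,  // ...as does one here
    }
};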

Detailed design

The meaning of the constructs will be specified by a source-to-source translation. We make use of an “early exit from any block” feature which doesn’t currently exist in the language, generalizes the current break and return constructs, and is independently useful.

Early exit from any block

The capability can be exposed either by generalizing break to take an optional value argument and break out of any block (not just loops), or by generalizing return to take an optional lifetime argument and return from any block, not just the outermost block of the function. This feature is only used in this RFC as an explanatory device, and implementing the RFC does not require exposing it, so I am going to arbitrarily choose the break syntax for the following and won’t discuss the question further.

So we are extending break with an optional value argument: break 'a EXPR. This is an expression of type ! which causes an early return from the enclosing block specified by 'a, which then evaluates to the value EXPR (of course, the type of EXPR must unify with the type of the last expression in that block). This works for any block, not only loops.

[Note: This was since added in RFC 2046]

A completely artificial example:

'a: {
    let my_thing = if have_thing() {
        get_thing()
    } else {
        break 'a None
    };
    println!("found thing: {}", my_thing);
    Some(my_thing)
}

Here if we don’t have a thing, we escape from the block early with None.

If no value is specified, it defaults to (): in other words, the current behavior. We can also imagine there is a magical lifetime 'fn which refers to the lifetime of the whole function: in this case, break 'fn is equivalent to return.

Again, this RFC does not propose generalizing break in this way at this time: it is only used as a way to explain the meaning of the constructs it does propose.

Definition of constructs

Finally we have the definition of the new constructs in terms of a source-to-source translation.

In each case except the first, I will provide two definitions: a single-step “shallow” desugaring which is defined in terms of the previously defined new constructs, and a “deep” one which is “fully expanded”.

Of course, these could be defined in many equivalent ways: the below definitions are merely one way.

  • Construct:

     EXPR?
    

    Shallow:

    match EXPR {
        Ok(a)  => a,
        Err(e) => break 'here Err(e.into())
    }

    Where 'here refers to the innermost enclosing catch block, or to 'fn if there is none.

    The ? operator has the same precedence as the . operator.

  • Construct:

    catch {
        foo()?.bar()
    }

    Shallow:

    'here: {
        Ok(foo()?.bar())
    }

    Deep:

    'here: {
        Ok(match foo() {
            Ok(a) => a,
            Err(e) => break 'here Err(e.into())
        }.bar())
    }

The fully expanded translations get quite gnarly, but that is why it’s good that you don’t have to write them!

In general, the types of the defined constructs should be the same as the types of their definitions.

(As noted earlier, while the behavior of the constructs can be specified using a source-to-source translation in this manner, they need not necessarily be implemented this way.)

As a result of this RFC, both Into and Result would have to become lang items.

Laws

Without any attempt at completeness, here are some things which should be true:

  • catch { foo() } = Ok(foo())
  • catch { Err(e)? } = Err(e.into())
  • catch { try_foo()? } = try_foo().map_err(Into::into)

(In the above, foo() is a function returning any type, and try_foo() is a function returning a Result.)

Feature gates

The two major features here, the ? syntax and catch expressions, will be tracked by independent feature gates. Each of the features has a distinct motivation, and we should evaluate them independently.

Unresolved questions

These questions should be satisfactorily resolved before stabilizing the relevant features, at the latest.

Optional match sugar

Originally, the RFC included the ability to match the errors caught by a catch by writing catch { .. } match { .. }, which could be translated as follows:

  • Construct:

    catch {
        foo()?.bar()
    } match {
        A(a) => baz(a),
        B(b) => quux(b)
    }

    Shallow:

    match (catch {
        foo()?.bar()
    }) {
        Ok(a) => a,
        Err(e) => match e {
            A(a) => baz(a),
            B(b) => quux(b)
        }
    }

    Deep:

    match ('here: {
        Ok(match foo() {
            Ok(a) => a,
            Err(e) => break 'here Err(e.into())
        }.bar())
    }) {
        Ok(a) => a,
        Err(e) => match e {
            A(a) => baz(a),
            B(b) => quux(b)
        }
    }

However, it was removed for the following reasons:

  • The catch (originally: try) keyword adds the real expressive “step up” here, the match (originally: catch) was just sugar for unwrap_or.
  • It would be easy to add further sugar in the future, once we see how catch is used (or not used) in practice.
  • There was some concern about potential user confusion about two aspects:
    • catch { } yields a Result<T,E> but catch { } match { } yields just T;
    • catch { } match { } handles all kinds of errors, unlike try/catch in other languages which let you pick and choose.

It may be worth adding such a sugar in the future, or perhaps a variant that binds irrefutably and does not immediately lead into a match block.

Choice of keywords

The RFC to this point uses the keyword catch, but there are a number of other possibilities, each with different advantages and drawbacks:

  • try { ... } catch { ... }

  • try { ... } match { ... }

  • try { ... } handle { ... }

  • catch { ... } match { ... }

  • catch { ... } handle { ... }

  • catch ... (without braces or a second clause)

Among the considerations:

  • Simplicity. Brevity.

  • Following precedent from existing, popular languages, and familiarity with respect to their analogous constructs.

  • Fidelity to the constructs’ actual behavior. For instance, the first clause always catches the “exception”; the second only branches on it.

  • Consistency with the existing try!() macro. If the first clause is called try, then try { } and try!() would have essentially inverse meanings.

  • Language-level backwards compatibility when adding new keywords. I’m not sure how this could or should be handled.

Semantics for “upcasting”

What should the contract for a From/Into impl be? Are these even the right traits to use for this feature?

Two obvious, minimal requirements are:

  • It should be pure: no side effects, and no observation of side effects. (The result should depend only on the argument.)

  • It should be total: no panics or other divergence, except perhaps in the case of resource exhaustion (OOM, stack overflow).

The other requirements for an implicit conversion to be well-behaved in the context of this feature should be thought through with care.

Some further thoughts and possibilities on this matter, only as brainstorming:

  • It should be “like a coercion from subtype to supertype”, as described earlier. The precise meaning of this is not obvious.

  • A common condition on subtyping coercions is coherence: if you can compound-coerce to go from A to Z indirectly along multiple different paths, they should all have the same end result.

  • It should be lossless, or in other words, injective: it should map each observably-different element of the input type to observably-different elements of the output type. (Observably-different means that it is possible to write a program which behaves differently depending on which one it gets, modulo things that “shouldn’t count” like observing execution time or resource usage.)

  • It should be unambiguous, or preserve the meaning of the input: defining impl From<u8> for u32 as x as u32 feels right; defining it as (x as u32) * 12345 feels wrong, even though the latter is perfectly pure, total, and injective. What this means precisely in the general case is unclear.

  • The types converted between should be the “same kind of thing”: for instance, the existing impl From<u32> for Ipv4Addr feels suspect on this count. (This perhaps ties into the subtyping angle: Ipv4Addr is clearly not a supertype of u32.)
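
As a brainstorming-level illustration of the injectivity and “same kind of thing” criteria, with hypothetical wrapper types Wide and Narrow:

struct Wide(u32);
struct Narrow(u8);

// Pure, total, injective, meaning-preserving: feels like a subtype coercion.
impl From<u8> for Wide {
    fn from(x: u8) -> Wide { Wide(x as u32) }
}

// Pure and total, but not injective: many u32 values collapse onto one u8,
// so by the criteria above this impl would be suspect.
impl From<u32> for Narrow {
    fn from(x: u32) -> Narrow { Narrow(x as u8) }
}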

Forwards-compatibility

If we later want to generalize this feature to other types such as Option, as described below, will we be able to do so while maintaining backwards-compatibility?

Monadic do notation

There have been many comparisons drawn between this syntax and monadic do notation. Before stabilizing, we should determine whether we plan to make changes to better align this feature with a possible do notation (for example, by removing the implicit Ok at the end of a catch block). Note that such a notation would have to extend the standard monadic bind to accommodate rich control flow like break, continue, and return.

Drawbacks

  • Increases the syntactic surface area of the language.

  • No expressivity is added, only convenience. Some object to “there’s more than one way to do it” on principle.

  • If at some future point we were to add higher-kinded types and syntactic sugar for monads, a la Haskell’s do or Scala’s for, their functionality may overlap and result in redundancy. However, a number of challenges would have to be overcome for a generic monadic sugar to be able to fully supplant these features: the integration of higher-kinded types into Rust’s type system in the first place, the shape of a Monad trait in a language with lifetimes and move semantics, interaction between the monadic control flow and Rust’s native control flow (the “ambient monad”), automatic upcasting of exception types via Into (the exception (Either, Result) monad normally does not do this, and it’s not clear whether it can), and potentially others.

Alternatives

  • Don’t.

  • Only add the ? operator, but not catch expressions.

  • Instead of a built-in catch construct, attempt to define one using macros. However, this is likely to be awkward because, at least, macros may only have their contents as a single block, rather than two. Furthermore, macros are excellent as a “safety net” for features which we forget to add to the language itself, or which only have specialized use cases; but generally useful control flow constructs still work better as language features.

  • Add first-class checked exceptions, which are propagated automatically (without a ? operator).

    This has the drawbacks of being a more invasive change and duplicating functionality: each function must choose whether to use checked exceptions via throws, or to return a Result. While the two are isomorphic and converting between them is easy, with this proposal, the issue does not even arise, as exception handling is defined in terms of Result. Furthermore, automatic exception propagation raises the specter of “exception safety”: how serious an issue this would actually be in practice, I don’t know - there’s reason to believe that it would be much less of one than in C++.

  • Wait (and hope) for HKTs and generic monad sugar.

Future possibilities

Expose a generalized form of break or return as described

This RFC doesn’t propose doing so at this time, but as it would be an independently useful feature, it could be added as well.

throw and throws

It is possible to carry the exception handling analogy further and also add throw and throws constructs.

throw is very simple: throw EXPR is essentially the same thing as Err(EXPR)?; in other words it throws the exception EXPR to the innermost catch block, or to the function’s caller if there is none.

A throws clause on a function:

fn foo(arg: Foo) -> Bar throws Baz { ... }

would mean that instead of writing return Ok(foo) and return Err(bar) in the body of the function, one would write return foo and throw bar, and these are implicitly turned into Ok or Err for the caller. This removes syntactic overhead from both “normal” and “throwing” code paths and (apart from ? to propagate exceptions) matches what code might look like in a language with native exceptions.
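
To make the intended equivalence concrete, here is a hedged sketch with a hypothetical function:

// With the proposed sugar, one might write:
//
//     fn parse_digit(c: char) -> u32 throws String {
//         match c.to_digit(10) {
//             Some(d) => return d,
//             None    => throw format!("not a digit: {}", c),
//         }
//     }
//
// which would behave exactly like this ordinary function:
fn parse_digit(c: char) -> Result<u32, String> {
    match c.to_digit(10) {
        Some(d) => return Ok(d),
        None    => return Err(format!("not a digit: {}", c)),
    }
}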

Generalize over Result, Option, and other result-carrying types

Option<T> is completely equivalent to Result<T, ()> modulo names, and many common APIs use the Option type, so it would be useful to extend all of the above syntax to Option, and other (potentially user-defined) equivalent-to-Result types, as well.

This can be done by specifying a trait for types which can be used to “carry” either a normal result or an exception. There are several different, equivalent ways to formulate it, which differ in the set of methods provided, but the meaning in any case is essentially just that you can choose some types Normal and Exception such that Self is isomorphic to Result<Normal, Exception>.

Here is one way:

#[lang(result_carrier)]
trait ResultCarrier {
    type Normal;
    type Exception;
    fn embed_normal(from: Normal) -> Self;
    fn embed_exception(from: Exception) -> Self;
    fn translate<Other: ResultCarrier<Normal=Normal, Exception=Exception>>(from: Self) -> Other;
}

For greater clarity on how these methods work, see the section on impls below. (For a simpler formulation of the trait using Result directly, see further below.)

The translate method says that it should be possible to translate to any other ResultCarrier type which has the same Normal and Exception types. This may not appear to be very useful, but in fact, this is what can be used to inspect the result, by translating it to a concrete, known type such as Result<Normal, Exception> and then, for example, pattern matching on it.
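
As a usage sketch, in the same pre-1.0-flavored notation as the trait above: to inspect an arbitrary carrier, translate it to the known type Result and pattern match on that:

fn describe<C: ResultCarrier<Normal=int, Exception=String>>(c: C) {
    // Translate to the concrete, known type Result<int, String>...
    let r: Result<int, String> = ResultCarrier::translate(c);
    // ...which can then be inspected directly.
    match r {
        Ok(n)  => println!("normal result: {}", n),
        Err(s) => println!("exception: {}", s),
    }
}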

Laws:

  1. For all x, translate(embed_normal(x): A): B = embed_normal(x): B.
  2. For all x, translate(embed_exception(x): A): B = embed_exception(x): B.
  3. For all carrier, translate(translate(carrier: A): B): A = carrier: A.

Here I’ve used explicit type ascription syntax to make it clear that e.g. the types of embed_ on the left and right hand sides are different.

The first two laws say that embedding a result x into one result-carrying type and then translating it to a second result-carrying type should be the same as embedding it into the second type directly.

The third law says that translating to a different result-carrying type and then translating back should be a no-op.

impls of the trait

impl<T, E> ResultCarrier for Result<T, E> {
    type Normal = T;
    type Exception = E;
    fn embed_normal(a: T) -> Result<T, E> { Ok(a) }
    fn embed_exception(e: E) -> Result<T, E> { Err(e) }
    fn translate<Other: ResultCarrier<Normal=T, Exception=E>>(result: Result<T, E>) -> Other {
        match result {
            Ok(a)  => Other::embed_normal(a),
            Err(e) => Other::embed_exception(e)
        }
    }
}

As we can see, translate can be implemented by deconstructing ourself and then re-embedding the contained value into the other result-carrying type.

impl<T> ResultCarrier for Option<T> {
    type Normal = T;
    type Exception = ();
    fn embed_normal(a: T) -> Option<T> { Some(a) }
    fn embed_exception(e: ()) -> Option<T> { None }
    fn translate<Other: ResultCarrier<Normal=T, Exception=()>>(option: Option<T>) -> Other {
        match option {
            Some(a) => Other::embed_normal(a),
            None    => Other::embed_exception(())
        }
    }
}

Potentially also:

impl ResultCarrier for bool {
    type Normal = ();
    type Exception = ();
    fn embed_normal(a: ()) -> bool { true }
    fn embed_exception(e: ()) -> bool { false }
    fn translate<Other: ResultCarrier<Normal=(), Exception=()>>(b: bool) -> Other {
        match b {
            true  => Other::embed_normal(()),
            false => Other::embed_exception(())
        }
    }
}

The laws should be sufficient to rule out any “icky” impls. For example, an impl for Vec where an exception is represented as the empty vector, and a normal result as a single-element vector: here the third law fails, because if the Vec has more than one element to begin with, then it’s not possible to translate to a different result-carrying type and then back without losing information.

The bool impl may be surprising, or not useful, but it is well-behaved: bool is, after all, isomorphic to Result<(), ()>.

Other miscellaneous notes about ResultCarrier

  • Our current lint for unused results could be replaced by one which warns for any unused result of a type which implements ResultCarrier.

  • If there is ever ambiguity due to the result-carrying type being underdetermined (experience should reveal whether this is a problem in practice), we could resolve it by defaulting to Result.

  • Translating between different result-carrying types with the same Normal and Exception types should be, but currently may not be, a machine-level no-op most of the time.

    We could/should make it so that:

    • repr(Option<T>) = repr(Result<T, ()>)
    • repr(bool) = repr(Option<()>) = repr(Result<(), ()>)

    If these hold, then translate between these types could in theory be compiled down to just a transmute. (Whether LLVM is smart enough to do this, I don’t know.)

  • The translate() function smells to me like a natural transformation between functors, but I’m not category theorist enough for it to be obvious.

Alternative formulations of the ResultCarrier trait

All of these have the form:

trait ResultCarrier {
    type Normal;
    type Exception;
    ...methods...
}

and differ only in the methods, which will be given.

Explicit isomorphism with Result

fn from_result(Result<Normal, Exception>) -> Self;
fn to_result(Self) -> Result<Normal, Exception>;

This is, of course, the simplest possible formulation.

The drawbacks are that it, in some sense, privileges Result over other potentially equivalent types, and that it may be less efficient for those types: for any non-Result type, every operation requires two method calls (one into Result, and one out), whereas with the ResultCarrier trait in the main text, they only require one.

Laws:

  • For all x, from_result(to_result(x)) = x.
  • For all x, to_result(from_result(x)) = x.

Laws for the remaining formulations below are left as an exercise for the reader.

Avoid privileging Result, most naive version

fn embed_normal(Normal) -> Self;
fn embed_exception(Exception) -> Self;
fn is_normal(&Self) -> bool;
fn is_exception(&Self) -> bool;
fn assert_normal(Self) -> Normal;
fn assert_exception(Self) -> Exception;

Of course this is horrible.

Destructuring with HOFs (a.k.a. Church/Scott-encoding)

fn embed_normal(Normal) -> Self;
fn embed_exception(Exception) -> Self;
fn match_carrier<T>(Self, FnOnce(Normal) -> T, FnOnce(Exception) -> T) -> T;

This is probably the right approach for Haskell, but not for Rust.

With this formulation, because each closure takes ownership of the variables it captures, the two closures may not even close over the same variables!

Destructuring with HOFs, round 2

trait BiOnceFn {
    type ArgA;
    type ArgB;
    type Ret;
    fn callA(Self, ArgA) -> Ret;
    fn callB(Self, ArgB) -> Ret;
}

trait ResultCarrier {
    type Normal;
    type Exception;
    fn normal(Normal) -> Self;
    fn exception(Exception) -> Self;
    fn match_carrier<T>(Self, BiOnceFn<ArgA=Normal, ArgB=Exception, Ret=T>) -> T;
}

Here we solve the environment-sharing problem from above: instead of two objects with a single method each, we use a single object with two methods! I believe this is the most flexible and general formulation (which is however a strange thing to believe when they are all equivalent to each other). Of course, it’s even more awkward syntactically.

Summary

Divide global declarations into two categories:

  • constants declare constant values. These represent a value, not a memory address. This is the most common thing one would reach for and would replace static as we know it today in almost all cases.
  • statics declare global variables. These represent a memory address. They would be rarely used: the primary use cases are global locks, global atomic counters, and interfacing with legacy C libraries.

Motivation

We have been wrestling with the best way to represent globals for some time now. There are a number of interrelated issues:

  • Significant addresses and inlining: For optimization purposes, it is useful to be able to inline constant values directly into the program. It is even more useful if those constant values do not have known addresses, because that means the compiler is free to replicate them as it wishes. Moreover, if a constant is inlined into downstream crates, then they must be recompiled whenever that constant changes.
  • Read-only memory: Whenever possible, we’d like to place large constants into read-only memory. But this means that the data must be truly immutable, or else a segfault will result.
  • Global atomic counters and the like: We’d like to make it possible for people to create global locks or atomic counters that can be used without resorting to unsafe code.
  • Interfacing with C code: Some C libraries require the use of global, mutable data. Other times it’s just convenient and threading is not a concern.
  • Initializer constants: There must be a way to have initializer constants for things like locks and atomic counters, so that people can write static MY_COUNTER: AtomicUint = INIT_ZERO or some such. It should not be possible to modify these initializer constants.

The current design is that we have only one keyword, static, which declares a global variable. By default, global variables do not have significant addresses and can be inlined into the program. You can make a global variable have a significant address by marking it #[inline(never)]. Furthermore, you can declare a mutable global using static mut: all accesses to static mut variables are considered unsafe. Because we wish to allow static values to be placed in read-only memory, they are forbidden from having a type that includes interior mutable data (that is, an appearance of UnsafeCell type).

Some concrete problems with this design are:

  • There is no way to have a safe global counter or lock. Those must be placed in static mut variables, which means that all access to them requires unsafe code. To resolve this, there is an alternative proposal, according to which access to static mut is considered safe if the type of the static mut meets the Sync trait.
  • The significance (no pun intended) of the #[inline(never)] annotation is not intuitive.
  • There is no way to have a constant with a generic type.

Other less practical and more aesthetic concerns are:

  • Although static and let look and feel analogous, the two behave quite differently. Generally speaking, static declarations do not declare variables but rather values, which can be inlined and which do not have fixed addresses. You cannot have interior mutability in a static variable, but you can in a let. So that static variables can appear in patterns, it is illegal to shadow a static variable – but let variables cannot appear in patterns. Etc.
  • There are other constructs in the language, such as nullary enum variants and nullary structs, which look like global data but in fact act quite differently. They are actual values which do not have addresses. They are categorized as rvalues and so forth.

Detailed design

Constants

Reintroduce a const declaration which declares a constant:

const name: type = value;

Constants may be declared in any scope. They cannot be shadowed. Constants are considered rvalues. Therefore, taking the address of a constant actually creates a spot on the local stack – they by definition have no significant addresses. Constants are intended to behave exactly like nullary enum variants.
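
A small sketch of the rvalue behavior described above, under this design:

const MAX_RETRIES: uint = 3;

fn f() {
    // Each borrow of a constant materializes a fresh temporary on the local
    // stack, so these two pointers need not compare equal.
    let p = &MAX_RETRIES;
    let q = &MAX_RETRIES;
    // ... use p and q ...
}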

Possible extension: Generic constants

As a possible extension, it is perfectly reasonable for constants to have generic parameters. For example, the following constant is legal:

struct WrappedOption<T> { value: Option<T> }
const NONE<T>: WrappedOption<T> = WrappedOption { value: None };

Note that this makes no sense for a static variable, which represents a memory location and hence must have a concrete type.

Possible extension: constant functions

It is possible to imagine constant functions as well. This could help to address the problem of encapsulating initialization. To avoid the need to specify what kinds of code can execute in a constant function, we can limit them syntactically to a single constant expression that can be expanded at compilation time (no recursion).

struct LockedData<T:Send> { lock: Lock, value: T }

const LOCKED<T:Send>(t: T) -> LockedData<T> {
    LockedData { lock: INIT_LOCK, value: t }
}

This would allow us to make the value field on UnsafeCell private, among other things.

Static variables

Repurpose the static declaration to declare static variables only. Static variables always have single addresses. static variables can optionally be declared as mut. The lifetime of a static variable is 'static. It is not legal to move from a static. Accesses to a static variable generate actual reads and writes: the value is not inlined (but see “Unresolved Questions” below).

Non-mut statics must have a type that meets the Sync bound. All access to the static is considered safe (that is, reading the variable and taking its address). If the type of the static does not contain an UnsafeCell in its interior, the compiler may place it in read-only memory, but otherwise it must be placed in mutable memory.
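
Under these rules, the motivating global-counter use case becomes safe code. A sketch using the atomics API of that era (exact item names approximate):

use std::sync::atomic::{AtomicUint, SeqCst, INIT_ATOMIC_UINT};

static NEXT_ID: AtomicUint = INIT_ATOMIC_UINT;

fn next_id() -> uint {
    // Safe: AtomicUint is Sync, so borrowing the non-mut static is allowed;
    // mutation goes through &self methods (interior mutability), so the
    // static must live in mutable (not read-only) memory.
    NEXT_ID.fetch_add(1, SeqCst)
}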

mut statics may have any type. All access is considered unsafe. They may not be placed in read-only memory.

Globals referencing Globals

const => const

It is possible to create a const or a static which references another const or another static by its address. For example:

struct SomeStruct { x: uint }
const FOO: SomeStruct = SomeStruct { x: 1 };
const BAR: &'static SomeStruct = &FOO;

Constants are generally inlined into the stack frame from which they are referenced, but in a static context there is no stack frame. Instead, the compiler will reinterpret this as if it were written as:

struct SomeStruct { x: uint }
const FOO: SomeStruct = SomeStruct { x: 1 };
const BAR: &'static SomeStruct = {
    static TMP: SomeStruct = FOO;
    &TMP
};

Here a static is introduced to be able to give the const a pointer which does indeed have the 'static lifetime. Due to this rewriting, the compiler will disallow SomeStruct from containing an UnsafeCell (interior mutability). In general, a constant A cannot reference the address of another constant B if B contains an UnsafeCell in its interior.

const => static

It is illegal for a constant to refer to another static. A constant represents a constant value while a static represents a memory location, and this sort of reference is difficult to reconcile in light of their definitions.

static => const

If a static references the address of a const, then a similar rewriting happens, but there is no interior mutability restriction (only a Sync restriction).

static => static

It is illegal for a static to reference another static by value; all such references must be borrows. Additionally, not all kinds of borrows are allowed: only explicitly taking the address of another static is permitted. For example, interior borrows of fields and borrows of array elements are both disallowed.

If a by-value reference were allowed, then this sort of reference would require that the static being referenced fall into one of two categories:

  1. It’s an initializer pattern. This is the purpose of const, however.
  2. The values are kept in sync. This is currently technically infeasible.

Instead of falling into one of these two categories, the compiler will instead disallow any references to statics by value (from other statics).

Patterns

Today, a static is allowed to be used in pattern matching. With the introduction of const, however, a static will be forbidden from appearing in a pattern match, and instead only a const can appear.
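
A sketch of the resulting pattern rule:

const ZERO: uint = 0;
static BASE: uint = 1;

fn classify(x: uint) -> &'static str {
    match x {
        ZERO => "zero",    // OK: a const may appear in a pattern
        // BASE => "base", // error under this RFC: a static may not
        _ => "other",
    }
}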

Drawbacks

This RFC introduces two keywords for global data. Global data is kind of an edge feature so this feels like overkill. (On the other hand, the only keyword that most Rust programmers should need to know is const – I imagine static variables will be used quite rarely.)

Alternatives

The other design under consideration is to keep the current split but make access to static mut be considered safe if the type of the static mut is Sync. For the details of this discussion, please see RFC 177.

One serious concern is with regard to timing. Adding more things to the Rust 1.0 schedule is inadvisable. Therefore, it would be possible to take a hybrid approach: keep the current static rules, or perhaps the variation where access to static mut is safe, for the time being, and create const declarations after Rust 1.0 is released.

Unresolved questions

  • Should the compiler be allowed to inline the values of static variables which are deeply immutable (and thus force recompilation)?

  • Should we permit static variables whose type is not Sync, but simply make access to them unsafe?

  • Should we permit static variables whose type is not Sync, but whose initializer value does not actually contain interior mutability? For example, a static of Option<UnsafeCell<uint>> with the initializer of None is in theory safe.

  • How hard are the envisioned extensions to implement? If easy, they would be nice to have. If hard, they can wait.

Summary

Restrict which traits can be used to make trait objects.

Currently, we allow any traits to be used for trait objects, but restrict the methods which can be called on such objects. Here, we propose instead restricting which traits can be used to make objects. Despite being less flexible, this will make for better error messages, less surprising software evolution, and (hopefully) better design. Part of the in-progress DST work makes the motivation for the proposed change stronger.

Motivation

Part of the planned, in progress DST work is to allow trait objects where a trait is expected. Example:

fn foo<Sized? T: SomeTrait>(y: &T) { ... }

fn bar(x: &SomeTrait) {
    foo(x)
}

Previous to DST the call to foo was not expected to work because SomeTrait was not a type, so it could not instantiate T. With DST this is possible, and it makes intuitive sense for this to work (an alternative is to require impl SomeTrait for SomeTrait { ... }, but that seems weird and confusing and rather like boilerplate. Note that the precise mechanism here is out of scope for this RFC).

This is only sound if the trait is object-safe. We say a method m on trait T is object-safe if it is legal (in current Rust) to call x.m(...) where x has type &T, i.e., x is a trait object. If all methods in T are object-safe, then we say T is object-safe.

If we ignore this restriction we could allow code such as the following:

trait SomeTrait {
    fn foo(&self, other: &Self) { ... } // assume self and other have the same concrete type
}

fn bar<Sized? T: SomeTrait>(x: &T, y: &T) {
    x.foo(y); // x and y may have different concrete types, pre-DST we could
        // assume that x and y had the same concrete types.
}

fn baz(x: &SomeTrait, y: &SomeTrait) {
    bar(x, y) // x and y may have different concrete types
}

This RFC proposes enforcing object-safety when trait objects are created, rather than where methods on a trait object are called or where we attempt to match traits. This makes both method call and using trait objects with generic code simpler. The downside is that it makes Rust less flexible, since not all traits can be used to create trait objects.

Software evolution is improved with this proposal: imagine adding a non-object-safe method to a previously object-safe trait. With this proposal, you would then get errors wherever a trait-object is created. The error would explain why the trait object could not be created and point out exactly which method was to blame and why. Without this proposal, the only errors you would get would be where a trait object is used with a generic call and would be something like “type error: SomeTrait does not implement SomeTrait” - no indication that the non-object-safe method was to blame, only a failure in trait matching.

Another advantage of this proposal is that it implies that all method-calls can always be rewritten into an equivalent UFCS call. This simplifies the “core language” and makes method dispatch notation – which involves some non-trivial inference – into a kind of “sugar” for the more explicit UFCS notation.

Detailed design

To be precise about object-safety, an object-safe method must meet one of the following conditions:

  • require Self : Sized; or,
  • meet all of the following conditions:
    • must not have any type parameters; and,
    • must have a receiver that has type Self or which dereferences to the Self type;
      • for now, this means self, &self, &mut self, or self: Box<Self>, but eventually this should be extended to custom types like self: Rc<Self> and so forth.
    • must not use Self (in the future, where we allow arbitrary types for the receiver, Self may only be used for the type of the receiver and only where we allow Sized? types).

A trait is object-safe if all of the following conditions hold:

  • all of its methods are object-safe; and,
  • the trait does not require that Self : Sized (see also RFC 546).
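
For illustration, a hedged sketch of a trait whose methods violate these conditions in different ways (each method could be salvaged with a where Self : Sized clause, as discussed below):

trait NotObjectSafe {
    fn generic<T>(&self, t: T);              // has a type parameter
    fn compare(&self, other: &Self) -> bool; // uses Self outside the receiver
    fn make() -> Self;                       // no receiver, and returns Self
}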

When an expression with pointer-to-concrete type is coerced to a trait object, the compiler will check that the trait is object-safe (in addition to the usual check that the concrete type implements the trait). It is an error for the trait to be non-object-safe.

Note that a trait can be object-safe even if some of its methods use features that are not supported with an object receiver. This is true when code that attempted to use those features would only work if the Self type is Sized. This is why all methods that require Self:Sized are exempt from the typical rules. This is also why by-value self methods are permitted, since currently one cannot pass an unsized type by value (though we consider that a useful future extension).

Drawbacks

This is a breaking change and forbids some safe code which is legal today. This can be addressed in two ways: splitting traits, or adding where Self:Sized clauses to methods that cannot be used with objects.

Example problem

Here is an example of a trait that is not object-safe:

trait SomeTrait {
    fn foo(&self) -> int { ... }
    
    // Object-safe methods may not return `Self`:
    fn new() -> Self;
}

Splitting a trait

One option is to split a trait into object-safe and non-object-safe parts. We hope that this will lead to better design. We are not sure how much code this will affect; it would be good to have data about this.

trait SomeTrait {
    fn foo(&self) -> int { ... }
}

trait SomeTraitCtor : SomeTrait {
    fn new() -> Self;
}

Adding a where-clause

Sometimes adding a second trait feels like overkill. In that case, it is often an option to simply add a where Self:Sized clause to the methods of the trait that would otherwise violate the object safety rule.

trait SomeTrait {
    fn foo(&self) -> int { ... }
    
    fn new() -> Self
        where Self : Sized; // this condition is new
}

The reason that this makes sense is that if one were writing a generic function with a type parameter T that may range over the trait object, that type parameter would have to be declared ?Sized, and hence would not have access to the new method:

fn baz<T:?Sized+SomeTrait>(t: &T) {
    let v: T = SomeTrait::new(); // illegal because `T : Sized` is not known to hold
}

However, if one writes a function with a sized type parameter, which could never be a trait object, then the new function becomes available.

fn baz<T:SomeTrait>(t: &T) {
    let v: T = SomeTrait::new(); // OK
}

Alternatives

We could continue to check that methods, rather than traits, are object-safe. When checking the bounds of a type parameter for a function call where the function is called with a trait object, we would check that all methods are object-safe as part of the check that the actual type parameter satisfies the formal bounds. We could probably give a different error message if the bounds are met, but the trait is not object-safe.

We might in the future use finer-grained reasoning to permit more non-object-safe methods from appearing in the trait. For example, we might permit fn foo() -> Self because it (implicitly) requires that Self be sized. Similarly, we might permit other tests beyond just sized-ness. Any such extension would be backwards compatible.

Unresolved questions

N/A

Edits

  • 2014-02-09. Edited by Nicholas Matsakis to (1) include the requirement that object-safe traits do not require Self:Sized and (2) specify that methods may include where Self:Sized to overcome object safety restrictions.

  • Start Date: 2014-09-19
  • RFC PR: rust-lang/rfcs#256
  • Rust Issue: https://github.com/rust-lang/rfcs/pull/256

Summary

Remove the reference-counting based Gc<T> type from the standard library and its associated support infrastructure from rustc.

Doing so lays a cleaner foundation upon which to prototype a proper tracing GC, and will avoid people getting incorrect impressions of Rust based on the current reference-counting implementation.

Motivation

Ancient History

Long ago, the Rust language had integrated support for automatically managed memory with arbitrary graph structure (notably, multiple references to the same object), via the type constructors @T and @mut T for any T. The intention was that Rust would provide a task-local garbage collector as part of the standard runtime for Rust programs.

As a short-term convenience, @T and @mut T were implemented via reference-counting: each instance of @T/@mut T had a reference count added to it (as well as other meta-data that were again for implementation convenience). To support this, the rustc compiler would emit, for any instruction copying or overwriting an instance of @T/@mut T, code to update the reference count(s) accordingly.

(At the same time, @T was still considered an instance of Copy by the compiler. Maintaining the reference counts of @T means that you cannot create copies of a given type implementing Copy by memcpy’ing blindly; one must distinguish so-called “POD” data that is Copy and contains no @T from “non-POD” Copy data that can contain @T and thus must be sure to update reference counts when creating a copy.)

Over time, @T was replaced with the library type Gc<T> (and @mut T was rewritten as Gc<RefCell<T>>), but the intention was that Rust would still have integrated support for garbage collection. To continue supporting the reference-count updating semantics, the Gc<T> type has a lang item, "gc". In effect, all of the compiler support for maintaining the reference-counts from the prior @T was still in place; the move to a library type Gc<T> was just a shift in perspective from the end-user’s point of view (and that of the parser).

Recent history: Removing uses of Gc from the compiler

Largely due to the tireless efforts of eddyb, one of the primary clients of Gc<T>, namely the rustc compiler itself, has little to no remaining uses of Gc<T>.

A new hope

This means that we have an opportunity now, to remove the Gc<T> type from libstd, and its associated built-in reference-counting support from rustc itself.

I want to distinguish removal of the particular reference counting Gc<T> from our compiler and standard library (which is what is being proposed here), from removing the goal of supporting a garbage collected Gc<T> in the future. I (and I think the majority of the Rust core team) still believe that there are use cases that would be well handled by a proper tracing garbage collector.

The expected outcomes of removing the reference-counting Gc<T> are as follows:

  • A cleaner compiler code base,

  • A cleaner standard library, where Copy data can indeed be copied blindly (assuming the source and target types are in agreement, which is required for a tracing GC),

  • It would become impossible for users to use Gc<T> and then get incorrect impressions about how Rust’s GC would behave in the future. In particular, if we leave the reference-counting Gc<T> in place, then users may end up depending on implementation artifacts that we would be pressured to continue supporting in the future. (Note that Gc<T> is already marked “experimental”, so this particular motivation is not very strong.)

Detailed design

Remove the std::gc module. This, I believe, is the extent of the end-user visible changes proposed by this RFC, at least for users who are using libstd (as opposed to implementing their own).

Then remove the rustc support for Gc<T>. As part of this, we can either leave in or remove the "gc" and "managed_heap" entries in the lang items table (in case they could be of use for a future GC implementation). I propose leaving them, but it does not matter terribly to me. The important thing is that once std::gc is gone, we can remove the support code associated with those two lang items.

Drawbacks

Taking out the reference-counting Gc<T> now may lead people to think that Rust will never have a Gc<T>.

  • In particular, having Gc<T> in place now means that it is easier to argue for putting in a tracing collector (since it would be a net win over the status quo, assuming it works).

    (This sub-bullet is a bit of a straw man argument, as I suspect any community resistance to adding a tracing GC will probably be unaffected by the presence or absence of the reference-counting Gc<T>.)

  • As another related note, it may confuse people to take out a Gc<T> type now only to add another implementation with the same name later. (Of course, is that more or less confusing than just replacing the underlying implementation in such a severe manner?)

Users may be using Gc<T> today, and they would have to switch to some other option (such as Rc<T>, though note that the two are not 100% equivalent; see the “Gc versus Rc” appendix).

Alternatives

Keep the Gc<T> implementation that we have today, and wait until we have a tracing GC implemented and ready to be deployed before removing the reference-counting infrastructure that had been put in to support @T. (Which may never happen, since adding a tracing GC is only a goal, not a certainty, and thus we may be stuck supporting the reference-counting Gc<T> until we eventually do decide to remove Gc<T> in the future. So this RFC is just suggesting we be proactive and pull that band-aid off now.)

Unresolved questions

None yet.

Appendices

Gc versus Rc

There are performance differences between the current ref-counting Gc<T> and the library type Rc<T>, but such differences are beneath the level of abstraction of interest to this RFC. The main user observable difference between the ref-counting Gc<T> and the library type Rc<T> is that cyclic structure allocated via Gc<T> will be torn down when the task itself terminates successfully or via unwind.

The following program illustrates this difference. If you have a program that is using Gc and is relying on this tear-down behavior at task death, then switching to Rc will not suffice.

use std::cell::RefCell;
use std::gc::{GC,Gc};
use std::io::timer;
use std::rc::Rc;
use std::time::Duration;

struct AnnounceDrop { name: String }

#[allow(non_snake_case)]
fn AnnounceDrop<S:Str>(s:S) -> AnnounceDrop {
    AnnounceDrop { name: s.as_slice().to_string() }
}

impl Drop for AnnounceDrop{ 
    fn drop(&mut self) {
       println!("dropping {}", self.name);
    }
}

struct RcCyclic<D> { _on_drop: D, recur: Option<Rc<RefCell<RcCyclic<D>>>> }
struct GcCyclic<D> { _on_drop: D, recur: Option<Gc<RefCell<GcCyclic<D>>>> }

type RRRcell<D> = Rc<RefCell<RcCyclic<D>>>;
type GRRcell<D> = Gc<RefCell<GcCyclic<D>>>;

fn make_rc_and_gc<S:Str>(name: S) -> (RRRcell<AnnounceDrop>, GRRcell<AnnounceDrop>) {
    let name = name.as_slice().to_string();
    let rc_cyclic = Rc::new(RefCell::new(RcCyclic {
        _on_drop: AnnounceDrop(name.clone().append("-rc")),
        recur: None,
    }));

    let gc_cyclic = box (GC) RefCell::new(GcCyclic {
        _on_drop: AnnounceDrop(name.append("-gc")),
        recur: None,
    });

    (rc_cyclic, gc_cyclic)
}

fn make_proc(name: &str, sleep_time: i64, and_then: proc():Send) -> proc():Send {
    let name = name.to_string();
    proc() {
        let (rc_cyclic, gc_cyclic) = make_rc_and_gc(name);

        rc_cyclic.borrow_mut().recur = Some(rc_cyclic.clone());
        gc_cyclic.borrow_mut().recur = Some(gc_cyclic);

        timer::sleep(Duration::seconds(sleep_time));

        and_then();
    }
}

fn main() {
    let (_rc_noncyclic, _gc_noncyclic) = make_rc_and_gc("main-noncyclic");

    spawn(make_proc("success-cyclic", 2, proc () {}));

    spawn(make_proc("failure-cyclic", 1, proc () { fail!("Oop"); }));

    println!("Hello, world!")
}

The above program produces output as follows:

% rustc gc-vs-rc-sample.rs && ./gc-vs-rc-sample
Hello, world!
dropping main-noncyclic-gc
dropping main-noncyclic-rc
task '<unnamed>' failed at 'Oop', gc-vs-rc-sample.rs:60
dropping failure-cyclic-gc
dropping success-cyclic-gc

This illustrates that both Gc<T> and Rc<T> will be reclaimed when used to represent non-cyclic data (the cases labelled main-noncyclic-gc and main-noncyclic-rc). But when you actually complete the cyclic structure, then in the tasks that run to completion (either successfully or unwinding from a failure), we still manage to drop the Gc<T> cyclic structures, as illustrated by the printouts from the cases labelled failure-cyclic-gc and success-cyclic-gc.

Summary

Remove drop flags from values implementing Drop, and remove automatic memory zeroing associated with dropping values.

Keep dynamic drop semantics, by having each function maintain a (potentially empty) set of auto-injected boolean flags for the drop obligations for the function that need to be tracked dynamically (which we will call “dynamic drop obligations”).

Motivation

Currently, implementing Drop on a struct (or enum) injects a hidden bit, known as the “drop-flag”, into the struct (and likewise, each of the enum variants). The drop-flag, in tandem with Rust’s implicit zeroing of dropped values, tracks whether a value has already been moved to another owner or been dropped. (See the “How dynamic drop semantics works” appendix for more details if you are unfamiliar with this part of Rust’s current implementation.)

However, the above implementation is sub-optimal; problems include:

  • Most important: implicit memory zeroing is a hidden cost that today all Rust programs pay, in both execution time and code size. With the removal of the drop flag, we can remove implicit memory zeroing (or at least revisit its utility – there may be other motivations for implicit memory zeroing, e.g. to try to keep secret data from being exposed to unsafe code).

  • Hidden bits are bad: Users coming from a C/C++ background expect struct Foo { x: u32, y: u32 } to occupy 8 bytes, but if Foo implements Drop, the hidden drop flag will cause it to double in size (16 bytes). See the “Program illustrating semantic impact of hidden drop flag” appendix for a concrete illustration. Note that when Foo implements Drop, each instance of Foo carries a drop-flag, even in contexts like a Vec<Foo> where a program cannot actually move individual values out of the collection. Thus, the amount of extra memory being used by drop-flags is not bounded by program stack growth; the memory wastage is strewn throughout the heap.

An earlier RFC (the withdrawn RFC PR #210) suggested resolving this problem by switching from a dynamic drop semantics to a “static drop semantics”, which was defined in that RFC as one that performs drop of certain values earlier to ensure that the set of drop-obligations does not differ at any control-flow merge point, i.e. to ensure that the set of values to drop is statically known at compile-time.

However, discussion on the RFC PR #210 comment thread pointed out that its policy for inserting early drops into the code was non-intuitive (in other words, that the drop policy should either be more aggressive, a la RFC PR #239, or should stay with the dynamic drop status quo). Also, the mitigating mechanisms proposed by that RFC (NoisyDrop/QuietDrop) were deemed unacceptable.

So, static drop semantics are a non-starter. Luckily, the above strategy is not the only way to implement dynamic drop semantics. Rather than requiring that the set of drop-obligations be the same at every control-flow merge point, we can do an intra-procedural static analysis to identify the set of drop-obligations that differ at any merge point, and then inject a set of stack-local boolean-valued drop-flags that dynamically track them. That strategy is what this RFC describes.

The expected outcomes are as follows:

  • We remove the drop-flags from all structs/enums that implement Drop. (There are still the injected stack-local drop flags, but those should be cheaper to inject and maintain.)

  • Since invoking drop code is now handled by the stack-local drop flags and we have no more drop-flags on the values themselves, we can (and will) remove memory zeroing.

  • Libraries currently relying on drop doing memory zeroing (i.e. libraries that check whether content is zero to decide whether their fn drop has been invoked) will need to be revised, since we will not have implicit memory zeroing anymore.

  • In the common case, most libraries using Drop will not need to change at all from today, apart from the caveat in the previous bullet.

Detailed design

Drop obligations

No struct or enum has an implicit drop-flag. When a local variable is initialized, that establishes a set of “drop obligations”: a set of structural paths (e.g. a local a, or a path to a field b.f.y) that need to be dropped (or moved away to a new owner).

The drop obligations for a local variable x of struct-type T are computed from analyzing the structure of T. If T itself implements Drop, then x is a drop obligation. If T does not implement Drop, then the set of drop obligations is the union of the drop obligations of the fields of T.
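
A sketch of this computation, using the D (implements Drop) and S (does not) types that appear in the example further below:

struct D;
impl Drop for D { fn drop(&mut self) { /* ... */ } }

struct S; // no Drop impl, and no fields that carry drop obligations

struct Pair<X, Y> { x: X, y: Y }

// For an initialized `let p: Pair<D, S>`, the drop obligations are just
// {p.x}: Pair itself does not implement Drop, D does, and S contributes
// nothing.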

When a path is moved to a new location, or consumed by a function call, or when control flow reaches the end of its owner’s lexical scope, the path is removed from the set of drop obligations.

At control-flow merge points, e.g. nodes that have predecessor nodes P_1, P_2, …, P_k with drop obligation sets S_1, S_2, … S_k, we

  • First identify the set of drop obligations that differ between the predecessor nodes, i.e. the set:

    (S_1 | S_2 | ... | S_k) \ (S_1 & S_2 & ... & S_k)

    where | denotes set-union, & denotes set-intersection, \ denotes set-difference. These are the dynamic drop obligations induced by this merge point. Note that if S_1 = S_2 = ... = S_k, the above set is empty.

  • The set of drop obligations for the merge point itself is the union of the drop-obligations from all predecessor points in the control flow, i.e. (S_1 | S_2 | ... | S_k) in the above notation.

    (One could also just use the intersection here; it actually makes no difference to the static analysis, since all of the elements of the difference

    (S_1 | S_2 | ... | S_k) \ (S_1 & S_2 & ... & S_k)

    have already been added to the set of dynamic drop obligations. But the overall code transformation is clearer if one keeps the dynamic drop obligations in the set of drop obligations.)

Stack-local drop flags

For every dynamic drop obligation induced by a merge point, the compiler is responsible for ensuring that its drop code is run at some point. If necessary, it will inject and maintain a boolean flag analogous to

enum NeedsDropFlag { NeedsLocalDrop, DoNotDrop }

Some compiler analysis may be able to identify dynamic drop obligations that do not actually need to be tracked. Therefore, we do not specify the precise set of boolean flags that are injected.

Example of code with dynamic drop obligations

The function f2 below was copied from the static drop RFC PR #210; it has differing sets of drop obligations at a merge point, necessitating a potential injection of a NeedsDropFlag.

fn f2() {

    // At the outset, the set of drop obligations is
    // just the set of moved input parameters (empty
    // in this case).

    //                                      DROP OBLIGATIONS
    //                                  ------------------------
    //                                  {  }
    let pDD : Pair<D,D> = ...;
    pDD.x = ...;
    //                                  {pDD.x}
    pDD.y = ...;
    //                                  {pDD.x, pDD.y}
    let pDS : Pair<D,S> = ...;
    //                                  {pDD.x, pDD.y, pDS.x}
    let some_d : Option<D>;
    //                                  {pDD.x, pDD.y, pDS.x}
    if test() {
        //                                  {pDD.x, pDD.y, pDS.x}
        {
            let temp = xform(pDD.y);
            //                              {pDD.x,        pDS.x, temp}
            some_d = Some(temp);
            //                              {pDD.x,        pDS.x, temp, some_d}
        } // END OF SCOPE for `temp`
        //                                  {pDD.x,        pDS.x, some_d}

        // MERGE POINT PREDECESSOR 1

    } else {
        {
            //                              {pDD.x, pDD.y, pDS.x}
            let z = D;
            //                              {pDD.x, pDD.y, pDS.x, z}

            // This drops `pDD.y` before
            // moving `pDD.x` there.
            pDD.y = pDD.x;

            //                              {       pDD.y, pDS.x, z}
            some_d = None;
            //                              {       pDD.y, pDS.x, z, some_d}
        } // END OF SCOPE for `z`
        //                                  {       pDD.y, pDS.x, some_d}

        // MERGE POINT PREDECESSOR 2

    }

    // MERGE POINT: set of drop obligations do not
    // match on all incoming control-flow paths.
    //
    // Predecessor 1 has drop obligations
    // {pDD.x,        pDS.x, some_d}
    // and Predecessor 2 has drop obligations
    // {       pDD.y, pDS.x, some_d}.
    //
    // Therefore, this merge point implies that
    // {pDD.x, pDD.y} are dynamic drop obligations,
    // while {pDS.x, some_d} are potentially still
    // resolvable statically (and thus may not need
    // associated boolean flags).

    // The resulting drop obligations are the following:

    //                                  {pDD.x, pDD.y, pDS.x, some_d}.

    // (... some code that does not change drop obligations ...)

    //                                  {pDD.x, pDD.y, pDS.x, some_d}.

    // END OF SCOPE for `pDD`, `pDS`, `some_d`
}

After the static analysis has identified all of the dynamic drop obligations, code is injected to maintain the stack-local drop flags and to do any necessary drops at the appropriate points. Below is the updated fn f2 with an approximation of the injected code.

Note: we say “approximation” because one needs to ensure that the drop flags are updated in a manner that is compatible with a potential task fail!/panic!: stack unwinding must be informed which state needs to be dropped. I.e., you need to initialize _drop_pDD_dot_x before you start to evaluate a fallible expression that initializes pDD.y.

fn f2_rewritten() {

    // At the outset, the set of drop obligations is
    // just the set of moved input parameters (empty
    // in this case).

    //                                      DROP OBLIGATIONS
    //                                  ------------------------
    //                                  {  }
    let _drop_pDD_dot_x : NeedsDropFlag;
    let _drop_pDD_dot_y : NeedsDropFlag;

    _drop_pDD_dot_x = DoNotDrop;
    _drop_pDD_dot_y = DoNotDrop;

    let pDD : Pair<D,D>;
    pDD.x = ...;
    _drop_pDD_dot_x = NeedsLocalDrop;
    pDD.y = ...;
    _drop_pDD_dot_y = NeedsLocalDrop;

    //                                  {pDD.x, pDD.y}
    let pDS : Pair<D,S> = ...;
    //                                  {pDD.x, pDD.y, pDS.x}
    let some_d : Option<D>;
    //                                  {pDD.x, pDD.y, pDS.x}
    if test() {
        //                                  {pDD.x, pDD.y, pDS.x}
        {
            _drop_pDD_dot_y = DoNotDrop;
            let temp = xform(pDD.y);
            //                              {pDD.x,        pDS.x, temp}
            some_d = Some(temp);
            //                              {pDD.x,        pDS.x, temp, some_d}
        } // END OF SCOPE for `temp`
        //                                  {pDD.x,        pDS.x, some_d}

        // MERGE POINT PREDECESSOR 1

    } else {
        {
            //                              {pDD.x, pDD.y, pDS.x}
            let z = D;
            //                              {pDD.x, pDD.y, pDS.x, z}

            // This drops `pDD.y` before
            // moving `pDD.x` there.
            _drop_pDD_dot_x = DoNotDrop;
            pDD.y = pDD.x;

            //                              {       pDD.y, pDS.x, z}
            some_d = None;
            //                              {       pDD.y, pDS.x, z, some_d}
        } // END OF SCOPE for `z`
        //                                  {       pDD.y, pDS.x, some_d}

        // MERGE POINT PREDECESSOR 2

    }

    // MERGE POINT: set of drop obligations do not
    // match on all incoming control-flow paths.
    //
    // Predecessor 1 has drop obligations
    // {pDD.x,        pDS.x, some_d}
    // and Predecessor 2 has drop obligations
    // {       pDD.y, pDS.x, some_d}.
    //
    // Therefore, this merge point implies that
    // {pDD.x, pDD.y} are dynamic drop obligations,
    // while {pDS.x, some_d} are potentially still
    // resolvable statically (and thus may not need
    // associated boolean flags).

    // The resulting drop obligations are the following:

    //                                  {pDD.x, pDD.y, pDS.x, some_d}.

    // (... some code that does not change drop obligations ...)

    //                                  {pDD.x, pDD.y, pDS.x, some_d}.

    // END OF SCOPE for `pDD`, `pDS`, `some_d`

    // rustc-inserted code (not legal Rust, since `pDD.x` and `pDD.y`
    // are inaccessible).

    if _drop_pDD_dot_x { mem::drop(pDD.x); }
    if _drop_pDD_dot_y { mem::drop(pDD.y); }
}

Note that in a snippet like

       _drop_pDD_dot_y = DoNotDrop;
       let temp = xform(pDD.y);

this is okay, in part because evaluating the identifier xform is infallible. If instead it were something like:

       _drop_pDD_dot_y = DoNotDrop;
       let temp = lookup_closure()(pDD.y);

then that would not be correct, because we need to set _drop_pDD_dot_y to DoNotDrop after the lookup_closure() invocation.

It would probably be more intellectually honest to write the transformation like:

       let temp = lookup_closure()({ _drop_pDD_dot_y = DoNotDrop; pDD.y });

Control-flow sensitivity

Note that the dynamic drop obligations are based on a control-flow analysis, not just the lexical nesting structure of the code.

In particular: if control flow splits at a point like an if-expression, but the two arms never meet, then they can have completely different sets of drop obligations.

This is important, since in coding patterns like loops, one often sees different sets of drop obligations prior to a break compared to a point where the loop repeats, such as a continue or the end of a loop block.

    // At the outset, the set of drop obligations is
    // just the set of moved input parameters (empty
    // in this case).

    //                                      DROP OBLIGATIONS
    //                                  ------------------------
    //                                  {  }
    let mut pDD : Pair<D,D> = mk_dd();
    let mut maybe_set : D;

    //                                  {         pDD.x, pDD.y }
    'a: loop {
        // MERGE POINT

        //                                  {     pDD.x, pDD.y }
        if test() {
            //                                  { pDD.x, pDD.y }
            consume(pDD.x);
            //                                  {        pDD.y }
            break 'a;
        }
        // *not* merge point (only one path, the else branch, flows here)

        //                                  {     pDD.x, pDD.y }

        // never falls through; must merge with 'a loop.
    }

    // RESUME POINT: break 'a above flows here

    //                                  {                pDD.y }

    // This is the point immediately preceding `'b: loop`; it is (1.) below.

    'b: loop {
        // MERGE POINT
        //
        // There are *three* incoming paths: (1.) the statement
        // preceding `'b: loop`, (2.) the `continue 'b;` below, and
        // (3.) the end of the loop's block below.  The drop
        // obligation for `maybe_set` originates from (3.).

        //                                  {            pDD.y, maybe_set }

        consume(pDD.y);

        //                                  {                 , maybe_set }

        if test() {
            //                                  {             , maybe_set }
            pDD.x = mk_d();
            //                                  { pDD.x       , maybe_set }
            break 'b;
        }

        // *not* merge point (only one path flows here)

        //                                  {                 , maybe_set }

        if test() {
            //                                  {             , maybe_set }
            pDD.y = mk_d();

            // This is (2.) referenced above.   {        pDD.y, maybe_set }
            continue 'b;
        }
        // *not* merge point (only one path flows here)

        //                                  {                 , maybe_set }

        pDD.y = mk_d();
        //                                  {            pDD.y, maybe_set }

        maybe_set = mk_d();
        g(&maybe_set);

        // This is (3.) referenced above.   {            pDD.y, maybe_set }
    }

    // RESUME POINT: break 'b above flows here

    //                                  {         pDD.x       , maybe_set }

    // when we hit the end of the scope of `maybe_set`;
    // check its stack-local flag.

Likewise, a return statement represents another control flow jump, to the end of the function.
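For instance, in a sketch like the following (hypothetical code, using the same mk_d/consume/test helpers and comment notation as the examples above), the early return is a separate function exit with its own drop obligations:

fn f3() {
    //                                  DROP OBLIGATIONS
    //                                  ------------------------
    //                                  {  }
    let d = mk_d();
    //                                  { d }
    if test() {
        consume(d);
        //                              {  }
        return; // nothing needs to be dropped on this exit
    }
    // *not* merge point (the `return` above does not flow here)

    //                                  { d }

    // END OF SCOPE for `d`: `d` is dropped on this exit only.
}

Since the two exits never meet, no dynamic drop obligation arises here: each exit's set is resolved statically.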

Remove implicit memory zeroing

With the above in place, the remainder is relatively trivial. The compiler can be revised to no longer inject a drop flag into structs and enums that implement Drop, and likewise memory zeroing can be removed.

Beyond that, the libraries will obviously need to be audited for dependence on implicit memory zeroing.

Drawbacks

The only reasons not to do this are:

  1. Some hypothetical reason to continue doing implicit memory zeroing, or

  2. We want to abandon dynamic drop semantics.

At this point Felix thinks the Rust community has made a strong argument in favor of keeping dynamic drop semantics.

Alternatives

  • Static drop semantics RFC PR #210 has been referenced frequently in this document.

  • Eager drops RFC PR #239 is the more aggressive semantics that would drop values immediately after their final use. This would probably invalidate a number of RAII style coding patterns.

Optional Extensions

A lint identifying dynamic drop obligations

Add a lint (set by default to allow) that reports potential dynamic drop obligations, so that end-user code can opt-in to having them reported. The expected benefits of this are:

  1. developers may have intended for a value to be moved elsewhere on all paths within a function, and,

  2. developers may want to know about how many boolean dynamic drop flags are potentially being injected into their code.

Unresolved questions

How to handle moves out of array[index_expr]

Niko pointed out to me today that my prototype was not addressing moves out of array[index_expr] properly. I was assuming that we would just make such an expression illegal (or that it should already be illegal).

But it is not already illegal, and the above assumption that we would make it illegal should have been made explicit. That, or we should address the problem in some other way.

To make this concrete, here is some code that runs today:

#[deriving(Show)]
struct AnnounceDrop { name: &'static str }

impl Drop for AnnounceDrop {
    fn drop(&mut self) { println!("dropping {}", self.name); }
}

fn foo<A>(a: [A, ..3], i: uint) -> A {
    a[i]
}

fn main() {
    let a = [AnnounceDrop { name: "fst" },
             AnnounceDrop { name: "snd" },
             AnnounceDrop { name: "thd" }];
    let r = foo(a, 1);
    println!("foo returned {}", r);
}

This prints:

dropping fst
dropping thd
foo returned AnnounceDrop { name: snd }
dropping snd

because it first moves the entire array into foo, and then foo returns the second element, but still needs to drop the rest of the array.

Embedded drop flags and zeroing support this seamlessly, of course. But the whole point of this RFC is to get rid of the embedded per-value drop-flags.

If we want to continue supporting moving out of a[i] (and we probably do; I have been converted on this point), then the drop flag needs to handle this case. Our current thinking is that we can support it by using a single uint flag (as opposed to the booleans used elsewhere) for each such array that has been moved out of. The uint flag represents “drop all elements from the array except for the one listed in the flag.” (If the array is only moved out of on one branch and not another, then we would either use an Option<uint>, or still use uint and just represent the unmoved case via some value that is not a valid index, such as the length of the array.)
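To illustrate, an approximation of the injected code for the foo shown above might look like the following (a sketch, not legal Rust for the same reason as f2_rewritten above; _drop_a_except is a hypothetical compiler-generated flag):

fn foo_rewritten<A>(a: [A, ..3], i: uint) -> A {
    // The array length `3` is not a valid index, so it represents
    // the unmoved case: "drop all of the elements".
    let mut _drop_a_except: uint = 3;

    _drop_a_except = i;
    let result = a[i];

    // rustc-inserted cleanup at the end of the scope of `a`:
    // drop every element except the one recorded in the flag.
    let mut j = 0u;
    while j < 3 {
        if j != _drop_a_except { mem::drop(a[j]); }
        j += 1;
    }

    result
}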

Should we keep #[unsafe_no_drop_flag] ?

Currently there is an unsafe_no_drop_flag attribute that is used to indicate that no drop flag should be associated with a struct/enum, and instead the user-written drop code will be run multiple times (and thus must internally guard itself from its own side-effects; e.g. do not attempt to free the backing buffer for a Vec more than once, by tracking within the Vec itself if the buffer was previously freed).

The “obvious” thing to do is to remove unsafe_no_drop_flag, since the per-value drop flag is going away. However, we could keep the attribute, and just repurpose its meaning to instead mean the following: Never inject a dynamic stack-local drop-flag for this value. Just run the drop code multiple times, just like today.
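For illustration, a type opting in to that repurposed meaning would have to guard its own cleanup, roughly like this (a sketch; RawBuf and free_buffer are hypothetical):

#[unsafe_no_drop_flag]
struct RawBuf { ptr: *mut u8, len: uint }

impl Drop for RawBuf {
    fn drop(&mut self) {
        // `drop` may run more than once, so the side effect must be
        // guarded: free the buffer only once, then record that fact.
        if !self.ptr.is_null() {
            unsafe { free_buffer(self.ptr, self.len); } // hypothetical
            self.ptr = std::ptr::mut_null();
        }
    }
}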

In any case, since the semantics of this attribute are unstable, we will feature-gate it (with feature name unsafe_no_drop_flag).

Appendices

How dynamic drop semantics works

(This section is just presenting background information on the semantics of drop and the drop-flag as it works in Rust today; it does not contain any discussion of the changes being proposed by this RFC.)

A struct or enum implementing Drop will have its drop-flag automatically set to a non-zero value when it is constructed. When attempting to drop the struct or enum (i.e. when control reaches the end of the lexical scope of its owner), the injected glue code will only execute its associated fn drop if its drop-flag is non-zero.

In addition, the compiler injects code to ensure that when a value is moved to a new location in memory or dropped, then the original memory is entirely zeroed.

A struct/enum definition implementing Drop can be tagged with the attribute #[unsafe_no_drop_flag]. When so tagged, the struct/enum will not have a hidden drop flag embedded within it. In this case, the injected glue code will execute the associated fn drop unconditionally, even though the struct/enum value may have been moved to a new location in memory or dropped (in either case, the memory representing the value will have been zeroed).

The above has a number of implications:

  • A program can manually cause the drop code associated with a value to be skipped by first zeroing out its memory.

  • A Drop implementation for a struct tagged with unsafe_no_drop_flag must assume that it will be called more than once. (However, every call to drop after the first will be given zeroed memory.)

Program illustrating semantic impact of hidden drop flag

#![feature(macro_rules)]

use std::fmt;
use std::mem;

#[deriving(Clone,Show)]
struct S {  name: &'static str }

#[deriving(Clone,Show)]
struct Df { name: &'static str }

#[deriving(Clone,Show)]
struct Pair<X,Y>{ x: X, y: Y }

static mut current_indent: uint = 0;

fn indent() -> String {
    String::from_char(unsafe { current_indent }, ' ')
}

impl Drop for Df {
    fn drop(&mut self) {
        println!("{}dropping Df {}", indent(), self.name)
    }
}

macro_rules! struct_Dn {
    ($Dn:ident) => {

        #[unsafe_no_drop_flag]
        #[deriving(Clone,Show)]
        struct $Dn { name: &'static str }

        impl Drop for $Dn {
            fn drop(&mut self) {
                if unsafe { (0,0) == mem::transmute::<_,(uint,uint)>(self.name) } {
                    println!("{}dropping already-zeroed {}",
                             indent(), stringify!($Dn));
                } else {
                    println!("{}dropping {} {}",
                             indent(), stringify!($Dn), self.name)
                }
            }
        }
    }
}

struct_Dn!(DnA)
struct_Dn!(DnB)
struct_Dn!(DnC)

fn take_and_pass<T:fmt::Show>(t: T) {
    println!("{}t-n-p took and will pass: {}", indent(), &t);
    unsafe { current_indent += 4; }
    take_and_drop(t);
    unsafe { current_indent -= 4; }
}

fn take_and_drop<T:fmt::Show>(t: T) {
    println!("{}t-n-d took and will drop: {}", indent(), &t);
}

fn xform(mut input: Df) -> Df {
    input.name = "transformed";
    input
}

fn foo(b: || -> bool) {
    let mut f1 = Df  { name: "f1" };
    let mut n2 = DnC { name: "n2" };
    let f3 = Df  { name: "f3" };
    let f4 = Df  { name: "f4" };
    let f5 = Df  { name: "f5" };
    let f6 = Df  { name: "f6" };
    let n7 = DnA { name: "n7" };
    let _fx = xform(f6);           // `f6` consumed by `xform`
    let _n9 = DnB { name: "n9" };
    let p = Pair { x: f4, y: f5 }; // `f4` and `f5` moved into `p`
    let _f10 = Df { name: "f10" };

    println!("foo scope start: {}", (&f3, &n7));
    unsafe { current_indent += 4; }
    if b() {
        take_and_pass(p.x); // `p.x` consumed by `take_and_pass`, which drops it
    }
    if b() {
        take_and_pass(n7); // `n7` consumed by `take_and_pass`, which drops it
    }
    
    // totally unsafe: manually zero the struct, including its drop flag.
    unsafe fn manually_zero<S>(s: &mut S) {
        let len = mem::size_of::<S>();
        let p : *mut u8 = mem::transmute(s);
        for i in range(0, len) {
            *p.offset(i as int) = 0;
        }
    }
    unsafe {
        manually_zero(&mut f1);
        manually_zero(&mut n2);
    }
    println!("foo scope end");
    unsafe { current_indent -= 4; }

    // here, we drop each local variable, in reverse order of declaration.
    // So we should see the following drop sequence:
    // drop(f10), printing "Df f10"
    // drop(p)
    //   ==> drop(p.y), printing "Df f5"
    //   ==> attempt to drop (and skip) already-dropped p.x, no-op
    // drop(_n9), printing "DnB n9"
    // drop(_fx), printing "Df transformed"
    // attempt to drop already-dropped n7, printing "already-zeroed DnA"
    // no drop of `f6` since it was consumed by `xform`
    // no drop of `f5` since it was moved into `p`
    // no drop of `f4` since it was moved into `p`
    // drop(f3), printing "Df f3"
    // attempt to drop manually-zeroed `n2`, printing "already-zeroed DnC"
    // attempt to drop manually-zeroed `f1`, no-op.
}

fn main() {
    foo(|| true);
}

Summary

In string literal contexts, restrict \xXX escape sequences to just the range of ASCII characters, \x00-\x7F. \xXX inputs in string literals with higher numbers are rejected (with an error message suggesting that one use an \uNNNN escape).

Motivation

In a string literal context, the current \xXX character escape sequence is potentially confusing when given inputs greater than 0x7F, because it does not encode that byte literally, but instead encodes whatever the escape sequence \u00XX would produce.

Thus, for inputs greater than 0x7F, \xXX will encode multiple bytes into the generated string literal, as illustrated in the Rust example appendix.

This is different from what C/C++ programmers might expect (see the Behavior of \xXX in C appendix).

(It would not be legal to encode the single byte literally into the string literal, since then the string would not be well-formed UTF-8.)

It has been suggested that the \xXX character escape should be removed entirely (at least from string literal contexts). This RFC is taking a slightly less aggressive stance: keep \xXX, but only for ASCII inputs when it occurs in string literals. This way, people can continue using this escape format (which is shorter than the \uNNNN format) when it makes sense.

Here are some links to discussions on this topic, including direct comments that suggest exactly the strategy of this RFC.

  • https://github.com/rust-lang/rfcs/issues/312
  • https://github.com/rust-lang/rust/issues/12769
  • https://github.com/rust-lang/rust/issues/2800#issuecomment-31477259
  • https://github.com/rust-lang/rfcs/pull/69#issuecomment-43002505
  • https://github.com/rust-lang/rust/issues/12769#issuecomment-43574856
  • https://github.com/rust-lang/meeting-minutes/blob/master/weekly-meetings/2014-01-21.md#xnn-escapes-in-strings
  • https://mail.mozilla.org/pipermail/rust-dev/2012-July/002025.html

Note in particular the meeting minutes bullet, where the team explicitly decided to keep things “as they are”.

However, at the time of that meeting, Rust did not have byte string literals; people were converting string-literals into byte arrays via the bytes! macro. (Likewise, the rust-dev post is also from a time, summer 2012, when we did not have byte-string literals.)

We are in a different world now. The fact that now \xXX denotes a code unit in a byte-string literal, but in a string literal denotes a codepoint, does not seem elegant; it rather seems like a source of confusion. (Caveat: While Felix does believe this assertion, this context-dependent interpretation of \xXX does have precedent in both Python and Racket; see Racket example and Python example appendices.)

By restricting \xXX to the range 0x00-0x7F, we side-step the question of “is it a code unit or a code point?” entirely (which was the real context of both the rust-dev thread and the meeting minutes bullet). This RFC is a far more conservative choice that we can safely make for the short term (i.e. for the 1.0 release) than it would have been to switch to a “\xXX is a code unit” interpretation.

The expected outcome is reduced confusion for C/C++ programmers (who are, after all, our primary target audience for conversion), as well as for users of any other language where \xXX never results in more than one byte. The error message will point them to the syntax they need to adopt.

Detailed design

In string literal contexts, \xXX inputs with XX > 0x7F are rejected (with an error message that mentions either, or both, of \uNNNN escapes and the byte-string literal format b"..").

The full byte range remains supported when \xXX is used in byte-string literals, b"...".

Raw strings by design do not offer escape sequences, so they are unchanged.

Character and string escaping routines (such as core::char::escape_unicode, and such as used by the "{:?}" formatter) are updated so that string inputs that previously would have printed \xXX with XX > 0x7F now use \uNNNN escapes instead.
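To make the rule concrete, here is how a few literals would fare under this proposal (illustrative only; the comments are not actual compiler diagnostics):

let a = "\x41";     // accepted: 0x41 is in the ASCII range (same as "A")
let b = "\u00FF";   // accepted: codepoints above 0x7F use \uNNNN
let c = b"\xFF";    // accepted: byte-string literals keep the full range
// let d = "\xFF";  // rejected: \xXX with XX > 0x7F in a string literal;
//                  // the error suggests "\u00FF" or b"\xFF" instead.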

Drawbacks

Some reasons not to do this:

  • we think that the current behavior is intuitive,

  • it is consistent with language X (and thus has precedent),

  • existing libraries are relying on this behavior, or

  • we want to optimize for inputting characters with codepoints in the range above 0x7F in string-literals, rather than optimizing for ASCII.

The thesis of this RFC is that the first bullet is a falsehood.

While there is some precedent for the “\xXX is code point” interpretation in some languages, the majority do seem to favor the “\xXX is code unit” point of view. The proposal of this RFC is side-stepping the distinction by limiting the input range for \xXX.

The third bullet is a strawman since we have not yet released 1.0, and thus everything is up for change.

This RFC makes no comment on the validity of the fourth bullet.

Alternatives

  • We could remove \xXX entirely from string literals. This would require people to use the \uNNNN escape format even for bytes in the range 0x00-0x7F, which seems annoying.

  • We could switch \xXX from meaning code point to meaning code unit in both string literal and byte-string literal contexts. This was previously considered and explicitly rejected in an earlier meeting, as discussed in the Motivation section.

Unresolved questions

None.

Appendices

Behavior of \xXX in C

Here is a C program illustrating how \xXX escape sequences are treated in string literals in that context:

#include <stdio.h>

int main() {
    char *s;

    s = "a";
    printf("s[0]: %d\n", s[0]);
    printf("s[1]: %d\n", s[1]);

    s = "\x61";
    printf("s[0]: %d\n", s[0]);
    printf("s[1]: %d\n", s[1]);

    s = "\x7F";
    printf("s[0]: %d\n", s[0]);
    printf("s[1]: %d\n", s[1]);

    s = "\x80";
    printf("s[0]: %d\n", s[0]);
    printf("s[1]: %d\n", s[1]);
    return 0;
}

Its output is the following:

% gcc example.c && ./a.out
s[0]: 97
s[1]: 0
s[0]: 97
s[1]: 0
s[0]: 127
s[1]: 0
s[0]: -128
s[1]: 0

Rust example

Here is a Rust program that explores the various ways \xXX sequences are treated in both string literal and byte-string literal contexts.

#![feature(macro_rules)]

fn main() {
    macro_rules! print_str {
        ($r:expr, $e:expr) => { {
            println!("{:>20}: \"{}\"",
                     format!("\"{}\"", $r),
                     $e.escape_default())
        } }
    }

    macro_rules! print_bstr {
        ($r:expr, $e:expr) => { {
            println!("{:>20}: {}",
                     format!("b\"{}\"", $r),
                     $e)
        } }
    }

    macro_rules! print_bytes {
        ($r:expr, $e:expr) => {
            println!("{:>9}.as_bytes(): {}", format!("\"{}\"", $r), $e.as_bytes())
        } }

    // println!("{}", b"\u0000"); // invalid: \uNNNN is not a byte escape.
    print_str!(r"\0", "\0");
    print_bstr!(r"\0", b"\0");
    print_bstr!(r"\x00", b"\x00");
    print_bytes!(r"\x00", "\x00");
    print_bytes!(r"\u0000", "\u0000");
    println!("");
    print_str!(r"\x61", "\x61");
    print_bstr!(r"a", b"a");
    print_bstr!(r"\x61", b"\x61");
    print_bytes!(r"\x61", "\x61");
    print_bytes!(r"\u0061", "\u0061");
    println!("");
    print_str!(r"\x7F", "\x7F");
    print_bstr!(r"\x7F", b"\x7F");
    print_bytes!(r"\x7F", "\x7F");
    print_bytes!(r"\u007F", "\u007F");
    println!("");
    print_str!(r"\x80", "\x80");
    print_bstr!(r"\x80", b"\x80");
    print_bytes!(r"\x80", "\x80");
    print_bytes!(r"\u0080", "\u0080");
    println!("");
    print_str!(r"\xFF", "\xFF");
    print_bstr!(r"\xFF", b"\xFF");
    print_bytes!(r"\xFF", "\xFF");
    print_bytes!(r"\u00FF", "\u00FF");
    println!("");
    print_str!(r"\u0100", "\u0100");
    print_bstr!(r"\x01\x00", b"\x01\x00");
    print_bytes!(r"\u0100", "\u0100");
}

In current Rust, it generates output as follows:

% rustc --version && echo && rustc example.rs && ./example
rustc 0.12.0-pre (d52d0c836 2014-09-07 03:36:27 +0000)

                "\0": "\x00"
               b"\0": [0]
             b"\x00": [0]
   "\x00".as_bytes(): [0]
 "\u0000".as_bytes(): [0]

              "\x61": "a"
                b"a": [97]
             b"\x61": [97]
   "\x61".as_bytes(): [97]
 "\u0061".as_bytes(): [97]

              "\x7F": "\x7f"
             b"\x7F": [127]
   "\x7F".as_bytes(): [127]
 "\u007F".as_bytes(): [127]

              "\x80": "\x80"
             b"\x80": [128]
   "\x80".as_bytes(): [194, 128]
 "\u0080".as_bytes(): [194, 128]

              "\xFF": "\xff"
             b"\xFF": [255]
   "\xFF".as_bytes(): [195, 191]
 "\u00FF".as_bytes(): [195, 191]

            "\u0100": "\u0100"
         b"\x01\x00": [1, 0]
 "\u0100".as_bytes(): [196, 128]
%

Note that the behavior of \xXX on byte-string literals matches the expectations established by the C program in Behavior of \xXX in C; that is good. The problem is the behavior of \xXX for XX > 0x7F in string-literal contexts, namely in the fourth and fifth examples, where the .as_bytes() invocations show that the underlying byte array has two elements instead of one.

Racket example

% racket
Welcome to Racket v5.93.
> (define a-string "\xbb\n")
> (display a-string)
»
> (bytes-length (string->bytes/utf-8 a-string))
3
> (define a-byte-string #"\xc2\xbb\n")
> (bytes-length a-byte-string)
3
> (display a-byte-string)
»
> (exit)
%

The above code illustrates that in Racket, the \xXX escape sequence denotes a code unit in byte-string context (#".." in that language), while it denotes a code point in string context ("..").

Python example

% python
Python 2.7.5 (default, Mar  9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> a_string = u"\xbb\n";
>>> print a_string
»

>>> len(a_string.encode("utf-8"))
3
>>> a_byte_string = "\xc2\xbb\n";
>>> len(a_byte_string)
3
>>> print a_byte_string
»

>>> exit()
%

The above code illustrates that in Python, the \xXX escape sequence denotes a code unit in byte-string context (".." in that language), while it denotes a code point in unicode string context (u"..").

Summary

Change the types of byte string literals to be references to statically sized types. Ensure the same change can be performed backward compatibly for string literals in the future.

Motivation

Currently byte string and string literals have types &'static [u8] and &'static str. Therefore, although the sizes of the literals are known at compile time, they are erased from their types and inaccessible until runtime. This RFC suggests changing the type of byte string literals to &'static [u8, ..N]. In addition, this RFC suggests not introducing any changes to str or string literals that would prevent a backward compatible addition of strings of fixed size FixedString<N> (the name FixedString in this RFC is a placeholder and is open for bikeshedding) and the change of the type of string literals to &'static FixedString<N> in the future.

FixedString<N> is essentially a [u8, ..N] with UTF-8 invariants and additional string methods/traits. It fills the gap in the vector/string chart:

Vec<T>      String
[T, ..N]    ???
&[T]        &str

Today, given the lack of non-type generic parameters and compile time (function) evaluation (CTE), strings of fixed size are not very useful. But after the introduction of CTE, the need for compile-time string operations will rise rapidly. Even without CTE, with non-type generic parameters alone, fixed size strings can be used at runtime for “heapless” string operations, which are useful in constrained environments or for optimization. So the main motivation for the changes today is forward compatibility.

Examples of uses for the new literals that are not possible with the old literals:

// Today: initialize mutable array with byte string literal
let mut arr: [u8, ..3] = *b"abc";
arr[0] = b'd';

// Future with CTE: compile time string concatenation
static LANG_DIR: FixedString<5 /*The size should, probably, be inferred*/> = *"lang/";
static EN_FILE: FixedString<_> = LANG_DIR + *"en"; // FixedString<N> implements Add
static FR_FILE: FixedString<_> = LANG_DIR + *"fr";

// Future without CTE: runtime "heapless" string concatenation
let DE_FILE = LANG_DIR + *"de"; // Performed at runtime if not optimized

Detailed design

Change the type of byte string literals from &'static [u8] to &'static [u8, ..N]. Leave the door open for a backward compatible change of the type of string literals from &'static str to &'static FixedString<N>.
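As a small sketch of what the new type for byte string literals enables (in today's syntax):

// The length of the literal becomes part of its type:
static CRLF: &'static [u8, ..2] = b"\r\n";

// Dereferencing copies the statically sized array, so a literal can
// directly initialize a fixed-size, mutable buffer:
let mut buf: [u8, ..2] = *b"\r\n";
buf[0] = b'\n';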

Strings of fixed size

If str is moved to the library today, then strings of fixed size can be implemented like this:

struct str<Sized? T = [u8]>(T);

Then string literals will have types &'static str<[u8, ..N]>.

Drawbacks of this approach include unnecessary exposure of the implementation - the underlying sized or unsized arrays [u8]/[u8, ..N] and the generic parameter T. The key requirement here is the autocoercion from a reference to a fixed string to a string slice, and we are unable to meet it now without exposing the implementation.

In the future, after gaining the ability to parameterize on integers, strings of fixed size could be implemented in a better way:

struct __StrImpl<Sized? T>(T); // private

pub type str = __StrImpl<[u8]>; // unsized referent of string slice `&str`, public
pub type FixedString<const N: uint> = __StrImpl<[u8, ..N]>; // string of fixed size, public

// &FixedString<N> -> &str : OK, including &'static FixedString<N> -> &'static str for string literals

So, we don’t propose making these changes today; we suggest waiting until generic parameterization on integers is added to the language.

Precedents

C and C++ string literals are lvalue char arrays of fixed size with static duration. There is a C++ library proposal for strings of fixed size (link); the paper also contains some discussion and motivation.

Rejected alternatives and discussion

Array literals

The types of array literals potentially can be changed from [T, ..N] to &'a [T, ..N] for consistency with the other literals and ergonomics. The major blocker for this change is the inability to move out from a dereferenced array literal if T is not Copy.

let mut a = *[box 1i, box 2, box 3]; // Wouldn't work without special-casing of array literals with regard to moving out from dereferenced borrowed pointer

Despite that, array literals as references would have better usability, possible staticness, and consistency with the other literals.

Usage statistics for array literals

Array literals can be used both as slices, when a view into the array is sufficient to perform the task, and as values, when the arrays themselves should be copied or modified. Exactly estimating the frequency of each use is problematic, but a regex search of the Rust codebase gives the following statistics: in approximately 70% of cases array literals are used as slices (explicit & on array literals, immutable bindings); in approximately 20% of cases array literals are used as values (initialization of struct fields, mutable bindings, boxes); in the remaining 10% of cases the usage is unclear.

So, in most cases the change to the types of array literals will lead to shorter notation.

Static lifetime

Although all the literals under consideration are similar and are essentially arrays of fixed size, array literals differ from byte string and string literals with regard to lifetimes. While byte string and string literals can always be placed into static memory and have static lifetime, array literals can depend on local variables and can’t have static lifetime in the general case. The chosen design potentially allows some array literals to be trivially enhanced with static lifetime in the future, to allow uses like

fn f() -> &'static [int] {
    [1, 2, 3]
}

Alternatives

The alternative design is to make the literals the values and not the references.

The changes

  1. Keep the types of array literals as [T, ..N]. Change the types of byte string literals from &'static [u8] to [u8, ..N]. Change the types of string literals from &'static str to FixedString<N>.

  2. Introduce the missing family of types - strings of fixed size - FixedString<N>. …

  3. Add the autocoercion of array literals (not arrays of fixed size in general) to slices. Add the autocoercion of the new byte string literals to slices. Add the autocoercion of the new string literals to slices. Non-literal arrays and strings do not autocoerce to slices, in accordance with the general agreements on explicitness.

  4. Make string and byte string literals lvalues with static lifetime.

Examples of use:

// Today: initialize mutable array with literal
let mut arr: [u8, ..3] = b"abc";
arr[0] = b'd';

// Future with CTE: compile time string concatenation
static LANG_DIR: FixedString<_> = "lang/";
static EN_FILE: FixedString<_> = LANG_DIR + "en"; // FixedString<N> implements Add
static FR_FILE: FixedString<_> = LANG_DIR + "fr";

// Future without CTE: runtime "heapless" string concatenation
let DE_FILE = LANG_DIR + "de"; // Performed at runtime if not optimized

Drawbacks of the alternative design

Special rules about (byte) string literals being static lvalues add a bit of unnecessary complexity to the specification.

In theory let s = "abcd"; copies the string from static memory to the stack, but the copy is unobservable and can probably be elided in most cases.

The set of additional autocoercions has to exist for ergonomic purpose (and for backward compatibility). Writing something like:

fn f(arg: &str) {}
f("Hello"[]);
f(&"Hello");

for all literals would be just unacceptable.

Minor breakage:

fn main() {
    let s = "Hello";
    fn f(arg: &str) {}
    f(s); // Will require explicit slicing f(s[]) or implicit DST coercion from reference f(&s)
}

Status quo

Status quo (or partial application of the changes) is always an alternative.

Drawbacks of status quo

Examples:

// Today: can't use byte string literals in some cases
let mut arr: [u8, ..3] = [b'a', b'b', b'c']; // Have to use array literals
arr[0] = b'd';

// Future: FixedString<N> is added, CTE is added, but the literal types remain old
let mut arr: [u8, ..3] = b"abc".to_fixed(); // Have to use a conversion method
arr[0] = b'd';

static LANG_DIR: FixedString<_> = "lang/".to_fixed(); // Have to use a conversion method
static EN_FILE: FixedString<_> = LANG_DIR + "en".to_fixed();
static FR_FILE: FixedString<_> = LANG_DIR + "fr".to_fixed();

// Bad future: FixedString<N> is not added
// "Heapless"/compile-time string operations aren't possible, or performed with "magic" like extended concat! or recursive macros.

Note that in the “Future” scenario the return type of to_fixed depends on the value of self, so it requires sufficiently advanced CTE; for example, C++14 with its powerful constexpr machinery still doesn’t allow writing such a function.

Drawbacks

None.

Unresolved questions

None.

Summary

Removes the “virtual struct” (aka struct inheritance) feature, which is currently feature gated.

Motivation

Virtual structs were added experimentally prior to the RFC process as a way of inheriting fields from one struct when defining a new struct.

The feature was introduced and remains behind a feature gate.

The motivations for removing this feature altogether are:

  1. The feature is likely to be replaced by a more general mechanism, as part of the need to address hierarchies such as the DOM, ASTs, and so on. See this post for some recent discussion.

  2. The implementation is somewhat buggy and incomplete, and the feature is not well-documented.

  3. Although it’s behind a feature gate, keeping the feature around is still a maintenance burden.

Detailed design

Remove the implementation and feature gate for virtual structs.

Retain the virtual keyword as reserved for possible future use.

Drawbacks

The language will no longer offer any built-in mechanism for avoiding repetition of struct fields. Macros offer a reasonable workaround until a more general mechanism is added.

Unresolved questions

None known.

Summary

Reserve abstract, final, and override as possible keywords.

Motivation

We intend to add some mechanism to Rust to support more efficient inheritance (see, e.g., RFC PRs #245 and #250, and this thread on discuss). Although we have not decided how to do this, we do know that we will. Any implementation is likely to make use of keywords virtual (already used, to remain reserved), abstract, final, and override, so it makes sense to reserve these now to make the eventual implementation as backwards compatible as possible.

Detailed design

Make abstract, final, and override reserved keywords.

Drawbacks

Takes a few more words out of the possible vocabulary of Rust programmers.

Alternatives

Don’t do this and deal with it when we have an implementation. This would mean bumping the language version, probably.

Unresolved questions

N/A

Summary

This is a conventions RFC for settling a number of remaining naming conventions:

  • Referring to types in method names
  • Iterator type names
  • Additional iterator method names
  • Getter/setter APIs
  • Associated types
  • Trait naming
  • Lint naming
  • Suffix ordering
  • Prelude traits

It also proposes to standardize on lower case error messages within the compiler and standard library.

Motivation

As part of the ongoing API stabilization process, we need to settle naming conventions for public APIs. This RFC is a continuation of that process, addressing a number of smaller but still global naming issues.

Detailed design

The RFC includes a number of unrelated naming conventions, broken down into subsections below.

Referring to types in method names

Function names often involve type names, the most common example being conversions like as_slice. If the type has a purely textual name (ignoring parameters), it is straightforward to convert between type conventions and function conventions:

Type name    Text in methods
String       string
Vec<T>       vec
YourType     your_type

Types that involve notation are less clear, so this RFC proposes some standard conventions for referring to these types. There is some overlap on these rules; apply the most specific applicable rule.

Type name    Text in methods
&str         str
&[T]         slice
&mut [T]     mut_slice
&[u8]        bytes
&T           ref
&mut T       mut
*const T     ptr
*mut T       mut_ptr

The only surprise here is the use of mut rather than mut_ref for mutable references. This abbreviation is already a fairly common convention (e.g. as_ref and as_mut methods), and is meant to keep this very common case short.
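Applied to a single hypothetical container type Buffer<T>, the table above yields method names like these (a sketch, not an existing API):

impl<T> Buffer<T> {
    fn as_slice(&self) -> &[T] { ... }              // &[T]     => slice
    fn as_mut_slice(&mut self) -> &mut [T] { ... }  // &mut [T] => mut_slice
    fn as_ref(&self) -> &T { ... }                  // &T       => ref
    fn as_mut(&mut self) -> &mut T { ... }          // &mut T   => mut
    fn as_ptr(&self) -> *const T { ... }            // *const T => ptr
    fn as_mut_ptr(&mut self) -> *mut T { ... }      // *mut T   => mut_ptr
}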

Iterator type names

The current convention for iterator type names is the following:

Iterators require introducing and exporting new types. These types should use the following naming convention:

  • Base name. If the iterator yields something that can be described with a specific noun, the base name should be the pluralization of that noun (e.g. an iterator yielding words is called Words). Generic containers use the base name Items.

  • Flavor prefix. Iterators often come in multiple flavors, with the default flavor providing immutable references. Other flavors should prefix their name:

    • Moving iterators have a prefix of Move.
    • If the default iterator yields an immutable reference, an iterator yielding a mutable reference has a prefix Mut.
    • Reverse iterators have a prefix of Rev.

(These conventions were established as part of this PR and later this one.)

These conventions have not yet been updated to reflect the recent change to the iterator method names, in part to allow for a more significant revamp. There are some problems with the current rules:

  • They are fairly loose and therefore not mechanical or predictable. In particular, the choice of noun to use for the base name is completely arbitrary.

  • They are not always applicable. The iter module, for example, defines a large number of iterator types for use in the adapter methods on Iterator (e.g. Map for map, Filter for filter, etc.). The module does not follow the convention, and it’s not clear how it could do so.

This RFC proposes to instead align the convention with the iter module: the name of an iterator type should be the same as the method that produces the iterator.

For example:

  • iter would yield an Iter
  • iter_mut would yield an IterMut
  • into_iter would yield an IntoIter

These type names make the most sense when prefixed with their owning module, e.g. vec::IntoIter.
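Under this rule, a hypothetical container module would define its iterators along these lines (a sketch; my_vec and its types are made up for illustration):

pub mod my_vec {
    pub struct MyVec<T> { ... }

    // Each iterator type is named after the method that produces it:
    pub struct Iter<'a, T> { ... }     // produced by `iter`
    pub struct IterMut<'a, T> { ... }  // produced by `iter_mut`
    pub struct IntoIter<T> { ... }     // produced by `into_iter`

    impl<T> MyVec<T> {
        pub fn iter<'a>(&'a self) -> Iter<'a, T> { ... }
        pub fn iter_mut<'a>(&'a mut self) -> IterMut<'a, T> { ... }
        pub fn into_iter(self) -> IntoIter<T> { ... }
    }
}

Elsewhere the types are then referred to as my_vec::Iter, my_vec::IntoIter, and so on.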

Advantages:

  • The rule is completely mechanical, and therefore highly predictable.

  • The convention can be (almost) universally followed: it applies equally well to vec and to iter.

Disadvantages:

  • IntoIter is not an ideal name. Note, however, that since we’ve moved to into_iter as the method name, the existing convention (MoveItems) needs to be updated to match, and it’s not clear how to do better than IntoItems in any case.

  • This naming scheme can result in clashes if multiple containers are defined in the same module. Note that this is already the case with today’s conventions. In most cases, this situation should be taken as an indication that a more refined module hierarchy is called for.

Additional iterator method names

An earlier RFC settled the conventions for the “standard” iterator methods: iter, iter_mut, into_iter.

However, there are many cases where you also want “nonstandard” iterator methods: bytes and chars for strings, keys and values for maps, the various adapters for iterators.

This RFC proposes the following convention:

  • Use iter (and variants) for data types that can be viewed as containers, and where the iterator provides the “obvious” sequence of contained items.

  • If there is no single “obvious” sequence of contained items, or if there are multiple desired views on the container, provide separate methods for these that do not use iter in their name. The name should instead directly reflect the view/item type being iterated (like bytes).

  • Likewise, for iterator adapters (filter, map and so on) or other iterator-producing operations (intersection), use the clearest name to describe the adapter/operation directly, and do not mention iter.

  • If not otherwise qualified, an iterator-producing method should provide an iterator over immutable references. Use the _mut suffix for variants producing mutable references, and the into_ prefix for variants consuming the data in order to produce owned values.

Getter/setter APIs

Some data structures do not wish to provide direct access to their fields, but instead offer “getter” and “setter” methods for manipulating the field state (often providing checking or other functionality).

The proposed convention for a field foo: T is:

  • A method foo(&self) -> &T for getting the current value of the field.
  • A method set_foo(&mut self, val: T) for setting the field. (The val argument here may take &T or some other type, depending on the context.)

Note that this convention is about getters/setters on ordinary data types, not on builder objects. The naming conventions for builder methods are still open.
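A hypothetical example of the convention (Temperature and its is_reasonable check are placeholders):

struct Thermostat { target: Temperature }

impl Thermostat {
    // Getter: named after the field, with no `get_` prefix.
    pub fn target(&self) -> &Temperature { &self.target }

    // Setter: `set_` plus the field name; a natural place for checking.
    pub fn set_target(&mut self, val: Temperature) {
        assert!(val.is_reasonable()); // hypothetical validation
        self.target = val;
    }
}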

Associated types

Unlike type parameters, the names of associated types for a trait are a meaningful part of its public API.

Associated types should be given concise, but meaningful names, generally following the convention for type names rather than for generic parameters. For example, use Err rather than E, and Item rather than T.
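For example (a sketch using the associated-items syntax from RFC 195; the traits themselves are hypothetical):

trait Collection {
    type Item;   // rather than `type T;`
    fn get(&self, i: uint) -> Option<&Self::Item>;
}

trait Parse {
    type Output;
    type Err;    // rather than `type E;`
    fn parse(input: &str) -> Result<Self::Output, Self::Err>;
}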

Trait naming

The wiki guidelines have long suggested naming traits as follows:

Prefer (transitive) verbs, nouns, and then adjectives; avoid grammatical suffixes (like -able)

Trait names like Copy, Clone and Show follow this convention. The convention avoids grammatical verbosity and gives Rust code a distinctive flavor (similar to its short keywords).

This RFC proposes to amend the convention to further say: if there is a single method that is the dominant functionality of the trait, consider using the same name for the trait itself. This is already the case for Clone and ToCStr, for example.

According to these rules, Encodable should probably be Encode.

There are some open questions about these rules; see Unresolved Questions below.

Lints

Our lint names are not consistent. While this may seem like a minor concern, when we hit 1.0 the lint names will be locked down, so it’s worth trying to clean them up now.

The basic rule is: the lint name should make sense when read as “allow lint-name” or “allow lint-name items”. For example, “allow deprecated items” and “allow dead_code” make sense, while “allow unsafe_block” is ungrammatical (should be plural).

Specifically, this RFC proposes that:

  • Lint names should state the bad thing being checked for, e.g. deprecated, so that #[allow(deprecated)] (items) reads correctly. Thus ctypes is not an appropriate name; improper_ctypes is.

  • Lints that apply to arbitrary items (like the stability lints) should just mention what they check for: use deprecated rather than deprecated_items. This keeps lint names short. (Again, think “allow lint-name items”.)

  • If a lint applies to a specific grammatical class, mention that class and use the plural form: use unused_variables rather than unused_variable. This makes #[allow(unused_variables)] read correctly.

  • Lints that catch unnecessary, unused, or useless aspects of code should use the term unused, e.g. unused_imports, unused_typecasts.

  • Use snake case in the same way you would for function names.

Suffix ordering

Very occasionally, conventions will require a method to have multiple suffixes, for example get_unchecked_mut. When feasible, design APIs so that this situation does not arise.

Because it is so rare, it does not make sense to lay out a complete convention for the order in which various suffixes should appear; no one would be able to remember it.

However, the mut suffix is so common, and is now entrenched as showing up in final position, that this RFC does propose one simple rule: if there are multiple suffixes including mut, place mut last.

Prelude traits

It is not currently possible to define inherent methods directly on basic data types like char or slices. Consequently, libcore and other basic crates provide one-off traits (like ImmutableSlice or Char) that are intended to be implemented solely by these primitive types, and which are included in the prelude.

These traits are generally not designed to be used for generic programming, but the fact that they appear in core libraries with such basic names makes it easy to draw the wrong conclusion.

This RFC proposes to use a Prelude suffix for these basic traits. Since the traits are, in fact, included in the prelude their names do not generally appear in Rust programs. Therefore, choosing a longer and clearer name will help avoid confusion about the intent of these traits, and will avoid namespace pollution.

(There is one important drawback in today’s Rust: associated functions in these traits cannot yet be called directly on the types implementing the traits. These functions are the one case where you would need to mention the trait by name, today. Hopefully, this situation will change before 1.0; otherwise we may need a separate plan for dealing with associated functions.)

Error messages

Error messages – including those produced by fail! and those placed in the desc or detail fields of e.g. IoError – should in general be in all lower case. This applies to both rustc and std.

This is already the predominant convention, but there are some inconsistencies.

Alternatives

Iterator type names

The iterator type name convention could instead basically stick with today’s convention, but using suffixes instead of prefixes, and IntoItems rather than MoveItems.

Unresolved questions

How far should the rules for trait names go? Should we avoid “-er” suffixes, e.g. have Read rather than Reader?

Summary

This is a conventions RFC that proposes that the items exported from a module should never be prefixed with that module name. For example, we should have io::Error, not io::IoError.

(An alternative design is included that special-cases overlap with the prelude.)

Motivation

Currently there is no clear prohibition around including the module’s name as a prefix on an exported item, and it is sometimes done for type names that are feared to be “popular” (like Error and Result being IoError and IoResult) for clarity.

This RFC includes two designs: one that entirely rules out such prefixes, and one that rules them out except for names that overlap with the prelude. Pros/cons are given for each.

Detailed design

The main rule being proposed is very simple: the items exported from a module should never be prefixed with the module’s name.

Rationale:

  • Avoids needless stuttering like io::IoError.
  • Any ambiguity can be worked around:
    • Either qualify by the module, i.e. io::Error,
    • Or rename on import: use io::Error as IoError.
  • The rule is extremely simple and clear.

Downsides:

  • The name may already exist in the module wanting to export it.
    • If that’s due to explicit imports, those imports can be renamed or module-qualified (see above).
    • If that’s due to a prelude conflict, however, confusion may arise due to the conventional global meaning of identifiers defined in the prelude (i.e., programmers do not expect prelude imports to be shadowed).

Overall, the RFC author believes that if this convention is adopted, confusion around redefining prelude names would gradually go away, because (at least for things like Result) we would come to expect it.

Alternative design

An alternative rule would be to never prefix an exported item with the module’s name, except for names that are also defined in the prelude, which must be prefixed by the module’s name.

For example, we would have io::Error and io::IoResult.

Rationale:

  • Largely the same as the above, but less decisively.
  • Avoids confusion around prelude-defined names.

Downsides:

  • Retains stuttering for some important cases, e.g. custom Result types, which are likely to be fairly common.
  • Makes it even more problematic to expand the prelude in the future.

Summary

This RFC is preparation for API stabilization for the std::num module. The proposal is to finish the simplification efforts started in @bjz’s reversal of the numerics hierarchy.

Broadly, the proposal is to collapse the remaining numeric hierarchy in std::num, and to provide only limited support for generic programming (roughly, only over primitive numeric types that vary based on size). Traits giving detailed numeric hierarchy can and should be provided separately through the Cargo ecosystem.

Thus, this RFC proposes to flatten or remove most of the traits currently provided by std::num, and generally to simplify the module as much as possible in preparation for API stabilization.

Motivation

History

Starting in early 2013, there was an effort to design a comprehensive “numeric hierarchy” for Rust: a collection of traits classifying a wide variety of numbers and other algebraic objects. The intent was to allow highly-generic code to be written for algebraic structures and then instantiated to particular types.

This hierarchy covered structures like bigints, but also primitive integer and float types. It was an enormous and long-running community effort.

Later, it was recognized that building such a hierarchy within libstd was misguided:

@bjz The API that resulted from #4819 attempted, like Haskell, to blend both the primitive numerics and higher level mathematical concepts into one API. This resulted in an ugly hybrid where neither goal was adequately met. I think the libstd should have a strong focus on implementing fundamental operations for the base numeric types, but no more. Leave the higher level concepts to libnum or future community projects.

The std::num module has thus been slowly migrating away from a large trait hierarchy toward a simpler one providing just APIs for primitive data types: this is @bjz’s reversal of the numerics hierarchy.

Alongside this effort, there are already external numerics packages like @bjz’s num-rs.

But we’re not finished yet.

The current state of affairs

The std::num module still contains quite a few traits that subdivide out various features of numbers:

pub trait Zero: Add<Self, Self> {
    fn zero() -> Self;
    fn is_zero(&self) -> bool;
}

pub trait One: Mul<Self, Self> {
    fn one() -> Self;
}

pub trait Signed: Num + Neg<Self> {
    fn abs(&self) -> Self;
    fn abs_sub(&self, other: &Self) -> Self;
    fn signum(&self) -> Self;
    fn is_positive(&self) -> bool;
    fn is_negative(&self) -> bool;
}

pub trait Unsigned: Num {}

pub trait Bounded {
    fn min_value() -> Self;
    fn max_value() -> Self;
}

pub trait Primitive: Copy + Clone + Num + NumCast + PartialOrd + Bounded {}

pub trait Num: PartialEq + Zero + One + Neg<Self> + Add<Self,Self> + Sub<Self,Self>
             + Mul<Self,Self> + Div<Self,Self> + Rem<Self,Self> {}

pub trait Int: Primitive + CheckedAdd + CheckedSub + CheckedMul + CheckedDiv
             + Bounded + Not<Self> + BitAnd<Self,Self> + BitOr<Self,Self>
             + BitXor<Self,Self> + Shl<uint,Self> + Shr<uint,Self> {
    fn count_ones(self) -> uint;
    fn count_zeros(self) -> uint { ... }
    fn leading_zeros(self) -> uint;
    fn trailing_zeros(self) -> uint;
    fn rotate_left(self, n: uint) -> Self;
    fn rotate_right(self, n: uint) -> Self;
    fn swap_bytes(self) -> Self;
    fn from_be(x: Self) -> Self { ... }
    fn from_le(x: Self) -> Self { ... }
    fn to_be(self) -> Self { ... }
    fn to_le(self) -> Self { ... }
}

pub trait FromPrimitive {
    fn from_i64(n: i64) -> Option<Self>;
    fn from_u64(n: u64) -> Option<Self>;

    // many additional defaulted methods
    // ...
}

pub trait ToPrimitive {
    fn to_i64(&self) -> Option<i64>;
    fn to_u64(&self) -> Option<u64>;

    // many additional defaulted methods
    // ...
}

pub trait NumCast: ToPrimitive {
    fn from<T: ToPrimitive>(n: T) -> Option<Self>;
}

pub trait Saturating {
    fn saturating_add(self, v: Self) -> Self;
    fn saturating_sub(self, v: Self) -> Self;
}

pub trait CheckedAdd: Add<Self, Self> {
    fn checked_add(&self, v: &Self) -> Option<Self>;
}

pub trait CheckedSub: Sub<Self, Self> {
    fn checked_sub(&self, v: &Self) -> Option<Self>;
}

pub trait CheckedMul: Mul<Self, Self> {
    fn checked_mul(&self, v: &Self) -> Option<Self>;
}

pub trait CheckedDiv: Div<Self, Self> {
    fn checked_div(&self, v: &Self) -> Option<Self>;
}

pub trait Float: Signed + Primitive {
    // a huge collection of static functions (for constants) and methods
    ...
}

pub trait FloatMath: Float {
    // an additional collection of methods
}

The Primitive traits are intended primarily to support a mechanism, #[deriving(FromPrimitive)], that makes it easy to provide conversions from numeric types to C-like enums.
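For reference, that mechanism is used along these lines (a sketch in today's syntax):

use std::num::FromPrimitive;

#[deriving(FromPrimitive)]
enum Color { Red = 1, Green = 2, Blue = 3 }

fn decode(n: i64) -> Option<Color> {
    // `from_i64` yields `None` when `n` matches no variant.
    FromPrimitive::from_i64(n)
}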

The Saturating and Checked traits provide operations that provide special handling for overflow and other numeric errors.
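For example, with today's signatures:

// Overflow is reported or clamped rather than wrapping silently:
let a: Option<u8> = 200u8.checked_add(&100u8); // None: 300 does not fit in u8
let b: u8 = 200u8.saturating_add(100u8);       // 255: clamped at the maximum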

Almost all of these traits are currently included in the prelude.

In addition to these traits, the std::num module includes a couple dozen free functions, most of which duplicate methods available through traits.

Where we want to go: a summary

The goal of this RFC is to refactor the std::num hierarchy with the following goals in mind:

  • Simplicity.

  • Limited generic programming: being able to work generically over the natural classes of primitive numeric types that vary only by size. There should be enough abstraction to support porting strconv, the generic string/number conversion code used in std.

  • Minimizing dependencies for libcore. For example, it should not require cmath.

  • Future-proofing for external numerics packages. The Cargo ecosystem should ultimately provide choices of sophisticated numeric hierarchies, and std::num should not get in the way.

Detailed design

Overview: the new hierarchy

This RFC proposes to collapse the trait hierarchy in std::num to just the following traits:

  • Int, implemented by all primitive integer types (u8-u64, i8-i64)
    • UnsignedInt, implemented by u8-u64
  • Signed, implemented by all signed primitive numeric types (i8-i64, f32-f64)
  • Float, implemented by f32 and f64
    • FloatMath, implemented by f32 and f64, which provides functionality from cmath

These traits inherit from all applicable overloaded operator traits (from core::ops). They suffice for generic programming over several basic categories of primitive numeric types.
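As a sketch of the limited generic programming these traits are intended to support, one could write a single function over all primitive integer types against the proposed Int:

// Works uniformly over u8-u64 and i8-i64, which vary only by size:
fn sum<T: Int>(xs: &[T]) -> T {
    let mut total: T = Int::zero();
    for x in xs.iter() {
        total = total + *x;
    }
    total
}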

As designed, these traits include a certain amount of redundancy between Int and Float. The Alternatives section shows how this could be factored out into a separate Num trait. But doing so suggests a level of generic programming that these traits aren’t intended to support.

The main reason to pull out Signed into its own trait is so that it can be added to the prelude. (Further discussion below.)

Detailed definitions

Below is the full definition of these traits. The functionality remains largely as it is today, just organized into fewer traits:

pub trait Int: Copy + Clone + PartialOrd + PartialEq
             + Add<Self,Self> + Sub<Self,Self>
             + Mul<Self,Self> + Div<Self,Self> + Rem<Self,Self>
             + Not<Self> + BitAnd<Self,Self> + BitOr<Self,Self>
             + BitXor<Self,Self> + Shl<uint,Self> + Shr<uint,Self>
{
    // Constants
    fn zero() -> Self;  // These should be associated constants when those are available
    fn one() -> Self;
    fn min_value() -> Self;
    fn max_value() -> Self;

    // Deprecated:
    // fn is_zero(&self) -> bool;

    // Bit twiddling
    fn count_ones(self) -> uint;
    fn count_zeros(self) -> uint { ... }
    fn leading_zeros(self) -> uint;
    fn trailing_zeros(self) -> uint;
    fn rotate_left(self, n: uint) -> Self;
    fn rotate_right(self, n: uint) -> Self;
    fn swap_bytes(self) -> Self;
    fn from_be(x: Self) -> Self { ... }
    fn from_le(x: Self) -> Self { ... }
    fn to_be(self) -> Self { ... }
    fn to_le(self) -> Self { ... }

    // Checked arithmetic
    fn checked_add(self, v: Self) -> Option<Self>;
    fn checked_sub(self, v: Self) -> Option<Self>;
    fn checked_mul(self, v: Self) -> Option<Self>;
    fn checked_div(self, v: Self) -> Option<Self>;
    fn saturating_add(self, v: Self) -> Self;
    fn saturating_sub(self, v: Self) -> Self;
}

pub trait UnsignedInt: Int {
    fn is_power_of_two(self) -> bool;
    fn checked_next_power_of_two(self) -> Option<Self>;
    fn next_power_of_two(self) -> Self;
}
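
For instance (illustrative values):

assert!(8u32.is_power_of_two());
assert_eq!(5u32.next_power_of_two(), 8);
assert_eq!(3u8.checked_next_power_of_two(), Some(4));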

pub trait Signed: Neg<Self> {
    fn abs(&self) -> Self;
    fn signum(&self) -> Self;
    fn is_positive(&self) -> bool;
    fn is_negative(&self) -> bool;

    // Deprecated:
    // fn abs_sub(&self, other: &Self) -> Self;
}

pub trait Float: Copy + Clone + PartialOrd + PartialEq + Signed
               + Add<Self,Self> + Sub<Self,Self>
               + Mul<Self,Self> + Div<Self,Self> + Rem<Self,Self>
{
    // Constants
    fn zero() -> Self;  // These should be associated constants when those are available
    fn one() -> Self;
    fn min_value() -> Self;
    fn max_value() -> Self;

    // Classification and decomposition
    fn is_nan(self) -> bool;
    fn is_infinite(self) -> bool;
    fn is_finite(self) -> bool;
    fn is_normal(self) -> bool;
    fn classify(self) -> FPCategory;
    fn integer_decode(self) -> (u64, i16, i8);

    // Float intrinsics
    fn floor(self) -> Self;
    fn ceil(self) -> Self;
    fn round(self) -> Self;
    fn trunc(self) -> Self;
    fn mul_add(self, a: Self, b: Self) -> Self;
    fn sqrt(self) -> Self;
    fn powi(self, n: i32) -> Self;
    fn powf(self, n: Self) -> Self;
    fn exp(self) -> Self;
    fn exp2(self) -> Self;
    fn ln(self) -> Self;
    fn log2(self) -> Self;
    fn log10(self) -> Self;

    // Conveniences
    fn fract(self) -> Self;
    fn recip(self) -> Self;
    fn rsqrt(self) -> Self;
    fn to_degrees(self) -> Self;
    fn to_radians(self) -> Self;
    fn log(self, base: Self) -> Self;
}

// This lives directly in `std::num`, not `core::num`, since it requires `cmath`
pub trait FloatMath: Float {
    // Exactly the methods defined in today's version
}

Float constants, float math, and cmath

This RFC proposes to:

  • Remove all float constants from the Float trait. These constants are available directly from the f32 and f64 modules, and are not really useful for the kind of generic programming these new traits are intended to allow.

  • Continue providing various cmath functions as methods in the FloatMath trait. Putting this in a separate trait means that libstd depends on cmath but libcore does not.

Free functions

All of the free functions defined in std::num are deprecated.

The prelude

The prelude will only include the Signed trait, as the operations it provides are widely expected to be available when they apply.

The reason for removing the rest of the traits is two-fold:

  • The remaining operations are relatively uncommon. Note that various overloaded operators, like +, work regardless of this choice. Those doing intensive work with e.g. floats would only need to import Float and FloatMath (see the sketch after this list).

  • Keeping this functionality out of the prelude means that the names of methods and associated items remain available for external numerics libraries in the Cargo ecosystem.
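
Under this scheme, for example, float-heavy code would explicitly import the traits it needs; a minimal sketch (hypot is an illustrative helper):

use std::num::Float;

fn hypot(x: f64, y: f64) -> f64 {
    // sqrt comes from the Float trait, which is no longer in the prelude.
    (x * x + y * y).sqrt()
}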

strconv, FromStr, ToStr, FromStrRadix, ToStrRadix

Currently, traits for converting from &str and to String are both included, in their own modules, in libstd. This is largely due to the desire to provide impls for numeric types, which in turn relies on std::num::strconv.

This RFC proposes to:

  • Move the FromStr trait into core::str.
  • Rename the ToStr trait to ToString, and move it to collections::string.
  • Break up and revise std::num::strconv into separate, private modules that provide the needed functionality for the from_str and to_string methods. (Some of this functionality has already migrated to fmt and been deprecated in strconv.)
  • Move the FromStrRadix trait into core::num.
  • Remove ToStrRadix, which is already deprecated in favor of fmt.
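
The user-facing conversions these traits back remain as they are today; a brief, hedged sketch:

let n: Option<int> = from_str("42");   // backed by FromStr
let s: String = 42i.to_string();       // backed by ToString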

FromPrimitive and friends

Ideally, the FromPrimitive, ToPrimitive, Primitive, NumCast traits would all be removed in favor of a more principled way of working with C-like enums. However, such a replacement is outside of the scope of this RFC, so these traits are left (as #[experimental]) for now. A follow-up RFC proposing a better solution should appear soon.

In the meantime, see this proposal and the discussion on this issue about Ordinal for the rough direction forward.

Drawbacks

This RFC somewhat reduces the potential for writing generic numeric code with std::num traits. This is intentional, however: the new design represents “just enough” generics to cover differently-sized built-in types, without any attempt at general algebraic abstraction.

Alternatives

The status quo is clearly not ideal, and as explained above there was a long attempt at providing a more complete numeric hierarchy in std. So some collapse of the hierarchy seems desirable.

That said, there are other possible factorings. We could introduce the following Num trait to factor out commonalities between Int and Float:

pub trait Num: Copy + Clone + PartialOrd + PartialEq
             + Add<Self,Self> + Sub<Self,Self>
             + Mul<Self,Self> + Div<Self,Self> + Rem<Self,Self>
{
    fn zero() -> Self;  // These should be associated constants when those are available
    fn one() -> Self;
    fn min_value() -> Self;
    fn max_value() -> Self;
}

However, it’s not clear whether this factoring is worth the more complex hierarchy, especially because the traits are not intended for generic programming at that level (and generic programming across integer and floating-point types is likely to be extremely rare).

The signed and unsigned operations could be offered on more types, allowing the removal of more traits at the cost of less clear-cut semantics.

Unresolved questions

This RFC does not propose a replacement for #[deriving(FromPrimitive)], leaving the relevant traits in limbo status. (See this proposal and the discussion on this issue about Ordinal for the rough direction forward.)

  • Start Date: 2014-10-09
  • RFC PR #: https://github.com/rust-lang/rfcs/pull/378
  • Rust Issue #: https://github.com/rust-lang/rust/issues/18635

Summary

Parse macro invocations with parentheses or square brackets as expressions no matter the context, and require curly braces or a semicolon following the invocation to invoke a macro as a statement.

Motivation

Currently, macros that start a statement want to be a whole statement, and so expressions such as foo!().bar don’t parse if they start a statement. The reason for this is because sometimes one wants a macro that expands to an item or statement (for example, macro_rules!), and forcing the user to add a semicolon to the end is annoying and easy to forget for long, multi-line statements. However, the vast majority of macro invocations are not intended to expand to an item or statement, leading to frustrating parser errors.

Unfortunately, this is not as easy to resolve as simply checking for an infix operator after every statement-like macro invocation, because there exist operators that are both infix and prefix. For example, consider the following function:

fn frob(x: int) -> int {
    maybe_return!(x)
    // Provide a default value
    -1
}

Today, this parses successfully. However, if a rule were added to the parser that any macro invocation followed by an infix operator be parsed as a single expression, this would still parse successfully, but not in the way expected: it would be parsed as (maybe_return!(x)) - 1. This is an example of how it is impossible to resolve this ambiguity properly without breaking compatibility.

Detailed design

Treat all macro invocations with parentheses, (), or square brackets, [], as expressions, and never attempt to parse them as statements or items in a block context unless they are followed directly by a semicolon. Require all item-position macro invocations to be either invoked with curly braces, {}, or be followed by a semicolon (for consistency).

This distinction between parentheses and curly braces has precedent in Rust: tuple structs, which use parentheses, must be followed by a semicolon, while structs with fields do not need to be followed by a semicolon. Many constructs like match and if, which use curly braces, also do not require semicolons when they begin a statement.
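
Concretely, under the proposed rules (foo! and bar! are illustrative macros):

let v = vec![1i, 2, 3].len();   // bracketed invocation parses as an expression
foo!(x);                        // parenthesized invocation as a statement needs `;`
bar! { x }                      // braced invocation may stand alone as a statement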

Drawbacks

  • This introduces a difference between different macro invocation delimiters, where previously there was no difference.
  • This requires the use of semicolons in a few places where it was not necessary before.

Alternatives

  • Require semicolons after all macro invocations that aren’t being used as expressions. This would have the downside of requiring semicolons after every macro_rules! declaration.

Unresolved questions

None.

Summary

  • Remove reflection from the compiler
  • Remove libdebug
  • Remove the Poly format trait as well as the :? format specifier

Motivation

In ancient Rust, one of the primary methods of printing a value was via the %? format specifier. This would use reflection at runtime to determine how to print a type. Metadata generated by the compiler (a TyDesc) would guide the runtime in how to print a type. One of the great parts about reflection was that it was quite easy to print any type. No extra burden was required from the programmer to print something.

There are, however, a number of cons to this approach:

  • Generating extra metadata for many many types by the compiler can lead to noticeable increases in compile time and binary size.
  • This form of formatting is inherently not speedy. Widespread usage of %? led to misleading benchmarks about formatting in Rust.
  • Depending on how metadata is handled, this scheme makes it very difficult to allow recompiling a library without recompiling downstream dependants.

Over time, usage of the ? formatting has fallen out of fashion for the following reasons:

  • The deriving-based infrastructure was improved greatly and has started seeing much more widespread use, especially for traits like Clone.
  • The formatting language implementation and syntax has changed. The most common formatter is now {} (an implementation of Show), and it is quite common to see an implementation of Show on nearly all types (frequently via deriving). This form of customizable-per-type formatting largely fills the gap that the original formatting language left, as it was limited to only primitives and %?.
  • Compiler built-ins, such as ~[T] and ~str, have been removed from the language, and runtime reflection on Vec<T> and String is far less useful (it just prints pointers, not contents).

As a result, the :? formatting specifier is quite rarely used today, and when it is used it’s largely for historical purposes and the output is not of very high quality any more.

The drawbacks and today’s current state of affairs motivate this RFC to recommend removing this infrastructure entirely. It’s possible to add it back in the future with a more modern design reflecting today’s design principles of Rust and the many language changes since the infrastructure was created.

Detailed design

  • Remove all reflection infrastructure from the compiler. I am not personally super familiar with what exists, but at least these concrete actions will be taken.
    • Remove the visit_glue function from TyDesc.
    • Remove any form of visit_glue generation.
    • (maybe?) Remove the name field of TyDesc.
  • Remove core::intrinsics::TyVisitor
  • Remove core::intrinsics::visit_tydesc
  • Remove libdebug
  • Remove std::fmt::Poly
  • Remove the :? format specifier in the formatting language syntax.

Drawbacks

The current infrastructure for reflection, although outdated, represents a significant investment of past work which it would be a shame to lose. While it will remain present in the git history, this infrastructure has been updated over time, and it will no longer receive this attention.

Additionally, it will no longer be possible to print an arbitrary type T in literally any situation. Type parameters will now require some bound, such as Show, to allow printing a type.

These two drawbacks are currently not seen as large enough to outweigh the gains from reducing the surface area of the std::fmt API and reduction in maintenance load on the compiler.

Alternatives

The primary alternative to outright removing this infrastructure is to preserve it, but flag it all as #[experimental] or feature-gated. The compiler could require the fmt_poly feature gate to be enabled to enable formatting via :? in a crate. This would mean that any backwards-incompatible changes could continue to be made, and any arbitrary type T could still be printed.

Unresolved questions

  • Can core::intrinsics::TyDesc be removed entirely?

Summary

Stabilize the std::fmt module, in addition to the related macros and formatting language syntax. As a high-level summary:

  • Leave the format syntax as-is.
  • Remove a number of superfluous formatting traits (renaming a few in the process).

Motivation

This RFC is primarily motivated by the need to stabilize std::fmt. In the past stabilization has not required RFCs, but the changes envisioned for this module are far-reaching and modify some parts of the language (format syntax), leading to the conclusion that this stabilization effort required an RFC.

Detailed design

The std::fmt module encompasses not just the actual structs/traits/functions/etc. defined within it, but also a number of macros and the formatting language syntax for describing format strings. Each of these features of the module will be described in turn.

Formatting Language Syntax

The documented syntax will not be changing as-written. All of these features will be accepted wholesale (considered stable):

  • Usage of {} for “format something here” placeholders
  • {{ as an escape for { (and vice-versa for })
  • Various format specifiers
    • fill character for alignment
    • actual alignment, left (<), center (^), and right (>).
    • sign to print (+ or -)
    • minimum width for text to be printed
      • both a literal count and a runtime argument to the format string
    • precision or maximum width
      • all of a literal count, a specific runtime argument to the format string, and “the next” runtime argument to the format string.
    • “alternate formatting” (#)
    • leading zeroes (0)
  • Integer specifiers of what to format ({0})
  • Named arguments ({foo})

Using Format Specifiers

While occasionally quite useful, there is no static guarantee that any implementation of a formatting trait actually respects the format specifiers passed in. For example, this code does not necessarily work as expected:

#[deriving(Show)]
struct A;

format!("{:10}", A);

All of the primitives for rust (strings, integers, etc) have implementations of Show which respect these formatting flags, but almost no other implementations do (notably those generated via deriving).

This RFC proposes stabilizing the formatting flags, despite this current state of affairs. There are in theory possible alternatives in which there is a static guarantee that a type does indeed respect format specifiers when one is provided, generating a compile-time error when a type doesn’t respect a specifier. These alternatives, however, appear to be too heavyweight and are considered somewhat overkill.

In general it’s trivial to respect format specifiers if an implementation delegates to a primitive or somehow has a buffer of what’s to be formatted. To cover these two use cases, the Formatter structure passed around has helper methods to assist in formatting these situations. It is, however, quite rare for an implementation to fall into one of these two buckets, so the specifiers are largely ignored (and the formatter is written to directly via write!).
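
As a sketch, the struct A from above could respect the flags with a manual implementation that delegates to the Formatter's pad helper (which honors fill, alignment, width, and precision):

use std::fmt;

struct A;

impl fmt::Show for A {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        // Delegating to `pad` makes `{:10}` and friends behave as expected.
        f.pad("A")
    }
}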

Named Arguments

Currently Rust does not support named arguments anywhere except for format strings. Format strings can get away with it because they’re all part of a macro invocation (unlike the rest of Rust syntax).

The worry for stabilizing a named argument syntax for the formatting language is that if Rust ever adopts named arguments with a different syntax, it would be quite odd having two systems.

The most recently proposed keyword argument RFC used : for the invocation syntax rather than = as formatting does today. Additionally, today foo = bar is a valid expression, having a value of type ().

With these worries in mind, one of two routes could be pursued:

  1. The expr = expr syntax could be disallowed on the language level. This could happen either wholesale, or only when the expression appears as a function argument. In either case, this will probably be considered a “wart” of Rust’s grammar.
  2. The foo = bar syntax could be allowed in the macro with prior knowledge that the default argument syntax for Rust, if one is ever developed, will likely be different. This would mean that the foo = bar syntax in formatting macros will likely be considered a wart in the future.

Given these two cases, the clear choice seems to be accepting a wart in the formatting macros themselves. It will likely be possible to extend the macro in the future to support whatever named argument syntax is developed as well, and the old syntax could be accepted for some time.

Formatting Traits

Today there are 16 formatting traits. Each trait represents a “type” of formatting, corresponding to the [type] production in the formatting syntax. As a bit of history, the original intent was for each trait to declare what specifier it used, allowing users to add more specifiers in newer crates. For example, the time crate could provide the {:time} formatting trait. This design was seen as too complicated, though, so it was not landed. It does, however, partly motivate why there is one trait per format specifier today.

The 16 formatting traits and their format specifiers are:

  • nothing ⇒ Show
  • d ⇒ Signed
  • i ⇒ Signed
  • u ⇒ Unsigned
  • b ⇒ Bool
  • c ⇒ Char
  • o ⇒ Octal
  • x ⇒ LowerHex
  • X ⇒ UpperHex
  • s ⇒ String
  • p ⇒ Pointer
  • t ⇒ Binary
  • f ⇒ Float
  • e ⇒ LowerExp
  • E ⇒ UpperExp
  • ? ⇒ Poly

This RFC proposes removing the following traits:

  • Signed
  • Unsigned
  • Bool
  • Char
  • String
  • Float

Note that this RFC would like to remove Poly, but that is covered by a separate RFC.

Today by far the most common formatting trait is Show, and over time the usefulness of the other formatting traits has been reduced. The traits this RFC proposes to remove serve only as assertions that the type provided actually implements the trait; there are few known implementations of these traits which diverge in how they are implemented.

Additionally, there are two oddities inherited from ancient C:

  • Both d and i are wired to Signed
  • One may reasonably expect the Binary trait to use b as its specifier (it instead uses t).

This RFC recommends leaving the remaining traits as they are. The rationale is that they provide alternate representations of primitive types in general, and are also quite often expected when coming from other format syntaxes such as C/Python/Ruby/etc.

It would, of course, be possible to re-add any of these traits in a backwards-compatible fashion.

Format type for Binary

With the removal of the Bool trait, this RFC recommends renaming the specifier for Binary to b instead of t.

Combining all traits

A possible alternative to having many traits is to instead have one trait, such as:

pub trait Show {
    fn fmt(...);
    fn hex(...) { fmt(...) }
    fn lower_hex(...) { fmt(...) }
    ...
}

There are a number of pros to this design:

  • Instead of having to consider many traits, only one trait needs to be considered.
  • All types automatically implement all format types or zero format types.
  • In a hypothetical world where a format string could be constructed at runtime, this would simplify the signature of such a function. The concrete type taken for all its arguments would be &Show, and then if the format string supplied :x or :o, the runtime would simply delegate to the relevant trait method.

There are also a number of cons to this design, which motivate this RFC recommending the remaining separation of these traits.

  • The “static assertion” that a type implements a relevant format trait becomes almost nonexistent, because all types implement either none or all of the formatting traits.
  • The documentation for the Show trait becomes somewhat overwhelming because it’s no longer immediately clear which method should be overridden for what.
  • A hypothetical world with runtime format string construction could find a different system for taking arguments.

Method signature

Currently, each formatting trait has a signature as follows:

fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result;

This implies that all formatting is considered to be a stream-oriented operation where f is a sink to write bytes to. The fmt::Result type indicates that some form of “write error” happened, but conveys no extra information.

This API has a number of oddities:

  • The Formatter type has inherent write and write_fmt methods which, used in conjunction with the write! macro, return an instance of fmt::Result.
  • The Formatter type also implements the std::io::Writer trait in order to be able to pass around a &mut Writer.
  • This relies on the duck-typing of macros and for the inherent write_fmt method to trump the Writer’s write_fmt method in order to return an error of the correct type.
  • The error of the Result return type is an enumeration, FormatError, with precisely one variant, WriteError.

Overall, this signature seems to be appropriate in terms of “give me a sink of bytes to write myself to, and let me return an error if one happens”. Due to this, this RFC recommends that all formatting traits be marked #[unstable].

Macros

There are a number of prelude macros which interact with the format syntax:

  • format_args
  • format_args_method
  • write
  • writeln
  • print
  • println
  • format
  • fail
  • assert
  • debug_assert

All of these are macro_rules!-defined macros, except for format_args and format_args_method.

Common syntax

All of these macros take some form of prefix, while the trailing suffix is always some instantiation of the formatting syntax. The suffix portion is recommended to be considered #[stable], and the sections below will discuss each macro in detail with respect to its prefix and semantics.

format_args

The fundamental purpose of this macro is to generate a value of type &fmt::Arguments which represents a pending format computation. This structure can then be passed at some point to the methods in std::fmt to actually perform the format.

The prefix of this macro is some “callable thing”, be it a top-level function or a closure. It cannot invoke a method, because foo.bar on its own is not a “callable thing” that calls the bar method on foo.

Ideally, this macro would have no prefix, and would be callable like:

use std::fmt;

let args = format_args!("Hello {}!", "world");
let hello_world = fmt::format(args);

Unfortunately, without an implementation of RFC 31 this is not possible. As a result, this RFC proposes a #[stable] consideration of this macro and its syntax.

format_args_method

The purpose of this macro is to solve the “call this method” case not covered with the format_args macro. This macro was introduced fairly late in the game to solve the problem that &*trait_object was not allowed. This is currently allowed, however (due to DST).

This RFC proposes immediately removing this macro. The primary user of this macro is write!, meaning that the following code, which compiles today, would need to be rewritten:

let mut output = std::io::stdout();
// note the lack of `&mut` in front
write!(output, "hello {}", "world");

The write! macro would be redefined as:

macro_rules! write(
    ($dst:expr, $($arg:tt)*) => ({
        let dst = &mut *$dst;
        format_args!(|args| { dst.write_fmt(args) }, $($arg)*)
    })
)

The purpose here is to borrow $dst outside of the closure to ensure that the closure doesn’t borrow too many of its contents. Otherwise, code such as this would be disallowed:

write!(&mut my_struct.writer, "{}", my_struct.some_other_field);

write/writeln

These two macros take as a prefix “some pointer to a writer”, and then format data into the writer (returning whatever write_fmt returns). These macros were originally designed to require a &mut T as the first argument, but today, due to the usage of format_args_method, they can take any T which responds to write_fmt.

This RFC recommends marking these two macros #[stable] with the modification above (removing format_args_method). The ln suffix to writeln will be discussed shortly.

print/println

These two macros take no prefix, and semantically print to a task-local stdout stream. The purpose of a task-local stream is to provide some form of buffering to make stdout printing at all performant.

This RFC recommends marking these two macros #[stable].

The ln suffix

The name println is one of the few locations in Rust where a short C-like abbreviation is accepted rather than the more verbose, but clear, print_line (for example). Due to the overwhelming precedent of other languages (even Java uses println!), this is seen as an acceptable special case to the rule.

format

This macro takes no prefix and returns a String.

In ancient Rust this macro went by the shorter name fmt. Additionally, the name format is somewhat inconsistent with the fmt module name. Despite this, this RFC recommends considering this macro #[stable], due to its delegation to the format method in the std::fmt module, similar to how the write! macro delegates to fmt::write.

fail/assert/debug_assert

The format string portions of these macros are recommended to be considered as #[stable] as part of this RFC. The actual stability of the macros is not considered as part of this RFC.

Freestanding Functions

There are a number of freestanding functions to consider in the std::fmt module for stabilization.

  • fn format(args: &Arguments) -> String

    This RFC recommends #[experimental]. This method is largely an implementation detail of this module, and should instead be used via:

    let args: &fmt::Arguments = ...;
    format!("{}", args)
  • fn write(output: &mut FormatWriter, args: &Arguments) -> Result

    This is somewhat surprising in that the argument to this function is not a Writer, but rather a FormatWriter. Technically speaking this is due to the core/std separation: this function is defined in core, while Writer is defined in std.

    This RFC recommends marking this function #[experimental], as write_fmt exists on Writer to perform the corresponding operation. Consequently we may wish to remove this function in favor of the write_fmt method on FormatWriter.

    Ideally this method would be removed from the public API as it is just an implementation detail of the write! macro.

  • fn radix<T>(x: T, base: u8) -> RadixFmt<T, Radix>

    This function is a bit of an odd-man-out in that it is a constructor, but does not follow the existing conventions of Type::new. The purpose of this function is to expose the ability to format a number for any radix. The default format specifiers :o, :x, and :t are essentially shorthands for this function, except that the format types have specialized implementations per radix instead of a generic implementation.

    This RFC proposes that this function be considered #[unstable] as its location and naming are a bit questionable, but the functionality is desired.

Miscellaneous items

  • trait FormatWriter

    This trait is currently the actual implementation strategy of formatting, and is defined specially in libcore. It is rarely used outside of libcore. It is recommended to be #[experimental].

    There are possibilities in moving Reader and Writer to libcore with the error type as an associated item, allowing the FormatWriter trait to be eliminated entirely. Due to this possibility, the trait will be experimental for now as alternative solutions are explored.

  • struct Argument, mod rt, fn argument, fn argumentstr, fn argumentuint, Arguments::with_placeholders, Arguments::new

    These are implementation details of the Arguments structure as well as the expansion of the format_args! macro. It’s recommended to mark these as #[experimental] and #[doc(hidden)]. Ideally there would be some form of macro-based privacy hygiene which would allow these to be truly private, but it will likely be the case that these simply become stable and we must live with them forever.

  • struct Arguments

    This is a representation of a “pending format string” which can be used to safely execute a Formatter over it. This RFC recommends #[stable].

  • struct Formatter

    This instance is passed to all formatting trait methods and contains helper methods for respecting formatting flags. This RFC recommends #[unstable].

    This RFC also recommends deprecating all public fields in favor of accessor methods. This should help provide future extensibility as well as preventing unnecessary mutation in the future.

  • enum FormatError

    This enumeration only has one instance, WriteError. It is recommended to make this a struct instead and rename it to just Error. The purpose of this is to signal that an error has occurred as part of formatting, but it does not provide a generic method to transmit any other information other than “an error happened” to maintain the ergonomics of today’s usage. It’s strongly recommended that implementations of Show and friends are infallible and only generate an error if the underlying Formatter returns an error itself.

  • Radix/RadixFmt

    Like the radix function, this RFC recommends #[unstable] for both of these pieces of functionality.

Drawbacks

Today’s macro system necessitates exporting many implementation details of the formatting system, which is unfortunate.

Alternatives

A number of alternatives were laid out in the detailed description for various aspects.

Unresolved questions

  • How feasible and/or important is it to construct a format string at runtime, given the recommended stability levels in this RFC?

Module system cleanups

Summary

  • Lift the hard ordering restriction between extern crate, use and other items.
  • Allow pub extern crate as opposed to only private ones.
  • Allow extern crate in blocks/functions, and not just in modules.

Motivation

The main motivation is consistency and simplicity: none of the changes proposed here change the module system in any meaningful way; they just remove weird forbidden corner cases that are already expressible today with workarounds.

Thus, they make the system easier for beginners to learn, and easier for developers to evolve their module hierarchies.

Lifting the ordering restriction between extern crate, use and other items.

Currently, certain items need to be written in a fixed order: first all extern crate, then all use, and then all other items. This has historical reasons, due to the older, more complex resolution algorithm, which allowed shadowing between those items in that order, and usability reasons, as the fixed order makes it easy to locate imports and library dependencies.

However, after RFC 50 was accepted there is only ever one item name in scope from any given source, so the historical “hard” reasons lose validity: any resolution algorithm that used to first process all extern crate, then all use, and then all other items can still do so; it just has to filter the relevant items out of the whole module body, rather than out of sequential sections of it. And any usability reasons for keeping the order can be better addressed with conventions and lints, rather than hard parser rules.

(The exception here are the special cased prelude, and globs and macros, which are feature gated and out of scope for this proposal)

As it is, today the ordering rule is an unnecessary complication, as it routinely causes beginners to stumble over things like this:

mod foo;
use foo::bar; // ERROR: Imports have to precede items

In addition, it doesn’t even prevent certain patterns, as it is possible to work around the order restriction by using a submodule:

struct Foo;
// One of many ways to expose the crate out of order:
mod bar { extern crate bar; pub use self::bar::x; pub use self::bar::y; ... }

Which, with this RFC implemented, would be identical to:

struct Foo;
extern crate bar;

Another use case is item macros/attributes that want to automatically include their crate dependencies. This is possible by having the macro expand to an item that links to the needed crate, e.g. like this:

#[my_attribute]
struct UserType;

Expands to:

struct UserType;
extern crate "MyCrate" as <gensymb>
impl <gensymb>::MyTrait for UserType { ... }

With the order restriction still in place, this requires the submodule workaround, which is unnecessarily verbose.

As an example, gfx-rs currently employs this strategy.

Allow pub extern crate as opposed to only private ones.

Semantically, extern crate is somewhere between importing a module with use and declaring one with mod, and is identical to both as far as the module path to it is concerned. As such, it is surprising that it’s not possible to declare an extern crate as public, even though you can still make it so with a reexport:

mod foo {
    extern crate "bar" as bar_;
    pub use bar_ as bar;
}

While it’s generally not necessary to export an external library directly, the need for it does arise occasionally during refactorings of huge crate collections, generally when a public module gets turned into its own crate.

As an example, the author recalls stumbling over this during a refactoring of gfx-rs.

Allow extern crate in blocks/functions, and not just in modules.

Similar to the point above, it’s currently possible to both import and declare a module in a block expression or function body, but not to link to a library:

fn foo() {
    let x = {
        extern crate qux; // ERROR: Extern crate not allowed here
        use bar::baz;     // OK
        mod bar { ... };  // OK
        qux::foo()
    };
}

This is again an unnecessary restriction, considering that you can declare modules and imports there, and can thus already make an extern library reachable at that point:

fn foo() {
    let x = {
        mod qux { extern crate "qux" as qux_; pub use self::qux_ as qux; }
        qux::foo()
    };
}

This again benefits macros, and gives the developer the power to place external dependencies needed only by a single function lexically near that function.

General benefits

In general, the simplification and freedom added by these changes would positively affect the docs of Rust’s module system (which is already often regarded as too complex by outsiders), and possibly admit other simplifications or RFCs based on the now-equal treatment of view items and other items in the module system.

(As an example, the author is considering an RFC about merging the use and type features; by lifting the ordering restriction they become more similar and thus more redundant)

This also does not have to be a 1.0 feature, as it is entirely backwards compatible to implement, and strictly allows more programs to compile than before. However, as alluded to above, it might be a good idea for 1.0 regardless.

Detailed design

  • Remove the ordering restriction from resolve
  • If necessary, change resolve to look in the whole scope block for view items, not just in a prefix of it.
  • Make pub extern crate parse and teach privacy about it
  • Allow extern crate view items in blocks

Drawbacks

  • The source of names in scope might be harder to track down
  • Similarly, it might become confusing to see whether a library dependency exists.

However, these issues already exist today in one form or another, and can be addressed by proper docs that make library dependencies clear, and by the fact that definitions are generally greppable in a file.

Alternatives

As this just cleans up a few aspects of the module system, there isn’t really an alternative apart from not or only partially implementing it.

By not implementing this proposal, the module system remains more complex for the user than necessary.

Unresolved questions

  • Inner attributes occupy the same syntactic space as items and view items, and are currently also forced into a given order by needing to be written first. This is also potentially confusing or restrictive for the same reasons as for the view items mentioned above, especially in regard to the built-in crate attributes, and has one big issue: it is currently not possible to load a syntax extension that provides a crate-level attribute, as with the current macro system this would have to be written like this:

    #[phase(plugin)]
    extern crate mycrate;
    #![myattr]
    

    Which is impossible to write due to the ordering restriction. However, as attributes and the macro system are also not finalized, this has not been included in this RFC directly.

  • This RFC also explicitly does not talk about wildcard imports and macros in regard to resolution, as those are feature gated today and likely subject to change. In any case, it seems unlikely that they will conflict with the changes proposed here, as macros would likely follow the same module system rules where possible, and wildcard imports would either be removed, or allowed in a way that doesn’t conflict with explicitly imported names, to prevent compilation errors on upstream library changes (a new public item may not conflict with downstream items).

Summary

  • Add the ability to have trait bounds that are polymorphic over lifetimes.

Motivation

Currently, closure types can be polymorphic over lifetimes. But closure types are deprecated in favor of traits and object types as part of RFC #44 (unboxed closures). We need to close the gap. The canonical example of where you want this is if you would like a closure that accepts a reference with any lifetime. For example, today you might write:

fn with(callback: |&Data|) {
    let data = Data { ... };
    callback(&data)
}

If we try to write this using unboxed closures today, we have a problem:

fn with<'a, T>(callback: T)
    where T : FnMut(&'a Data)
{
    let data = Data { ... };
    callback(&data)
}

// Note that the `()` syntax is shorthand for the following:
fn with<'a, T>(callback: T)
    where T : FnMut<(&'a Data,),()>
{
    let data = Data { ... };
    callback(&data)
}

The problem is that the argument type &'a Data must include a lifetime, and there is no lifetime one could write in the fn sig that represents “the stack frame of the with function”. Naturally we have the same problem if we try to use an FnMut object (which is the closer analog to the original closure example):

fn with<'a>(callback: &mut FnMut(&'a Data))
{
    let data = Data { ... };
    callback(&data)
}

fn with<'a>(callback: &mut FnMut<(&'a Data,),()>)
{
    let data = Data { ... };
    callback(&data)
}

Under this proposal, you would be able to write this code as follows:

// Using the FnMut(&Data) notation, the &Data is
// in fact referencing an implicit bound lifetime, just
// as with closures today.
fn with<T>(callback: T)
    where T : FnMut(&Data)
{
    let data = Data { ... };
    callback(&data)
}

// If you prefer, you can use an explicit name,
// introduced by the `for<'a>` syntax.
fn with<T>(callback: T)
    where T : for<'a> FnMut(&'a Data)
{
    let data = Data { ... };
    callback(&data)
}

// No sugar at all.
fn with<T>(callback: T)
    where T : for<'a> FnMut<(&'a Data,),()>
{
    let data = Data { ... };
    callback(&data)
}

And naturally the object form(s) work as well:

// The preferred notation, using `()`, again introduces
// implicit binders for omitted lifetimes:
fn with(callback: &mut FnMut(&Data))
{
    let data = Data { ... };
    callback(&data)
}

// Explicit names work too.
fn with(callback: &mut for<'a> FnMut(&'a Data))
{
    let data = Data { ... };
    callback(&data)
}

// The fully explicit notation requires an explicit `for`,
// as before, to declare the bound lifetimes.
fn with(callback: &mut for<'a> FnMut<(&'a Data,),()>)
{
    let data = Data { ... };
    callback(&data)
}

The syntax for fn types must be updated as well to use for.

Detailed design

For syntax

We modify the grammar for a trait reference to include

for<lifetimes> Trait<T1, ..., Tn>
for<lifetimes> Trait(T1, ..., tn) -> Tr

This syntax can be used in where clauses and types. The for syntax is not permitted in impls nor in qualified paths (<T as Trait>). In impls, the distinction between early and late-bound lifetimes is inferred. In qualified paths, which are used to select a member from an impl, no bound lifetimes are permitted.

Update syntax of fn types

The existing bare fn types will be updated to use the same for notation. Therefore, <'a> fn(&'a int) becomes for<'a> fn(&'a int).

Implicit binders when using parentheses notation and in fn types

When using the Trait(T1, ..., Tn) notation, implicit binders are introduced for omitted lifetimes. In other words, FnMut(&int) is effectively shorthand for for<'a> FnMut(&'a int), which is itself shorthand for for<'a> FnMut<(&'a int,),()>. No implicit binders are introduced when not using the parentheses notation (i.e., Trait<T1,...,Tn>). These binders interact with lifetime elision in the usual way, and hence FnMut(&Foo) -> &Bar is shorthand for for<'a> FnMut(&'a Foo) -> &'a Bar. The same is all true (and already true) for fn types.
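
Spelled out as a sketch (apply1 and apply2 are illustrative names), these two bounds are equivalent under this proposal:

fn apply1<T>(f: T) where T : FnMut(&Foo) -> &Bar { ... }
fn apply2<T>(f: T) where T : for<'a> FnMut(&'a Foo) -> &'a Bar { ... }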

Distinguishing early vs late bound lifetimes in impls

We will distinguish early vs late-bound lifetimes on impls in the same way as we do for fns. Background on this process can be found in these two blog posts [1, 2]. The basic idea is to distinguish early-bound lifetimes, which must be substituted immediately, from late-bound lifetimes, which can be made into a higher-ranked trait reference.

The rule is that any lifetime parameter 'x declared on an impl is considered early bound if 'x appears in any of the following locations:

  • the self type of the impl;
  • a where clause associated with the impl (here we assume that all bounds on impl parameters are desugared into where clauses).

All other lifetimes are considered late bound.

When we decide what kind of trait-reference is provided by an impl, late bound lifetimes are moved into a for clause attached to the reference. Here are some examples:

// Here 'late does not appear in any where clause nor in the self type,
// and hence it is late-bound. Thus this impl is considered to provide:
//
//     SomeType : for<'late> FnMut<(&'late Foo,),()>
impl<'late> FnMut(&'late Foo) -> Bar for SomeType { ... }

// Here 'early appears in the self type and hence it is early bound.
// This impl thus provides:
//
//     SomeOtherType<'early> : FnMut<(&'early Foo,),()>
impl<'early> FnMut(&'early Foo) -> Bar for SomeOtherType<'early> { ... }

This means that if there were a consumer that required a type which implemented FnMut(&Foo), only SomeType could be used, not SomeOtherType:

fn foo<T>(t: T) where T : FnMut(&Foo) { ... }

foo::<SomeType>(...) // ok
foo::<SomeOtherType<'static>>(...) // not ok

Instantiating late-bound lifetimes in a trait reference

Whenever an associated item from a trait reference is accessed, all late-bound lifetimes are instantiated. Basically, this means whenever a method is called, and so forth. Here are some examples:

fn foo<'b,T:for<'a> FnMut(&'a &'b Foo)>(t: T) {
    t(...); // here, 'a is freshly instantiated
    t(...); // here, 'a is freshly instantiated again
}

Other times when a late-bound lifetime would be instantiated:

  • Accessing an associated constant, once those are implemented.
  • Accessing an associated type.

Another way to state these rules is that bound lifetimes are not permitted in the traits found in qualified paths – and things like method calls and accesses to associated items can all be desugared into calls via qualified paths. For example, the call t(...) above is equivalent to:

fn foo<'b,T:for<'a> FnMut(&'a &'b Foo)>(t: T) {
    // Here, per the usual rules, the omitted lifetime on the outer
    // reference will be instantiated with a fresh variable.
    <T as FnMut<(&&'b Foo,),()>>::call_mut(&mut t, ...);
    <T as FnMut<(&&'b Foo,),()>>::call_mut(&mut t, ...);
}

Subtyping of trait references

The subtyping rules for trait references that involve higher-ranked lifetimes will be defined in an analogous way to the current subtyping rules for closures. The high-level idea is to replace each higher-ranked lifetime with a skolemized variable, perform the usual subtyping checks, and then check whether those skolemized variables would be unified with anything else. The interested reader is referred to Simon Peyton Jones’ rather thorough but quite readable paper on the topic, or to the documentation in src/librustc/middle/typeck/infer/region_inference/doc.rs.

The most important point is that the rules provide for subtyping that goes from “more general” to “less general”. For example, if I have a trait reference like for<'a> FnMut(&'a int), that would be usable wherever a trait reference with a concrete lifetime, like FnMut(&'static int), is expected.
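
In code (illustrative function names):

fn expects_static<T>(t: T) where T : FnMut(&'static int) { ... }

fn demo<T>(t: T) where T : for<'a> FnMut(&'a int) {
    expects_static(t)   // ok: the higher-ranked bound implies the concrete one
}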

Drawbacks

This feature is needed. There isn’t really any particular drawback beyond language complexity.

Alternatives

Drop the keyword. The for keyword is used due to potential ambiguities surrounding UFCS notation. Under UFCS, it is legal to write e.g. <T>::Foo::Bar in a type context. This is awfully close to something like <'a> ::std::FnMut. Currently, the parser could probably use the lifetime distinction to know the difference, but future extensions (see next paragraph) could allow types to be used as well, and it is still possible we will opt to “drop the tick” in lifetimes. Moreover, the syntax <'a> FnMut(&'a uint) is not exactly beautiful to begin with.

Permit higher-ranked traits with type variables. This RFC limits “higher-rankedness” to lifetimes. It is plausible to extend the system in the future to permit types as well, though only in where clauses and not in types. For example, one might write:

fn foo<IDENTITY>(t: IDENTITY) where IDENTITY : for<U> FnMut(U) -> U { ... }

Unresolved questions

None. Implementation is underway though not complete.

  • Start Date: 2014-07-16
  • RFC PR #: https://github.com/rust-lang/rfcs/pull/390
  • Rust Issue #: https://github.com/rust-lang/rust/issues/18478

Summary

The variants of an enum are currently defined in the same namespace as the enum itself. This RFC proposes to define variants under the enum’s namespace.

Note

In the rest of this RFC, flat enums will be used to refer to the current enum behavior, and namespaced enums will be used to refer to the proposed enum behavior.

Motivation

Simply put, flat enums are the wrong behavior. They’re inconsistent with the rest of the language and harder to work with.

Practicality

Some people prefer flat enums while others prefer namespaced enums. It is trivial to emulate flat enums with namespaced enums:

pub use MyEnum::*;

pub enum MyEnum {
    Foo,
    Bar,
}

On the other hand, it is impossible to emulate namespaced enums with the current enum system. It would have to look something like this:

pub enum MyEnum {
    Foo,
    Bar,
}

pub mod MyEnum {
    pub use super::{Foo, Bar};
}

However, it is now forbidden to have a type and module with the same name in the same namespace. This workaround was one of the rationales for the rejection of the enum mod proposal previously.

Many of the variants in Rust code today are already effectively namespaced, by manual name mangling. As an extreme example, consider the enums in syntax::ast:

pub enum Item_ {
    ItemStatic(...),
    ItemFn(...),
    ItemMod(...),
    ItemForeignMod(...),
    ...
}

pub enum Expr_ {
    ExprBox(...),
    ExprVec(...),
    ExprCall(...),
    ...
}

...

These long names are unavoidable as all variants of the 47 enums in the ast module are forced into the same namespace.

Going without name mangling is a risky move. Sometimes variants have to be inconsistently mangled, as in the case of IoErrorKind. All variants are un-mangled (e.g, EndOfFile or ConnectionRefused) except for one, OtherIoError. Presumably, Other would be too confusing in isolation. One also runs the risk of running into collisions as the library grows.

Consistency

Flat enums are inconsistent with the rest of the language. Consider the set of items. Some don’t have their own names, such as extern {} blocks, so items declared inside of them have no place to go but the enclosing namespace. Some items do not declare any “sub-names”, like struct definitions or statics. Consider all other items, and how sub-names are accessed:

mod foo {
    fn bar() {}
}

foo::bar()
trait Foo {
    type T;

    fn bar();
}

Foo::T
Foo::bar()
impl Foo {
    fn bar() {}
    fn baz(&self) {}
}

Foo::bar()
Foo::baz(a_foo) // with UFCS
enum Foo {
    Bar,
}

Bar // ??

Enums are the odd one out.

Current Rustdoc output reflects this inconsistency. Pages in Rustdoc map to namespaces. The documentation page for a module contains all names defined in its namespace - structs, typedefs, free functions, reexports, statics, enums, but not variants. Those are placed on the enum’s own page, next to the enum’s inherent methods, which are placed in the enum’s namespace. In addition, search results incorrectly display a path for variant results that contains the enum itself, such as std::option::Option::None. These issues can of course be fixed, but that will require adding more special cases to work around the inconsistent behavior of enums.

Usability

This inconsistency makes it harder to work with enums compared to other items.

There are two competing forces affecting the design of libraries. On one hand, the author wants to limit the size of individual files by breaking the crate up into multiple modules. On the other hand, the author does not necessarily want to expose that module structure to consumers of the library, as overly deep namespace hierarchies are hard to work with. A common solution is to use private modules with public reexports:

// lib.rs
pub use inner_stuff::{MyType, MyTrait};

mod inner_stuff;

// a lot of code
// inner_stuff.rs
pub struct MyType { ... }

pub trait MyTrait { ... }

// a lot of code

This strategy does not work for flat enums in general. It is not all that uncommon for an enum to have many variants - for example, take rust-postgres’s SqlState enum, which contains 232 variants. It would be ridiculous to pub use all of them! With namespaced enums, this kind of reexport becomes a simple pub use of the enum itself.
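
A sketch of what that one-line reexport might look like (module layout assumed):

// lib.rs
pub use inner_stuff::SqlState;   // a single `pub use`; all 232 variants stay
                                 // reachable as SqlState::*
mod inner_stuff;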

Sometimes a developer wants to use many variants of an enum in an “unqualified” manner, without qualification by the containing module (with flat enums) or enum (with namespaced enums). This is especially common for private, internal enums within a crate. With flat enums, this is trivial within the module in which the enum is defined, but very painful anywhere else, as it requires each variant to be used individually, which can get extremely verbose. For example, take this from rust-postgres:

use message::{AuthenticationCleartextPassword,
              AuthenticationGSS,
              AuthenticationKerberosV5,
              AuthenticationMD5Password,
              AuthenticationOk,
              AuthenticationSCMCredential,
              AuthenticationSSPI,
              BackendKeyData,
              BackendMessage,
              BindComplete,
              CommandComplete,
              CopyInResponse,
              DataRow,
              EmptyQueryResponse,
              ErrorResponse,
              NoData,
              NoticeResponse,
              NotificationResponse,
              ParameterDescription,
              ParameterStatus,
              ParseComplete,
              PortalSuspended,
              ReadyForQuery,
              RowDescription,
              RowDescriptionEntry};
use message::{Bind,
              CancelRequest,
              Close,
              CopyData,
              CopyDone,
              CopyFail,
              Describe,
              Execute,
              FrontendMessage,
              Parse,
              PasswordMessage,
              Query,
              StartupMessage,
              Sync,
              Terminate};
use message::{WriteMessage, ReadMessage};

A glob import can’t be used because it would pull in other, unwanted names from the message module. With namespaced enums, this becomes far simpler:

use messages::BackendMessage::*;
use messages::FrontendMessage::*;
use messages::{FrontendMessage, BackendMessage, WriteMessage, ReadMessage};

Detailed design

The compiler’s resolve stage will be altered to place the value and type definitions for variants in their enum’s module, just as methods of inherent impls are. Variants will be handled differently than those methods are, however. Methods cannot currently be directly imported via use, while variants will be. The determination of importability currently happens at the module level. This logic will be adjusted to move that determination to the definition level. Specifically, each definition will track its “importability”, just as it currently tracks its “publicness”. All definitions will be importable except for methods in implementations and trait declarations.
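
Concretely, after this change (Shape is an illustrative enum, at the crate root as in the reexport examples below):

pub use Shape::Circle;   // variants become importable via `use`

pub enum Shape {
    Circle,
    Square,
}

fn demo() -> Shape {
    Shape::Square        // ...and remain reachable through the enum's path
}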

The implementation will happen in two stages. In the first stage, resolve will be altered as described above. However, variants will be defined in both the flat namespace and the nested namespace. This is necessary to keep the compiler bootstrapping.

After a new stage 0 snapshot, the standard library will be ported and resolve will be updated to remove variant definitions in the flat namespace. This will happen as one atomic PR to keep the implementation phase as compressed as possible. In addition, if unforeseen problems arise during this set of work, we can roll back the initial commit and put the change off until after 1.0, with only a small pre-1.0 change required. This initial conversion will focus on making the minimal set of changes required to port the compiler and standard libraries by reexporting variants in the old location. Later work can alter the APIs to take advantage of the new definition locations.

Library changes

Library authors can use reexports to take advantage of enum namespacing without causing too much downstream breakage:

pub enum Item {
    ItemStruct(Foo),
    ItemStatic(Bar),
}

can be transformed to

pub use Item::Struct as ItemStruct;
pub use Item::Static as ItemStatic;

pub enum Item {
    Struct(Foo),
    Static(Bar),
}

To simply keep existing code compiling, a glob reexport will suffice:

pub use Item::*;

pub enum Item {
    ItemStruct(Foo),
    ItemStatic(Bar),
}

Once RFC #385 is implemented, it will be possible to write a syntax extension that will automatically generate the reexport:

#[flatten]
pub enum Item {
    ItemStruct(Foo),
    ItemStatic(Bar),
}

Drawbacks

The transition period will cause enormous breakage in downstream code. It is also a fairly large change to make to resolve, which is already a bit fragile.

Alternatives

We can implement enum namespacing after 1.0 by adding a “fallback” case to resolve, where variants can be referenced from their “flat” definition location if no other definition would conflict in that namespace. In the grand scheme of hacks to preserve backwards compatibility, this is not that bad, but still decidedly worse than not having to worry about fallback at all.

Earlier iterations of namespaced enum proposals suggested preserving flat enums and adding enum mod syntax for namespaced enums. However, variant namespacing isn’t a large enough difference for a second way to define enums to hold its own weight as a language feature. In addition, it would simply cause confusion, as library authors need to decide which one they want to use, and library consumers need to double check which place they can import variants from.

Unresolved questions

A recent change placed enum variants in the type as well as the value namespace to allow for future language expansion. This broke some code that looked like this:

pub enum MyEnum {
    Foo(Foo),
    Bar(Bar),
}

pub struct Foo { ... }
pub struct Bar { ... }

Is it possible to make such a declaration legal in a world with namespaced enums? The variants Foo and Bar would no longer clash with the structs Foo and Bar, from the perspective of a consumer of this API, but the variant declarations Foo(Foo) and Bar(Bar) are ambiguous, since the Foo and Bar structs will be in scope inside of the MyEnum declaration.

  • Start Date: 2014-10-30
  • RFC PR #: https://github.com/rust-lang/rfcs/pull/401
  • Rust Issue #: https://github.com/rust-lang/rust/issues/18469

Summary

Describe the various kinds of type conversions available in Rust and suggest some tweaks.

Provide a mechanism for smart pointers to be part of the DST coercion system.

Reform coercions from functions to closures.

The transmute intrinsic and other unsafe methods of type conversion are not covered by this RFC.

Motivation

It is often useful to convert a value from one type to another. This conversion might be implicit or explicit and may or may not involve some runtime action. Such conversions are useful for improving reuse of code, and avoiding unsafe transmutes.

Our current rules around type conversions are not well-described. The different conversion mechanisms interact poorly and the implementation is somewhat ad-hoc.

Detailed design

Rust has several kinds of type conversion: subtyping, coercion, and casting. Subtyping and coercion are implicit and have no syntax. Casting is explicit, using the as keyword. The syntax for a cast expression is:

e_cast ::= e as U

Where e is any valid expression and U is any valid type (note that type checking restricts which types are valid for U).

These conversions (and type equality) form a total order in terms of their strength. For any types T and U, if T == U then T is also a subtype of U. If T is a subtype of U, then T coerces to U, and if T coerces to U, then T can be cast to U.

There is an additional kind of coercion which does not fit into that total order:

  • implicit coercions of receiver expressions. (I will use ‘expression coercion’ when I need to distinguish coercions in non-receiver position from coercions of receivers). All expression coercions are valid receiver coercions, but not all receiver coercions are valid casts.

Finally, I will discuss function polymorphism, which is something of a coercion edge case.

Subtyping

Subtyping is implicit and can occur at any stage in type checking or inference. Subtyping in Rust is very restricted and occurs only due to variance with respect to lifetimes and between types with higher ranked lifetimes. If we were to erase lifetimes from types, then the only subtyping would be due to type equality.
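
As a small illustration (not from the RFC, and using today's syntax), lifetime-induced subtyping lets a longer-lived reference stand in for a shorter-lived one:

fn shorten<'a>(s: &'static str) -> &'a str {
    // `&'static str` is a subtype of `&'a str` for any 'a,
    // so this is accepted with no coercion or cast involved.
    s
}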

Coercions

A coercion is implicit and has no syntax. A coercion can only occur at certain coercion sites in a program, these are typically places where the desired type is explicit or can be derived by propagation from explicit types (without type inference). The base cases are:

  • In let statements where an explicit type is given: in let _: U = e;, e is coerced to have type U

  • In statics and consts, similarly to let statements

  • In argument position for function calls. The value being coerced is the actual parameter and it is coerced to the type of the formal parameter. For example, where foo is defined as fn foo(x: U) { ... } and is called with foo(e);, e is coerced to have type U

  • Where a field of a struct or variant is instantiated. E.g., where struct Foo { x: U } and the instantiation is Foo { x: e }, e is coerced to have type U

  • The result of a function, either the final line of a block if it is not semicolon-terminated or any expression in a return statement. For example, for fn foo() -> U { e }, e is coerced to have type U

If the expression in one of these coercion sites is a coercion-propagating expression, then the relevant sub-expressions in that expression are also coercion sites. Propagation recurses from these new coercion sites. Propagating expressions and their relevant sub-expressions are:

  • Array literals, where the array has type [U, ..n], each sub-expression in the array literal is a coercion site for coercion to type U

  • Array literals with repeating syntax, where the array has type [U, ..n], the repeated sub-expression is a coercion site for coercion to type U

  • Tuples, where a tuple is a coercion site to type (U_0, U_1, ..., U_n), each sub-expression is a coercion site for the respective type, e.g., the zero-th sub-expression is a coercion site to U_0

  • The box expression, if the expression has type Box<U>, the sub-expression is a coercion site to U (I expect this to be generalised when box expressions are)

  • Parenthesised sub-expressions ((e)), if the expression has type U, then the sub-expression is a coercion site to U

  • Blocks, if a block has type U, then the last expression in the block (if it is not semicolon-terminated) is a coercion site to U. This includes blocks which are part of control flow statements, such as if/else, if the block has a known type.
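
To make these sites concrete, here is a small example (an illustration only, in today's syntax); each commented line is a coercion site or propagating expression described above:

fn takes_slice(_: &[i32]) {}

fn main() {
    let _a: &[i32] = &[1, 2, 3];            // `let` with an explicit type
    takes_slice(&[4, 5]);                   // argument position
    let _b: [&[i32]; 2] = [&[6], &[7, 8]];  // array literal propagates to elements
    let _c: &[i32] = { &[9] };              // last expression of a block
}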

Note that we do not perform coercions when matching traits (except for receivers, see below). If there is an impl for some type U, and T coerces to U, that does not constitute an implementation for T. For example, the following will not type check, even though it is OK to coerce t to &T and there is an impl for &T:

struct T;
trait Trait {}

fn foo<X: Trait>(t: X) {}

impl<'a> Trait for &'a T {}


fn main() {
    let t: &mut T = &mut T;
    foo(t); //~ ERROR failed to find an implementation of trait Trait for &mut T
}

In a cast expression, e as U, the compiler will first attempt to coerce e to U, and only if that fails will the conversion rules for casts (see below) be applied.

Coercion is allowed between the following types:

  • T to U if T is a subtype of U (the ‘identity’ case)

  • T_1 to T_3 where T_1 coerces to T_2 and T_2 coerces to T_3 (transitivity case)

  • &mut T to &T

  • *mut T to *const T

  • &T to *const T

  • &mut T to *mut T

  • T to fn if T is a closure that does not capture any local variables in its environment.

  • T to U if T implements CoerceUnsized<U> (see below) and T = Foo<...> and U = Foo<...> (for any Foo, when we get HKT I expect this could be a constraint on the CoerceUnsized trait, rather than being checked here)

  • From TyCtor(T) to TyCtor(coerce_inner(T)) (these coercions could be provided by implementing CoerceUnsized for all instances of TyCtor) where TyCtor(T) is one of &T, &mut T, *const T, *mut T, or Box<T>.

And where coerce_inner is defined as:

  • coerce_inner([T, ..n]) = [T];

  • coerce_inner(T) = U where T is a concrete type which implements the trait U;

  • coerce_inner(T) = U where T is a sub-trait of U;

  • coerce_inner(Foo<..., T, ...>) = Foo<..., coerce_inner(T), ...> where Foo is a struct and only the last field has type T and T is not part of the type of any other fields;

  • coerce_inner((..., T)) = (..., coerce_inner(T)).

Note that coercing from a sub-trait to a super-trait is a new coercion and is non-trivial. One implementation strategy which avoids re-computation of vtables is given in RFC PR #250.
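
For instance, the first two cases of coerce_inner correspond to the familiar unsizing coercions (an illustration only, written in today's syntax rather than the RFC's):

use std::fmt::Debug;

fn main() {
    let _xs: &[i32] = &[1, 2, 3];          // &[T; n] to &[T]
    let _d: Box<dyn Debug> = Box::new(5);  // Box<T> to Box<dyn Trait>
}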

A note for the future: although there hasn’t been an RFC nor much discussion, it is likely that post-1.0 we will add type ascription to the language (see #354). That will (probably) allow any expression to be annotated with a type (e.g., foo(a, b: T, c), a function call where the second argument has a type annotation).

Type ascription is purely descriptive and does not cast the sub-expression to the required type. However, it seems sensible that type ascription would be a coercion site, and thus type ascription would be a way to make implicit coercions explicit. There is a danger that such coercions would be confused with casts. I hope the rule that casting should change the type and type ascription should not is enough of a discriminant. Perhaps we will need a style guideline to encourage either casts or type ascription to force an implicit coercion. Perhaps type ascription should not be a coercion site. Or perhaps we don’t need type ascription at all if we allow trivial casts.

Custom unsizing coercions

It should be possible to coerce smart pointers (e.g., Rc) in the same way as the built-in pointers. In order to do so, we provide two traits and an intrinsic to allow users to make their smart pointers work with the compiler’s coercions. It might be possible to implement some of the coercions described for built-in pointers using this machinery, and whether that is a good idea or not is an implementation detail.

// Cannot be impl'ed - it really is quite a magical trait, see the cases below.
trait Unsize<Sized? U> for Sized? {}

The Unsize trait is a marker trait and a lang item. It should not be implemented by users and user implementations will be ignored. The compiler will assume the following implementations, these correspond to the definition of coerce_inner, above; note that these cannot be expressed in real Rust:

impl<T, n: int> Unsize<[T]> for [T, ..n] {}

// Where T is a trait
impl<Sized? T, U: T> Unsize<T> for U {}

// Where T and U are traits
impl<Sized? T, Sized? U: T> Unsize<T> for U {}

// Where T and U are structs ... following the rules for coerce_inner
impl Unsize<T> for U {}

impl Unsize<(..., T)> for (..., U)
    where U: Unsize<T> {}

The CoerceUnsized trait should be implemented by smart pointers and containers which want to be part of the coercions system.

trait CoerceUnsized<U> {
    fn coerce(self) -> U;
}

To help implement CoerceUnsized, we provide an intrinsic - fat_pointer_convert. This takes and returns raw pointers. The common case will be to take a thin pointer, unsize the contents, and return a fat pointer. But the exact behaviour depends on the types involved. This will perform any computation associated with a coercion (for example, adjusting or creating vtables). The implementation of fat_pointer_convert will match what the compiler must do in coerce_inner as described above.

intrinsic fn fat_pointer_convert<Sized? T, Sized? U>(t: *const T) -> *const U
    where T : Unsize<U>;

Here is an example implementation of CoerceUnsized for Rc:

impl<Sized? T, Sized? U> CoerceUnsized<Rc<T>> for Rc<U>
    where U: Unsize<T>
{
    fn coerce(self) -> Rc<T> {
        let new_ptr: *const RcBox<T> = fat_pointer_convert(self._ptr);
        Rc { _ptr: new_ptr }
    }
}

Coercions of receiver expressions

These coercions occur when matching the type of the receiver of a method call with the self type (i.e., the type of e in e.m(...)) or in field access. These coercions can be thought of as a feature of the . operator, they do not apply when using the UFCS form with the self argument in argument position. Only an expression before the dot is coerced as a receiver. When using the UFCS form of method call, arguments are only coerced according to the expression coercion rules. This matches the rules for dispatch - dynamic dispatch only happens using the . operator, not the UFCS form.

In method calls the target type of the coercion is the concrete type of the impl in which the method is defined, modified by the type of self. Assuming the impl is for T, the target type is given by:

self               target type
self               T
&self              &T
&mut self          &mut T
self: Box<Self>    Box<T>

and likewise with any variations of the self type we might add in the future.

For field access, the target type is &T, &mut T for field assignment, where T is a struct with the named field.

A receiver coercion consists of some number of dereferences (either compiler built-in (of a borrowed reference or Box pointer, not raw pointers) or custom, given by the Deref trait), one or zero applications of coerce_inner or use of the CoerceUnsized trait (as defined above; note that this requires we are at a type which has neither references nor dereferences at the top level), and up to two address-of operations (i.e., T to &T, &mut T, *const T, or *mut T, with a fresh lifetime). The usual mutability rules for taking a reference apply. (Note that the implementation of the coercion isn’t so simple; it is embedded in the search for candidate methods, but from the point of view of type conversions, that is not relevant.)

Alternatively, a receiver coercion may be thought of as a two stage process. First, we dereference and then take the address until the source type has the same shape (i.e., has the same kind and number of indirection) as the target type. Then we try to coerce the adjusted source type to the target type using the usual coercion machinery. I believe, but have not proved, that these two descriptions are equivalent.
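
A sketch (not from the RFC) of a receiver coercion in action; the receiver below is dereferenced twice and then referenced once to match the &self method:

struct Point { x: i32 }

impl Point {
    fn get_x(&self) -> i32 { self.x }
}

fn main() {
    let p = Box::new(Box::new(Point { x: 1 }));
    // Equivalent to (&**p).get_x(): two dereferences, one address-of.
    let _ = p.get_x();
}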

Casts

Casting is indicated by the as keyword. A cast e as U is valid if one of the following holds:

  • e has type T and T coerces to U; coercion-cast
  • e has type *T, U is *U_0, and either U_0: Sized or unsize_kind(T) = unsize_kind(U_0); ptr-ptr-cast
  • e has type *T and U is a numeric type, while T: Sized; ptr-addr-cast
  • e is an integer and U is *U_0, while U_0: Sized; addr-ptr-cast
  • e has type T and T and U are any numeric types; numeric-cast
  • e is a C-like enum and U is an integer type; enum-cast
  • e has type bool or char and U is an integer; prim-int-cast
  • e has type u8 and U is char; u8-char-cast
  • e has type &[T; n] and U is *const T; array-ptr-cast
  • e is a function pointer type and U has type *T, while T: Sized; fptr-ptr-cast
  • e is a function pointer type and U is an integer; fptr-addr-cast

where &.T and *T are references of either mutability, and where unsize_kind(T) is the kind of the unsize info in T - the vtable for a trait definition (e.g. fmt::Display or Iterator, not Iterator<Item=u8>) or a length (or () if T: Sized).

Note that lengths are not adjusted when casting raw slices: for example, *const [u16] as *const [u8] creates a slice that only includes half of the original memory.

Casting is not transitive, that is, even if e as U1 as U2 is a valid expression, e as U2 is not necessarily so (in fact it will only be valid if U1 coerces to U2).

A cast may require a runtime conversion.

There will be a lint for trivial casts. A trivial cast is a cast e as T where e has type U and U is a subtype of T. The lint will warn by default.
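
For concreteness, here are several of the cast kinds above written in today's syntax (an illustration only):

fn main() {
    let _n = 300i32 as u8;           // numeric-cast (truncates to 44)
    let _c = 65u8 as char;           // u8-char-cast ('A')
    let arr: &[i32; 3] = &[1, 2, 3];
    let p = arr as *const i32;       // array-ptr-cast
    let addr = p as usize;           // ptr-addr-cast
    let _q = addr as *const i32;     // addr-ptr-cast
    let _b = true as i32;            // prim-int-cast
}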

Function type polymorphism

Currently, functions may be used where a closure is expected by coercing a function to a closure. We will remove this coercion and instead use the following scheme:

  • Every function item has its own fresh type. This type cannot be written by the programmer (i.e., it is expressible but not denotable).
  • Conceptually, for each fresh function type, there is an automatically generated implementation of the Fn, FnMut, and FnOnce traits.
  • All function types are implicitly coercible to a fn() type with the corresponding parameter types.
  • Conceptually, there is an implementation of Fn, FnMut, and FnOnce for every fn() type.
  • Fn, FnMut, or FnOnce trait objects and references to type parameters bounded by these traits may be considered to have the corresponding unboxed closure type. This is a desugaring (alias), rather than a coercion. This is an existing part of the unboxed closures work.

These steps should allow functions to be stored in variables of both closure and function type. They also allow variables with function type to be stored as variables with closure type. Note that these have different dynamic semantics, as described below. For example,

fn foo() { ... }         // `foo` has a fresh and non-denotable type.

fn main() {
    let x: fn() = foo;   // `foo` is coerced to `fn()`.
    let y: || = x;       // `x` is coerced to `&Fn` (a closure object),
                         // legal due to the `fn()` auto-impls.

    let z: || = foo;     // `foo` is coerced to `&T` where `T` is fresh and
                         // bounded by `Fn`. Legal due to the fresh function
                         // type auto-impls.
}

The two kinds of auto-generated impls are rather different: the first case (for the fresh and non-denotable function types) is a static call to Fn::call, which in turn calls the function with the given arguments. This call could be inlined (in fact, the impls and calls to them may be special-cased by the compiler). In the second case (for fn() types), we must execute a virtual call to find the implementing method and then call the function itself, because the function is ‘wrapped’ in a closure object.

Changes required

  • Add cast from unsized slices to raw pointers (&[V] to *V);

  • allow coercions as casts and add lint for trivial casts;

  • ensure we support all coercion sites;

  • remove [T, ..n] to &[T]/*[T] coercions;

  • add raw pointer coercions;

  • add sub-trait coercions;

  • add unsized tuple coercions;

  • add all transitive coercions;

  • receiver coercions - add referencing to raw pointers, remove triple referencing for slices;

  • remove function coercions, add function type polymorphism;

  • add DST/custom coercions.

Drawbacks

We are adding and removing some coercions. There is always a trade-off with implicit coercions on making Rust ergonomic vs making it hard to comprehend due to magical conversions. By changing this balance we might be making some things worse.

Alternatives

These rules could be tweaked in any number of ways.

Specifically for the DST custom coercions, the compiler could throw an error if it finds a user-supplied implementation of the Unsize trait, rather than silently ignoring them.

Amendments

  • Updated by #1558, which allows coercions from a non-capturing closure to a function pointer.
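
In today's Rust, that amendment permits, for example:

fn main() {
    // A closure that captures nothing coerces to a function pointer.
    let f: fn(i32) -> i32 = |x| x + 1;
    assert_eq!(f(1), 2);
}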

Unresolved questions

Summary

Overhaul the build command internally and establish a number of conventions around build commands to facilitate linking native code to Cargo packages.

  1. Instead of having the build command be some form of script, it will be a Rust command instead
  2. Establish a namespace of foo-sys packages which represent the native library foo. These packages will have Cargo-based dependencies between *-sys packages to express dependencies among C packages themselves.
  3. Establish a set of standard environment variables for build commands which will instruct how foo-sys packages should be built in terms of dynamic or static linkage, as well as providing the ability to override where a package comes from via environment variables.

Motivation

Building native code is normally quite a tricky business, and the original design of Cargo was to essentially punt on this problem. Today’s “solution” involves invoking an arbitrary build command in a sort of pseudo-shell with a number of predefined environment variables. This ad-hoc solution was known to be lacking at the time of implementing with the intention of identifying major pain points over time and revisiting the design once we had more information.

While today’s “hands off approach” certainly has a number of drawbacks, one of the upsides is that Cargo minimizes the amount of logic inside it as much as possible. This proposal attempts to stress this point as much as possible by providing a strong foundation on which to build robust build scripts, but not baking all of the logic into Cargo itself.

The time has now come to revisit the design, and some of the largest pain points that have been identified are:

  1. Packages need the ability to build differently on different platforms.
  2. Projects should be able to control dynamic vs static at the top level. Note that the term “project” here means “top level package”.
  3. It should be possible to use libraries of build tool functionality. Cargo is indeed a package manager after all, and currently there is no way to share a common set of build tool functionality among different Cargo packages.
  4. There is very little flexibility in locating packages, be it on the system, in a build directory, or in a home build dir.
  5. There is no way for two Rust packages to declare that they depend on the same native dependency.
  6. There is no way for C libraries to express their dependence on other C libraries.
  7. There is no way to encode a platform-specific dependency.

Each of these concerns can be addressed somewhat ad-hocly with a vanilla build command, but Cargo can certainly provide a more comprehensive solution to these problems.

Most of these concerns are fairly self-explanatory, but specifically (2) may require a bit more explanation:

Selecting linkage from the top level

Conceptually speaking, a native library is largely just a collection of symbols. The linkage involved in creating a final product is an implementation detail that is almost always irrelevant with respect to the symbols themselves.

When it comes to linking a native library, there are often a number of overlapping and sometimes competing concerns:

  1. Most unix-like distributions with package managers highly recommend dynamic linking of all dependencies. This reduces the overall size of an installation and allows dependencies to be updated without updating the original application.
  2. Those who distribute binaries of an application to many platforms prefer static linking as much as possible. This is largely done because the actual set of libraries on the platforms being installed on are often unknown and could be quite different than those linked to. Statically linking solves these problems by reducing the number of dependencies for an application.
  3. General developers of a package simply want a package to build at all costs. It’s ok to take a little bit longer to build, but if it takes hours of googling obscure errors to figure out you needed to install libfoo it’s probably not ok.
  4. Some native libraries have obscure linkage requirements. For example OpenSSL on OSX likely wants to be linked dynamically due to the special keychain support, but on linux it’s more ok to statically link OpenSSL if necessary.

The key point here is that the author of a library is not the one who dictates how an application should be linked. The builder or packager of a library is the one responsible for determining how a package should be linked.

Today this is not quite how Cargo operates, depending on what flavor of syntax extension you may be using. One of the goals of this re-working is to enable top-level projects to make easier decisions about how to link to libraries, where to find linked libraries, etc.

Detailed design

Summary:

  • Add a -l flag to rustc
  • Tweak the include! macro in rustc
  • Add a links key to Cargo manifests
  • Add platform-specific dependencies to Cargo manifests
  • Allow pre-built libraries in the same manner as Cargo overrides
  • Use Rust for build scripts
  • Develop a convention of *-sys packages

Modifications to rustc

A new flag will be added to rustc:

    -l LIBRARY          Link the generated crate(s) to the specified native
                        library LIBRARY. The name `LIBRARY` will have the format
                        `kind:name` where `kind` is one of: dylib, static,
                        framework. This corresponds to the `kind` key of the
                        `#[link]` attribute. The `name` specified is the name of
                        the native library to link. The `kind:` prefix may be
                        omitted and the `dylib` format will be assumed.
For example:

rustc -l dylib:ssl -l static:z foo.rs

Native libraries often have widely varying dependencies depending on what platforms they are compiled on. Oftentimes these dependencies aren’t even constant on a single platform! The reality we sadly have to face is that the dependencies of a native library itself are sometimes unknown until build time, at which point it’s too late to modify the source code of the program to link to a library.

For this reason, the rustc CLI will grow the ability to link to arbitrary libraries at build time. This is motivated by the build scripts which Cargo is growing, but it is likely useful for custom Rust compiles at large.

Note that this RFC does not propose style guidelines nor suggestions for usage of -l vs #[link]. For Cargo it will later recommend discouraging use of #[link], but this is not generally applicable to all Rust code in existence.

Declaration of native library dependencies

Today Cargo has very little knowledge about what dependencies are being used by a package. By knowing the exact set of dependencies, Cargo paves a way into the future to extend its handling of native dependencies, for example downloading precompiled libraries. This extension allows Cargo to better handle constraint 5 above.

[package]

# This package unconditionally links to this list of native libraries
links = ["foo", "bar"]

The key links declares that the package will link to and provide the given C libraries. Cargo will impose the restriction that the same C library must not appear more than once in a dependency graph. This will prevent the same C library from being linked multiple times to packages.

If conflicts arise from having multiple packages in a dependency graph linking to the same C library, the C dependency should be refactored into a common Cargo-packaged dependency.

It is illegal to define links without also defining build.

Platform-specific dependencies

A number of native dependencies have various dependencies depending on what platform they’re building for. For example, libcurl does not depend on OpenSSL on Windows, but it is a common dependency on unix-based systems. To this end, Cargo will gain support for platform-specific dependencies, solving constraint 7 above:


[target.i686-pc-windows-gnu.dependencies.crypt32]
git = "https://github.com/user/crypt32-rs"

[target.i686-pc-windows-gnu.dependencies.winhttp]
path = "winhttp"

Here the top-level configuration key target will be a table whose sub-keys are target triples. The dependencies section underneath is the same as the top-level dependencies section in terms of functionality.

Semantically, platform specific dependencies are activated whenever Cargo is compiling for the exact target. Dependencies in other $target sections will not be compiled.

However, when generating a lockfile, Cargo will always download all dependencies unconditionally and perform resolution as if all packages were included. This is done to prevent the lockfile from radically changing depending on whether the package was last built on Linux or Windows. This has the advantage of a stable lockfile, but has the drawback that all dependencies must be downloaded, even if they’re not used.

Pre-built libraries

A common pain point with constraints 1, 2, and cross compilation is that it’s occasionally difficult to compile a library for a particular platform. Other times it’s useful to have a local copy of a library which is linked against instead of built or otherwise detected (for debugging purposes, for example). To address these pain points, Cargo will support pre-built libraries on the system, in a manner similar to local package overrides.

Normal Cargo configuration will be used to specify where a library is and how it’s supposed to be linked against:

# Each target triple has a namespace under the global `target` key and the
# `libs` key is a table for each native library.
#
# Each library can specify a number of key/value pairs where the values must be
# strings. The key/value pairs are metadata which are passed through to any
# native build command which depends on this library. The `rustc-flags` key is
# specially recognized as a set of flags to pass to `rustc` in order to link to
# this library.
[target.i686-unknown-linux-gnu.ssl]
rustc-flags = "-l static:ssl -L /home/build/root32/lib"
root = "/home/build/root32"

This configuration will be placed in the normal locations that .cargo/config is found. The configuration will only be queried if the target triple being built matches what’s in the configuration.

Rust build scripts

First pioneered by @tomaka in https://github.com/rust-lang/cargo/issues/610, the build command will no longer be an actual command, but rather a build script itself. This decision is motivated in solving constraints 1 and 3 above. The major motivation for this recommendation is the realization that the only common denominator for platforms that Cargo runs on is the fact that a Rust compiler is available. The natural conclusion from this fact is for build scripts to be written in Rust itself.

Furthermore, Cargo itself already serves quite well as a dependency manager, so by using Rust as the build tool language, Cargo will be able to manage the dependencies of the build tool itself. This will allow third-party solutions for build tools to be developed outside of Cargo and shared throughout the ecosystem of packages.

The concrete design of this will be the build command in the manifest being a relative path to a file in the package:

[package]
# ...
build = "build/compile.rs"

This file will be considered the entry point of the “build script” and will be built as an executable. A new top-level dependencies array, build-dependencies, will be added to the manifest. These dependencies will all be available to the build script as external crates. Requiring that the build command have a separate set of dependencies solves a number of constraints:

  • When cross-compiling, the build tool as well as all of its dependencies are required to be built for the host architecture instead of the target architecture. A clear delineation will indicate precisely what dependencies need to be built for the host architecture.
  • Common packages, such as one to build cmake-based dependencies, can develop conventions around filesystem hierarchy formats to require minimum configuration to build extra code while being easily identified as having extra support code.

This RFC does not propose a convention of what to name the build script files.

Unlike links, it will be legal to specify build without specifying links. This is motivated by the code generation case study below.

Inputs

Cargo will provide a number of inputs to the build script to facilitate building native code for the current package:

  • The TARGET environment variable will contain the target triple that the native code needs to be built for. This will be passed unconditionally.
  • The NUM_JOBS environment variable will indicate the number of parallel jobs that the script itself should execute (if relevant).
  • The CARGO_MANIFEST_DIR environment variable will contain the directory of the manifest of the package being built. Note that this is not the directory of the package whose build command is being run.
  • The OPT_LEVEL environment variable will contain the requested optimization level of code being built. This will be in the range 0-2. Note that this variable is the same for all build commands.
  • The PROFILE environment variable will contain the currently active Cargo profile being built. Note that this variable is the same for all build commands.
  • The DEBUG environment variable will contain true or false depending on whether the current profile specified that it should be debugged or not. Note that this variable is the same for all build commands.
  • The OUT_DIR environment variable contains the location in which all output should be placed. This should be considered a scratch area for compilations of any bundled items.
  • The CARGO_FEATURE_<foo> environment variable will be present if the feature foo is enabled for the package being compiled.
  • The DEP_<foo>_<key> environment variables will contain metadata about the native dependencies for the current package. As the output section below will indicate, each compilation of a native library can generate a set of output metadata which will be passed through to dependencies. The only dependencies available (foo) will be those in links for immediate dependencies of the package being built. Note that each metadata key will be uppercased and - characters transformed to _ for the name of the environment variable.
  • If links is not present, then the command is unconditionally run with 0 command line arguments, otherwise:
  • The libraries that are requested via links are passed as command line arguments. The pre-built libraries in links (detailed above) will be filtered out and not passed to the build command. If there are no libraries to build (they’re all pre-built), the build command will not be invoked.

Outputs

The responsibility of the build script is to ensure that all requested native libraries are available for the crate to compile. The conceptual output of the build script will be metadata on stdout explaining how the compilation went and whether it succeeded.

An example output of a build command would be:

cargo:rustc-flags=-l static:foo -L /path/to/foo
cargo:root=/path/to/foo
cargo:libdir=/path/to/foo/lib
cargo:include=/path/to/foo/include

Each line that begins with cargo: is interpreted as a line of metadata for Cargo to store. The remaining part of the line is of the form key=value (like environment variables).

This output is similar to the pre-built libraries section above in that most key/value pairs are opaque metadata except for the special rustc-flags key. The rustc-flags key indicates to Cargo necessary flags needed to link the libraries specified.

For rustc-flags specifically, Cargo will propagate all -L flags transitively to all dependencies, and -l flags to the package being built. All metadata will only be passed to immediate dependants. Note that this means #[link] is discouraged, as it is not the source code’s responsibility to dictate linkage.

If the build script exits with a nonzero exit code, then Cargo will consider it to have failed and will abort compilation.
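
Putting the inputs and outputs together, a minimal build script might look like the sketch below (an illustration only: the paths are placeholders, and the env/println API shown is today's standard library rather than anything mandated by this RFC):

use std::env;

fn main() {
    // Inputs arrive as environment variables (TARGET, OUT_DIR, ...).
    let _target = env::var("TARGET").unwrap();
    let _out_dir = env::var("OUT_DIR").unwrap();

    // ... locate or compile the native library `foo` here ...

    // Outputs are `cargo:` key=value lines on stdout.
    println!("cargo:rustc-flags=-l static:foo -L /path/to/foo");
    // Opaque metadata, surfaced to dependants as DEP_FOO_ROOT and so on.
    println!("cargo:root=/path/to/foo");
    println!("cargo:include=/path/to/foo/include");
}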

Input/Output rationale

In general one of the purposes of a custom build command is to dynamically determine the necessary dependencies for a library. These dependencies may have been discovered through pkg-config, built locally, or even downloaded from a remote. This set can often change, and is the impetus for the rustc-flags metadata key. This key indicates what libraries should be linked (and how) along with where to find the libraries.

The remaining metadata flags are not as useful to rustc itself, but are quite useful to interdependencies among native packages themselves. For example libssh2 depends on OpenSSL on linux, which means it needs to find the corresponding libraries and header files. The metadata keys serve as a vector through which this information can be transmitted. The maintainer of the openssl-sys package (described below) would have a build script responsible for generating this sort of metadata so consumer packages can use it to build C libraries themselves.

A set of *-sys packages

This section will discuss a convention by which Cargo packages providing native dependencies will be named, it is not proposed to have Cargo enforce this convention via any means. These conventions are proposed to address constraints 5 and 6 above.

Common C dependencies will be refactored into a package named foo-sys where foo is the name of the C library that foo-sys will provide and link to. There are two key motivations behind this convention:

  • Each foo-sys package will declare its own dependencies on other foo-sys based packages
  • Dependencies on native libraries expressed through Cargo will be subject to version management, version locking, and deduplication as usual.

Each foo-sys package is responsible for providing the following:

  • Declarations of all symbols in a library. Essentially each foo-sys library is only a header file in terms of Rust-related code.
  • Ensuring that the native library foo is linked to the foo-sys crate. This guarantees that all exposed symbols are indeed linked into the crate.

Dependencies making use of *-sys packages will not expose extern blocks themselves, but rather use the symbols exposed in the foo-sys package directly. Additionally, packages using *-sys packages should not declare a #[link] directive to link to the native library as it’s already linked to the *-sys package.

Phasing strategy

The modifications to the build command are breaking changes to Cargo. To ease the transition, the build command will be join’d to the root path of a crate, and if the resulting file exists and ends with .rs, it will be compiled as described above. Otherwise a warning will be printed and the fallback behavior will be executed.

The purpose of this is to help most build scripts today continue to work (but not necessarily all), and pave the way forward to implement the newer integration.

Case study: Cargo

Cargo has a surprisingly complex set of C dependencies, and this proposal has created an example repository for what the configuration of Cargo would look like with respect to its set of C dependencies.

Case study: generated code

As the release of Rust 1.0 comes closer, the use of compiler plugins has become increasingly worrying over time. It is likely that plugins will not be available by default in the stable and beta release channels of Rust. Many core Cargo packages in the ecosystem today, such as gl-rs and iron, depend on plugins to build. Others, like rust-http, are already using compile-time code generation with a build script (which this RFC will attempt to standardize on).

When taking a closer look at these crates’ dependence on plugins it’s discovered that the primary use case is generating Rust code at compile time. For gl-rs, this is done to bind a platform-specific and evolving API, and for rust-http this is done to make code more readable and easier to understand. In general generating code at compile time is quite a useful ability for other applications such as bindgen (C bindings), dom bindings (used in Servo), etc.

Cargo’s and Rust’s support for compile-time generated code is quite lacking today, and overhauling the build command provides a nice opportunity to rethink this sort of functionality.

With this motivation, this RFC proposes tweaking the include! macro to enable it to be suitable for the purpose of including generated code:

include!(concat!(env!("OUT_DIR"), "/generated.rs"));

Today this does not compile as the argument to include! must be a string literal. This RFC proposes tweaking the semantics of the include! macro to expand locally before testing for a string literal. This is similar to the behavior of the format_args! macro today.

Using this, Cargo crates will have OUT_DIR present for compilations, and any generated Rust code can be generated by the build command and placed into OUT_DIR. The include! macro would then be used to include the contents of the code inside of the appropriate module.
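
A sketch of the generation side (an illustration only; the file name matches the include! example above, and the I/O API shown is today's standard library):

use std::env;
use std::fs::File;
use std::io::Write;
use std::path::Path;

fn main() {
    // OUT_DIR is provided by Cargo; write the generated module there.
    let out_dir = env::var("OUT_DIR").unwrap();
    let dest = Path::new(&out_dir).join("generated.rs");
    let mut f = File::create(&dest).unwrap();
    f.write_all(b"pub fn generated() -> u32 { 42 }\n").unwrap();
}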

Case study: controlling linkage

One of the motivations for this RFC and the redesign of the build command is to make linkage controls more explicit to Cargo itself, rather than hardcoding particular linkages in source code. As proposed, however, this RFC does not bake any sort of dynamic-vs-static knowledge into Cargo itself.

This design area is intentionally left untouched by Cargo in order to reduce the number of moving parts and also in an effort to simplify build commands as much as possible. There are, however, a number of methods to control how libraries are linked:

  1. First and foremost is the ability to override libraries via Cargo configuration. Overridden native libraries are specified manually and override whatever the “default” would have been otherwise.
  2. Delegation to arbitrary code running in build scripts allow the possibility of specification through other means such as environment variables.
  3. Usage of common third-party build tools will allow for conventions about selecting linkage to develop over time.

Note that points 2 and 3 are intentionally vague as this RFC does not have a specific recommendation for how scripts or tooling should respect linkage. By relying on a common set of dependencies to find native libraries it is envisioned that the tools will grow a convention through which a linkage preference can be specified.

For example, consider a possible implementation of pkg-config. This tool can be used as a first line of defense to help locate a library on the system as well as its dependencies. If a crate requests that pkg-config find the library foo, then the pkg-config crate could inspect some environment variables for how it operates:

  • If FOO_NO_PKG_CONFIG is set, then pkg-config immediately returns an error. This helps users who want to force pkg-config to not find a package or force the package to build a statically linked fallback.
  • If FOO_DYNAMIC is set, then pkg-config will only succeed if it finds a dynamic version of foo. A similar meaning could be applied to FOO_STATIC.
  • If PKG_CONFIG_ALL_DYNAMIC is set, then it will act as if the package foo is specifically requested to be dynamic (similarly for static linking).

Note that this is not a concrete design, this is just meant to be an example to show how a common third-party tool can develop a convention for controlling linkage not through Cargo itself.

Also note that this can mean that cargo itself may not succeed “by default” in all cases, or larger projects with more flavorful configurations may want to pursue more fine-tuned control over how libraries are linked. It is intended that cargo will itself be driven with something such as a Makefile to perform this configuration (be it environment or in files).

Drawbacks

  • The system proposed here for linking native code is in general somewhat verbose. In theory well designed third-party Cargo crates can alleviate this verbosity by providing much of the boilerplate, but it’s unclear to what extent they’ll be able to alleviate it.

  • None of the third-party crates with “convenient build logic” currently exist, and it will take time to build these solutions.

  • Platform specific dependencies mean that the entire package graph must always be downloaded, regardless of the platform.

  • In general dealing with linkage is quite complex, and the conventions/systems proposed here aren’t exactly trivial and may be overkill for these purposes.

  • As can be seen in the example repository, platform dependencies are quite verbose and are difficult to work with when you actually want to exclude a platform rather than include one.

  • Features themselves will also likely need to be platform-specific, but this runs into a number of tricky situations and needs to be fleshed out.

Alternatives

  • It has been proposed to support the links manifest key in the features section as well. In the proposed scheme you would have to create an optional dependency representing an optional native dependency, but this may be too burdensome for some cases.

  • The build command could instead take a script from an external package to run instead of a script inside of the package itself. The major drawback of this approach is that even the tiniest of build scripts would require a full-blown package which needs to be uploaded to the registry. Due to the verbosity of requiring so many packages, this was decided against.

  • Cargo remains fairly “dumb” with respect to how native libraries are linked, and it’s always a possibility that Cargo could grow more first-class support for dealing with the linkage of C libraries.

Unresolved questions

None

  • Start Date: 2014-11-01
  • RFC PR: #404
  • Rust Issue: #18499

Summary

When the compiler generates a dynamic library, alter the default behavior to favor linking all dependencies statically rather than maximizing the number of dynamic libraries. This behavior can be disabled with the existing -C prefer-dynamic flag.

Motivation

Long ago rustc was only able to generate dynamic libraries, and as a consequence all Rust libraries were distributed/used in a dynamic form. Over time the compiler learned to create static libraries (dubbed rlibs). With this came the need for the compiler to choose between linking a library statically or dynamically, depending on the formats available to it.

Today’s heuristics and algorithm are documented in the compiler, and the general idea is that as soon as “statically link all dependencies” fails, the compiler maximizes the number of dynamic dependencies. There is also currently no method of instructing the compiler, in the source code itself, precisely what form intermediate libraries should be linked in. The linkage can be “controlled” by passing --extern flags, one per dependency, pointing at the desired format.

While functional, these heuristics do not allow expressing an important use case of building a dynamic library as a final product (as opposed to an intermediate Rust library) while having all dependencies statically linked to the final dynamic library. This use case has been seen in the wild a number of times, and the current workaround is to generate a staticlib and then invoke the linker directly to convert that to a dylib (which relies on rustc generating PIC objects by default).

The purpose of this RFC is to remedy this use case while largely retaining the current abilities of the compiler today.

Detailed design

In English, the compiler will change its heuristics for when a dynamic library is being generated. When doing so, it will attempt to link all dependencies statically and, failing that, will fall back to maximizing the number of dynamic libraries which are linked in.

The compiler will also repurpose the -C prefer-dynamic flag to indicate that this behavior is not desired, and the compiler should maximize dynamic dependencies regardless.

In terms of code, the following patch will be applied to the compiler:

diff --git a/src/librustc/middle/dependency_format.rs b/src/librustc/middle/dependency_format.rs
index 8e2d4d0..dc248eb 100644
--- a/src/librustc/middle/dependency_format.rs
+++ b/src/librustc/middle/dependency_format.rs
@@ -123,6 +123,16 @@ fn calculate_type(sess: &session::Session,
             return Vec::new();
         }

+        // Generating a dylib without `-C prefer-dynamic` means that we're going
+        // to try to eagerly statically link all dependencies. This is normally
+        // done for end-product dylibs, not intermediate products.
+        config::CrateTypeDylib if !sess.opts.cg.prefer_dynamic => {
+            match attempt_static(sess) {
+                Some(v) => return v,
+                None => {}
+            }
+        }
+
         // Everything else falls through below
         config::CrateTypeExecutable | config::CrateTypeDylib => {},
     }

Drawbacks

None currently, but the next section of alternatives lists a few other methods of possibly achieving the same goal.

Alternatives

Disallow intermediate dynamic libraries

One possible solution to this problem is to completely disallow dynamic libraries as a possible intermediate format for Rust libraries. This would solve the above problem in the sense that the compiler never has to make a choice. It would additionally cut the distribution size roughly in half, because only rlibs would be shipped, not dylibs.

Another point in favor of this approach is that the story for dynamic libraries in Rust (for Rust) is also somewhat lacking with today’s compiler. The ABI of a library changes quite frequently for unrelated changes, and it is thus infeasible to expect to ship a dynamic Rust library to later be updated in-place without recompiling downstream consumers. By disallowing dynamic libraries as intermediate formats in Rust, it is made quite obvious that a Rust library cannot depend on another dynamic Rust library. This would be codifying the convention today of “statically link all Rust code” in the compiler itself.

The major downside of this approach is that it would then be impossible to write a plugin for Rust in Rust. For example compiler plugins would cease to work because the standard library would be statically linked to both the rustc executable as well as the plugin being loaded.

In the common case duplication of a library in the same process does not tend to have adverse side effects, but some of the more flavorful features tend to interact adversely with duplication such as:

  • Globals with significant addresses (statics). These globals would all be duplicated and have different addresses depending on what library you’re talking to.
  • TLS/TLD. Any “thread local” or “task local” notion will be duplicated across each library in the process.

Today’s design of the runtime in the standard library causes dynamically loaded plugins with a statically linked standard library to fail very quickly as soon as any runtime-related operation is performed. Note, however, that the runtime of the standard library will likely be phased out soon, but this RFC considers the cons listed above to be reasons not to take this course of action.

Allow fine-grained control of linkage

Another possible alternative is to allow fine-grained control in the compiler to explicitly specify how each library should be linked (as opposed to a blanket “prefer dynamic or not”).

Recent forays with native libraries in Cargo have led to the conclusion that hardcoding linkage into source code is often a hazard and a source of pain down the line. The ultimate decision of how a library is linked is often not up to the author, but rather to the developer or builder of a library itself.

This leads to the conclusion that linkage of this form should be controlled through the command line instead, which is essentially already possible today (via --extern). Cargo essentially does this, but the standard libraries are shipped in dylib/rlib formats, causing the pain points listed in the motivation.

As a result, this RFC does not recommend pursuing this alternative too far, but rather considers the alteration above to the compiler’s heuristics to be satisfactory for now.

Unresolved questions

None yet!

Summary

Just like structs, variants can come in three forms - unit-like, tuple-like, or struct-like:

enum Foo {
    Foo,
    Bar(int, String),
    Baz { a: int, b: String }
}

The last form is currently feature gated. This RFC proposes to remove that gate before 1.0.

Motivation

Tuple variants with multiple fields can become difficult to work with, especially when the types of the fields don’t make it obvious what each one is. It is not an uncommon sight in the compiler to see inline comments used to help identify the various variants of an enum, such as this snippet from rustc::middle::def:

pub enum Def {
    // ...
    DefVariant(ast::DefId /* enum */, ast::DefId /* variant */, bool /* is_structure */),
    DefTy(ast::DefId, bool /* is_enum */),
    // ...
}

If these were changed to struct variants, this ad-hoc documentation would move into the names of the fields themselves. These names are visible in rustdoc, so a developer doesn’t have to go source diving to figure out what’s going on. In addition, the fields of struct variants can have documentation attached.

pub enum Def {
    // ...
    DefVariant {
        enum_did: ast::DefId,
        variant_did: ast::DefId,
        /// Identifies the variant as tuple-like or struct-like
        is_structure: bool,
    },
    DefTy {
        did: ast::DefId,
        is_enum: bool,
    },
    // ...
}

As the number of fields in a variant increases, it becomes increasingly crucial to use struct variants. For example, consider this snippet from rust-postgres:

enum FrontendMessage<'a> {
    // ...
    Bind {
        pub portal: &'a str,
        pub statement: &'a str,
        pub formats: &'a [i16],
        pub values: &'a [Option<Vec<u8>>],
        pub result_formats: &'a [i16]
    },
    // ...
}

If we convert Bind to a tuple variant:

enum FrontendMessage<'a> {
    // ...
    Bind(&'a str, &'a str, &'a [i16], &'a [Option<Vec<u8>>], &'a [i16]),
    // ...
}

we run into both the documentation issues discussed above and ergonomic issues. If code only cares about the values and formats fields, working with a struct variant is nicer:

match msg {
    // you can reorder too!
    Bind { values, formats, .. } => ...
    // ...
}

versus

match msg {
    Bind(_, _, formats, values, _) => ...
    // ...
}

This feature gate was originally put in place because there were many serious bugs in the compiler’s support for struct variants. This is not the case today. The issue tracker does not appear to have any open correctness issues related to struct variants, and many libraries, including rustc itself, have been using them without trouble for a while.

Detailed design

Change the Status of the struct_variant feature from Active to Accepted.

The fields of struct variants use the same style of privacy as normal struct fields - they’re private unless tagged pub. This is inconsistent with tuple variants, where the fields have inherited visibility. Struct variant fields will be changed to have inherited privacy, and pub will no longer be allowed.

Drawbacks

Adding formal support for a feature increases the maintenance burden of rustc.

Alternatives

If struct variants remain feature gated at 1.0, libraries that want to ensure that they will continue working into the future will be forced to avoid struct variants since there are no guarantees about backwards compatibility of feature-gated parts of the language.

Unresolved questions

N/A

Summary

This conventions RFC tweaks and finalizes a few long-running de facto conventions, including capitalization/underscores, and the role of the unwrap method.

See this RFC for a competing proposal for unwrap.

Motivation

This is part of the ongoing conventions formalization process. The conventions described here have been loosely followed for a long time, but this RFC seeks to nail down a few final details and make them official.

Detailed design

General naming conventions

In general, Rust tends to use UpperCamelCase for “type-level” constructs (types and traits) and snake_case for “value-level” constructs. More precisely, the proposed (and mostly followed) conventions are:

Item                      Convention
Crates                    snake_case (but prefer single word)
Modules                   snake_case
Types                     UpperCamelCase
Traits                    UpperCamelCase
Enum variants             UpperCamelCase
Functions                 snake_case
Methods                   snake_case
General constructors      new or with_more_details
Conversion constructors   from_some_other_type
Local variables           snake_case
Static variables          SCREAMING_SNAKE_CASE
Constant variables        SCREAMING_SNAKE_CASE
Type parameters           concise UpperCamelCase, usually single uppercase letter: T
Lifetimes                 short, lowercase: 'a

Fine points

In UpperCamelCase, acronyms count as one word: use Uuid rather than UUID. In snake_case, acronyms are lower-cased: is_xid_start.

In UpperCamelCase names multiple numbers can be separated by a _ for clarity: Windows10_1709 instead of Windows101709.

In snake_case or SCREAMING_SNAKE_CASE, a “word” should never consist of a single letter unless it is the last “word”. So, we have btree_map rather than b_tree_map, but PI_2 rather than PI2.
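
Putting the fine points together (hypothetical items, for illustration):

struct Uuid;                          // acronym treated as one word, not `UUID`
fn is_xid_start(c: char) -> bool {    // acronym lower-cased in snake_case
    c.is_alphabetic()                 // stand-in body for the illustration
}
const PI_2: f64 = 1.5707963267948966; // single-letter "word" only at the end
mod btree_map {}                      // not `b_tree_map`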

unwrap, into_foo and into_inner

There has been a long running debate about the name of the unwrap method found in Option and Result, but also a few other standard library types. Part of the problem is that for some types (e.g. BufferedReader), unwrap will never panic; but for Option and Result calling unwrap is akin to asserting that the value is Some/Ok.

There’s basic agreement that we should have an unambiguous term for the Option/Result version of unwrap. Proposals have included assert, ensure, expect, unwrap_or_panic and others; see the links above for extensive discussion. No clear consensus has emerged.

This RFC proposes a simple way out: continue to call the methods unwrap for Option and Result, and rename other uses of unwrap to follow conversion conventions. Whenever possible, these panic-free unwrapping operations should be into_foo for some concrete foo, but for generic types like RefCell the name into_inner will suffice. By convention, these into_ methods cannot panic; and by (proposed) convention, unwrap should be reserved for an into_inner conversion that can.
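
A sketch of the proposed convention (a hypothetical wrapper type, not from the RFC):

struct Wrapper<T> {
    inner: T,
}

impl<T> Wrapper<T> {
    // This conversion cannot panic, so by the convention above it is
    // named `into_inner` rather than `unwrap`.
    fn into_inner(self) -> T {
        self.inner
    }
}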

Drawbacks

Not really applicable; we need to finalize these conventions.

Unresolved questions

Are there remaining subtleties about the rules here that should be clarified?

Summary

Change the precedence of + (object bounds) in type grammar so that it is similar to the precedence in the expression grammars.

Motivation

Currently + in types has a much higher precedence than it does in expressions. This means that for example one can write a type like the following:

&Object+Send

Whereas if that were an expression, parentheses would be required:

&(Object+Send)

Besides being confusing in its own right, this loose approach with regard to precedence yields ambiguities with unboxed closure bounds:

fn foo<F>(f: F)
    where F: FnOnce(&int) -> &Object + Send
{ }

In this example, it is unclear whether F returns an object which is Send, or whether F itself is Send.

Detailed design

This RFC proposes that the precedence of + be made lower than unary type operators. In addition, the grammar is segregated such that in “open-ended” contexts (e.g., after ->), parentheses are required to use a +, whereas in others (e.g., inside <>), parentheses are not. Here are some examples:

// Before                             After                         Note
// ~~~~~~                             ~~~~~                         ~~~~
   &Object+Send                       &(Object+Send)
   &'a Object+'a                      &'a (Object+'a)
   Box<Object+Send>                   Box<Object+Send>
   foo::<Object+Send,int>(...)        foo::<Object+Send,int>(...)
   Fn() -> Object+Send                Fn() -> (Object+Send)         // (*)
   Fn() -> &Object+Send               Fn() -> &(Object+Send)
   
// (*) Must yield a type error, as return type must be `Sized`.

More fully, the type grammar is as follows (EBNF notation):

TYPE = PATH
     | '&' [LIFETIME] TYPE
     | '&' [LIFETIME] 'mut' TYPE
     | '*' 'const' TYPE
     | '*' 'mut' TYPE
     | ...
     | '(' SUM ')'
SUM  = TYPE { '+' TYPE }
PATH = IDS '<' SUM { ',' SUM } '>'
     | IDS '(' SUM { ',' SUM } ')' '->' TYPE
IDS  = ['::'] ID { '::' ID }

Where clauses would use the following grammar:

WHERE_CLAUSE = PATH { '+' PATH }

One property of this grammar is that the TYPE nonterminal does not require a terminator as it has no “open-ended” expansions. SUM, in contrast, can be extended any number of times via the + token. This is why SUM must be enclosed in parens to be treated as a TYPE.

Drawbacks

Common types like &'a Foo+'a become slightly longer (&'a (Foo+'a)).

Alternatives

We could live with the inconsistency between the type/expression grammars and disambiguate where clauses in an ad-hoc way.

Unresolved questions

None.

Summary

This RFC proposes a number of design improvements to the cmp and ops modules in preparation for 1.0. The impetus for these improvements, besides the need for stabilization, is that we’ve added several important language features (like multidispatch) that greatly impact the design. Highlights:

  • Make basic unary and binary operators work by value and use associated types.
  • Generalize comparison operators to work across different types; drop Equiv.
  • Refactor slice notation in favor of range notation so that special traits are no longer needed.
  • Add IndexSet to better support maps.
  • Clarify ownership semantics throughout.

Motivation

The operator and comparison traits play a double role: they are lang items known to the compiler, but are also library APIs that need to be stabilized.

While the traits have been fairly stable, a lot has changed in the language recently, including the addition of multidispatch, associated types, and changes to method resolution (especially around smart pointers). These are all things that impact the ideal design of the traits.

Since it is now relatively clear how these language features will work at 1.0, there is enough information to make final decisions about the construction of the comparison and operator traits. That’s what this RFC aims to do.

Detailed design

The traits in cmp and ops can be broken down into several categories, and to keep things manageable this RFC discusses each category separately:

  • Basic operators:
    • Unary: Neg, Not
    • Binary: Add, Sub, Mul, Div, Rem, Shl, Shr, BitAnd, BitOr, BitXor,
  • Comparison: PartialEq, PartialOrd, Eq, Ord, Equiv
  • Indexing and slicing: Index, IndexMut, Slice, SliceMut
  • Special traits: Deref, DerefMut, Drop, Fn, FnMut, FnOnce

Basic operators

The basic operators include arithmetic and bitwise notation with both unary and binary operators.

Current design

Here are two example traits, one unary and one binary, for basic operators:

pub trait Not<Result> {
    fn not(&self) -> Result;
}

pub trait Add<Rhs, Result> {
    fn add(&self, rhs: &Rhs) -> Result;
}

The rest of the operators follow the same pattern. Note that self and rhs are taken by reference, and the compiler introduces silent uses of & for the operands.

The traits also take Result as an input type.

Proposed design

This RFC proposes to make Result an associated (output) type, and to make the traits work by value:

pub trait Not {
    type Result;
    fn not(self) -> Result;
}

pub trait Add<Rhs = Self> {
    type Result;
    fn add(self, rhs: Rhs) -> Result;
}

The reason to make Result an associated type is straightforward: it should be uniquely determined given Self and other input types, and making it an associated type is better for both type inference and for keeping things concise when using these traits in bounds.

Making these traits work by value is motivated by cases like DList concatenation, where you may want the operator to actually consume the operands in producing its output (by welding the two lists together).

It also means that the compiler does not have to introduce a silent & for the operands, which means that the ownership semantics when using these operators is much more clear.

Fortunately, there is no loss in expressiveness, since you can always implement the trait on reference types. However, for types that do need to be taken by reference, there is a slight loss in ergonomics since you may need to explicitly borrow the operands with &. The upside is that the ownership semantics become clearer: they more closely resemble normal function arguments.

By keeping Rhs as an input type parameter on the trait, you can overload on the types of both operands via multidispatch. By defaulting Rhs to Self, in the future it will be possible to simply say T: Add as shorthand for T: Add<T>, which is the common case.

Examples:

// Basic setup for Copy types:
impl Add<uint> for uint {
    type Result = uint;
    fn add(self, rhs: uint) -> uint { ... }
}

// Overloading on the Rhs:
impl Add<uint> for Complex {
    type Result = Complex;
    fn add(self, rhs: uint) -> Complex { ... }
}

impl Add<Complex> for Complex {
    type Result = Complex;
    fn add(self, rhs: Complex) -> Complex { ... }
}

// Recovering by-ref semantics:
impl<'a, 'b> Add<&'a str> for &'b str {
    type Result = String;
    fn add(self, rhs: &'a str) -> String { ... }
}

Comparison traits

The comparison traits provide overloads for operators like == and >.

Current design

Comparisons are subtle, because some types (notably f32 and f64) do not actually provide full equivalence relations or total orderings. The current design therefore splits the comparison traits into “partial” variants that do not promise full equivalence relations/ordering, and “total” variants which inherit from them but make stronger semantic guarantees. The floating point types implement the partial variants, and the operators defer to them. But certain collection types require e.g. total rather than partial orderings:

pub trait PartialEq {
    fn eq(&self, other: &Self) -> bool;

    fn ne(&self, other: &Self) -> bool { !self.eq(other) }
}

pub trait Eq: PartialEq {}

pub trait PartialOrd: PartialEq {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering>;
    fn lt(&self, other: &Self) -> bool { .. }
    fn le(&self, other: &Self) -> bool { .. }
    fn gt(&self, other: &Self) -> bool { .. }
    fn ge(&self, other: &Self) -> bool { .. }
}

pub trait Ord: Eq + PartialOrd {
    fn cmp(&self, other: &Self) -> Ordering;
}

pub trait Equiv<T> {
    fn equiv(&self, other: &T) -> bool;
}

In addition there is an Equiv trait that can be used to compare values of different types for equality, but does not correspond to any operator sugar. (It was introduced in part to help solve some problems in map APIs, which are now resolved in a different way.)

The comparison traits all work by reference, and the compiler inserts implicit uses of & to make this ergonomic.

Proposed design

This RFC proposes to follow largely the same design strategy, but to remove Equiv and instead generalize the traits via multidispatch:

pub trait PartialEq<Rhs = Self> {
    fn eq(&self, other: &Rhs) -> bool;

    fn ne(&self, other: &Rhs) -> bool { !self.eq(other) }
}

pub trait Eq<Rhs = Self>: PartialEq<Rhs> {}

pub trait PartialOrd<Rhs = Self>: PartialEq<Rhs> {
    fn partial_cmp(&self, other: &Rhs) -> Option<Ordering>;
    fn lt(&self, other: &Rhs) -> bool { .. }
    fn le(&self, other: &Rhs) -> bool { .. }
    fn gt(&self, other: &Rhs) -> bool { .. }
    fn ge(&self, other: &Rhs) -> bool { .. }
}

pub trait Ord<Rhs = Self>: Eq<Rhs> + PartialOrd<Rhs> {
    fn cmp(&self, other: &Rhs) -> Ordering;
}

Due to the use of defaulting, this generalization loses no ergonomics. However, it makes it possible to overload notation like == to compare different types without needing an explicit conversion. (Precisely which overloadings we provide in std will be subject to API stabilization.) This more general design will allow us to eliminate the iter::order submodule in favor of comparison notation, for example.
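
As a small illustration of what cross-type comparison buys (Meters is an invented type; this compiles in Rust as the design eventually stabilized):

struct Meters(f64);

// Overloading `==` across two different types via the Rhs parameter:
impl PartialEq<f64> for Meters {
    fn eq(&self, other: &f64) -> bool {
        self.0 == *other
    }
}

fn main() {
    assert!(Meters(2.0) == 2.0); // no explicit conversion needed
}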

This design suffers from the problem that it is somewhat painful to implement or derive Eq/Ord, which is the common case. We can likely improve e.g. #[deriving(Ord)] to automatically derive PartialOrd. See Alternatives for a more radical design (and the reasons that it’s not feasible right now).

Indexing and slicing

There are a few traits that support [] notation for indexing and slicing.

Current design:

The current design is as follows:

pub trait Index<Index, Sized? Result> {
    fn index<'a>(&'a self, index: &Index) -> &'a Result;
}

pub trait IndexMut<Index, Result> {
    fn index_mut<'a>(&'a mut self, index: &Index) -> &'a mut Result;
}

pub trait Slice<Idx, Sized? Result> for Sized? {
    fn as_slice_<'a>(&'a self) -> &'a Result;
    fn slice_from_or_fail<'a>(&'a self, from: &Idx) -> &'a Result;
    fn slice_to_or_fail<'a>(&'a self, to: &Idx) -> &'a Result;
    fn slice_or_fail<'a>(&'a self, from: &Idx, to: &Idx) -> &'a Result;
}

// and similar for SliceMut...

The index and slice traits work somewhat differently. For Index/IndexMut, the return value is implicitly dereferenced, so that notation like v[i] = 3 makes sense. If you want to get your hands on the actual reference, you usually need an explicit &, for example &v[i] or &mut v[i] (the compiler determines whether to use Index or IndexMut by context). This follows the C notational tradition.

Slice notation, on the other hand, does not automatically dereference and so requires a special mut marker: v[mut 1..].

For both of these traits, the indexes themselves are taken by reference, and the compiler automatically introduces a & (so you write v[3] not v[&3]).

Proposed design

This RFC proposes to refactor the slice design into more modular components, which as a side-product will make slicing automatically dereference the result (consistently with indexing). The latter is desirable because &mut v[1..] is more consistent with the rest of the language than v[mut 1..] (and also makes the borrowing semantics more explicit).

Index revisions

In the new design, the index traits take the index by value and the compiler no longer introduces a silent &. This follows the same design as for e.g. Add above, and for much the same reasons. That means in particular that it will be possible to write map["key"] rather than map[*"key"] when using a map with String keys, and will still be possible to write v[3] for vectors. In addition, the Result becomes an associated type, again following the same design outlined above:

pub trait Index<Idx> for Sized? {
    type Sized? Result;
    fn index<'a>(&'a self, index: Idx) -> &'a Result;
}

pub trait IndexMut<Idx> for Sized? {
    type Sized? Result;
    fn index_mut<'a>(&'a mut self, index: Idx) -> &'a mut Result;
}

In addition, this RFC proposes another trait, IndexSet, that is used for expr[i] = expr:

pub trait IndexSet<Idx> {
    type Val;
    fn index_set<'a>(&'a mut self, index: Idx, val: Val);
}

(This idea is borrowed from @sfackler’s earlier RFC.)

The motivation for this trait is cases like map["key"] = val, which should correspond to an insertion rather than a mutable lookup. With today’s setup, that expression would result in a panic if “key” was not already present in the map.

Of course, IndexSet and IndexMut overlap, since expr[i] = expr could be interpreted using either. Some types may implement IndexSet but not IndexMut (for example, if it doesn’t make sense to produce an interior reference). But for types providing both, the compiler will use IndexSet to interpret the expr[i] = expr syntax. (You can always get IndexMut by instead writing *(&mut expr[i]) = expr, but this will likely be extremely rare.)

Slice revisions

The changes to slice notation are more radical: this RFC proposes to remove the slice traits altogether! The replacement is to introduce range notation and overload indexing on it.

The current slice notation allows you to write v[i..j], v[i..], v[..j] and v[]. The idea for handling the first three is to add the following desugaring:

i..j  ==>  Range(i, j)
i..   ==>  RangeFrom(i)
..j   ==>  RangeTo(j)

where

struct Range<Idx>(Idx, Idx);
struct RangeFrom<Idx>(Idx);
struct RangeTo<Idx>(Idx);

Then, to implement slice notation, you just implement Index/IndexMut with Range, RangeFrom, and RangeTo index types.
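
A sketch of that recipe as it looks in Rust after stabilization, where the associated type ended up being called Output rather than Result (Buf is an invented type):

use std::ops::{Index, Range};

struct Buf {
    data: Vec<u8>,
}

// Implementing `Index` over a range type yields slice notation.
impl Index<Range<usize>> for Buf {
    type Output = [u8];
    fn index(&self, r: Range<usize>) -> &[u8] {
        &self.data[r.start..r.end]
    }
}

fn main() {
    let b = Buf { data: vec![1, 2, 3, 4] };
    assert_eq!(&b[1..3], &[2, 3]); // slicing through the plain Index impl
}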

This cuts down on the number of special traits and machinery. It makes indexing and slicing more consistent (since both will implicitly deref their result); you’ll write &mut v[1..] to get a mutable slice. It also opens the door to other uses of the range notation:

for x in 1..100 { ... }

because the refactored design is more modular.

What about v[] notation? The proposal is to desugar this to v[FullRange] where struct FullRange;.

Note that .. is already used in a few places in the grammar, notably fixed length arrays and functional record update. The former is at the type level, however, and the latter is not ambiguous: Foo { a: x, .. bar} since the .. bar component will never be parsed as an expression.

Special traits

Finally, there are a few “special” traits that hook into the compiler in various ways that go beyond basic operator overloading.

Deref and DerefMut

The Deref and DerefMut traits are used for overloading dereferencing, typically for smart pointers.

The current traits look like so:

pub trait Deref<Sized? Result> {
    fn deref<'a>(&'a self) -> &'a Result;
}

but the Result type should become an associated type, dictating that a smart pointer can only deref to a single other type (which is also needed for inference and other magic around deref):

pub trait Deref {
    type Sized? Result;
    fn deref<'a>(&'a self) -> &'a Result;
}

Drop

This RFC proposes no changes to the Drop trait.

Closure traits

This RFC proposes no changes to the closure traits. The current design looks like:

pub trait Fn<Args, Result> {
    fn call(&self, args: Args) -> Result;
}

and, given the way that multidispatch has worked out, it is safe and more flexible to keep both Args and Result as input types (which means that custom implementations could overload on either). In particular, the sugar for these traits requires writing all of these types anyway.

These traits should not be exposed as #[stable] for 1.0, meaning that you will not be able to implement or use them directly from the stable release channel. There are a few reasons for this. For one, when bounding by these traits you generally want to use the sugar Fn (T, U) -> V instead, which will be stable. Keeping the traits themselves unstable leaves us room to change their definition to support variadic generics in the future.

Drawbacks

The main drawback is that implementing the above will take a bit of time, which is something we’re currently very short on. However, stabilizing cmp and ops has always been part of the plan, and has to be done for 1.0.

Alternatives

Comparison traits

We could pursue a more aggressive change to the comparison traits by not having PartialOrd be a super trait of Ord, but instead providing a blanket impl for PartialOrd for any T: Ord. Unfortunately, this design poses some problems when it comes to things like tuples, which want to provide PartialOrd and Ord if all their components do: you would end up with overlapping PartialOrd impls. It’s possible to work around this, but at the expense of additional language features (like “negative bounds”, the ability to make an impl apply only when certain things are not true).

Since it’s unlikely that these other changes can happen in time for 1.0, this RFC takes a more conservative approach.

Slicing

We may want to drop the [] notation. This notation was introduced to improve ergonomics (from foo(v.as_slice()) to foo(v[])), but now that collections reform is starting to land we can instead write foo(&*v). If we also had deref coercions, that would be just foo(&v).

While &*v notation is more ergonomic than v.as_slice(), it is also somewhat intimidating notation for a situation that newcomers to the language are likely to face quickly.

In the opinion of this RFC author, we should either keep [] notation, or provide deref coercions so that you can just say &v.
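
For reference, deref coercions were ultimately adopted, so both spellings work in stabilized Rust:

fn foo(s: &[i32]) -> usize {
    s.len()
}

fn main() {
    let v = vec![1, 2, 3];
    foo(&*v); // explicit: deref the Vec to [i32], then re-borrow
    foo(&v);  // deref coercion inserts the `*` automatically
}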

Unresolved questions

In the long run, we should support overloading of operators like += which often have a more efficient implementation than desugaring into a + and an =. However, this can be added backwards-compatibly and is not significantly blocking library stabilization, so this RFC postpones consideration until a later date.

Summary

This is a conventions RFC establishing a definition and naming convention for extension traits: FooExt.

Motivation

This RFC is part of the ongoing API conventions and stabilization effort.

Extension traits are a programming pattern that makes it possible to add methods to an existing type outside of the crate defining that type. While they should be used sparingly, the new object safety rules have increased the need for this kind of trait, and hence the need for a clear convention.

Detailed design

What is an extension trait?

Rust currently allows inherent methods to be defined on a type only in the crate where that type is defined. But it is often the case that clients of a type would like to add methods to it. Extension traits are a pattern for doing so:

extern crate foo;
use foo::Foo;

trait FooExt {
    fn bar(&self);
}

impl FooExt for Foo {
    fn bar(&self) { .. }
}

By defining a new trait, a client of foo can add new methods to Foo.

Of course, adding methods via a new trait happens all the time. What makes it an extension trait is that the trait is not designed for generic use, but only as a way of adding methods to a specific type or family of types.

This is of course a somewhat subjective distinction. Whenever designing an extension trait, one should consider whether the trait could be used in some more generic way. If so, the trait should be named and exported as if it were just a “normal” trait. But traits offering groups of methods that really only make sense in the context of some particular type(s) are true extension traits.

The new object safety rules mean that a trait can only be used for trait objects if all of its methods are usable; put differently, it ensures that for “object safe traits” there is always a canonical way to implement Trait for Box<Trait>. To deal with this new rule, it is sometimes necessary to break traits apart into an object safe trait and extension traits:

// The core, object-safe trait
trait Iterator<A> {
    fn next(&mut self) -> Option<A>;
}

// The extension trait offering object-unsafe methods
trait IteratorExt<A>: Iterator<A> {
    fn chain<U: Iterator<A>>(self, other: U) -> Chain<Self, U> { ... }
    fn zip<B, U: Iterator<B>>(self, other: U) -> Zip<Self, U> { ... }
    fn map<B>(self, f: |A| -> B) -> Map<'r, A, B, Self> { ... }
    ...
}

// A blanket impl
impl<A, I> IteratorExt<A> for I where I: Iterator<A> {
    ...
}

Note that, although this split-up definition is somewhat more complex, it is also more flexible: because Box<Iterator<A>> will implement Iterator<A>, you can now use all of the adapter methods provided in IteratorExt on trait objects, even though they are not object safe.

The convention

The proposed convention is, first of all, to (1) prefer adding default methods to existing traits or (2) prefer generically useful traits to extension traits whenever feasible.

For true extension traits, there should be a clear type or trait that they are extending. The extension trait should be called FooExt where Foo is that type or trait.

In some cases, the extension trait only applies conditionally. For example, AdditiveIterator is an extension trait currently in std that applies to iterators over numeric types. These extension traits should follow a similar convention, putting together the type/trait name and the qualifications, together with the Ext suffix: IteratorAddExt.

What about Prelude?

A previous convention used a Prelude suffix for extension traits that were also part of the std prelude; this new convention deprecates that one.

Future proofing

In the future, the need for many of these extension traits may disappear as other language features are added. For example, method-level where clauses will eliminate the need for AdditiveIterator. And allowing inherent impls like impl<T: Trait> T { .. } for the crate defining Trait would eliminate even more.

However, there will always be some use of extension traits, and we need to stabilize the 1.0 libraries prior to these language features landing. So this is the proposed convention for now, and in the future it may be possible to deprecate some of the resulting traits.

Alternatives

It seems clear that we need some convention here. Other possible suffixes would be Util or Methods, but Ext is both shorter and connects to the name of the pattern.

Drawbacks

In general, extension traits tend to require additional imports – especially painful when dealing with object safety. However, this is more to do with the language as it stands today than with the conventions in this RFC.

Libraries are already starting to export their own prelude module containing extension traits among other things, which by convention is glob imported.

In the long run, we should add a general “prelude” facility for external libraries that makes it possible to globally import a small set of names from the crate. Some early investigations of such a feature are already under way, but are outside the scope of this RFC.

Summary

Remove \u203D and \U0001F4A9 unicode string escapes, and add ECMAScript 6-style \u{1F4A9} escapes instead.

Motivation

The syntax of \u followed by four hexadecimal digits dates from when Unicode was a 16-bit encoding, and only went up to U+FFFF. \U followed by eight hex digits was added as a band-aid when Unicode was extended to U+10FFFF, but neither four nor eight digits particularly make sense now.

Having two different syntaxes with the same meaning but that apply to different ranges of values is inconsistent and arbitrary. This proposal unifies them into a single syntax that has a precedent in ECMAScript a.k.a. JavaScript.

Detailed design

In terms of the grammar in The Rust Reference, replace:

unicode_escape : 'u' hex_digit 4
               | 'U' hex_digit 8 ;

with

unicode_escape : 'u' '{' hex_digit+ 6 '}'

That is, \u{ followed by one to six hexadecimal digits, followed by }.

The behavior would otherwise be identical.
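
A sketch of the accepted syntax as it behaves in Rust today:

fn main() {
    // One to six hex digits between the braces:
    assert_eq!('\u{2764}', '❤');
    assert_eq!("\u{1F4A9}".chars().count(), 1); // a single char, four UTF-8 bytes
}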

Migration strategy

In order to provide a graceful transition from the old \uDDDD and \UDDDDDDDD syntax to the new \u{DDDDD} syntax, this feature should be added in stages:

  • Stage 1: Add support for the new \u{DDDDD} syntax, without removing previous support for \uDDDD and \UDDDDDDDD.

  • Stage 2: Warn on occurrences of \uDDDD and \UDDDDDDDD. Convert all library code to use \u{DDDDD} instead of the old syntax.

  • Stage 3: Remove support for the old syntax entirely (preferably during a separate release from the one that added the warning from Stage 2).

Drawbacks

  • This is a breaking change and updating code for it manually is annoying. It is however very mechanical, and we could provide scripts to automate it.
  • Formatting templates already use curly braces. Having multiple pairs of curly braces with very different meanings in the same string can be surprising: format!("\u{e8}_{e8}", e8 = "é") would be "è_é". However, there is a precedent for overloading characters: \ can start an escape sequence both in the Rust lexer for strings and in regular expressions.

Alternatives

  • Status quo: don’t change the escaping syntax.
  • Add the new \u{…} syntax, but also keep the existing \u and \U syntax. This is what ES 6 does, but only to keep compatibility with ES 5. We don’t have that constraint pre-1.0.

Unresolved questions

None so far.

Summary

Disallow unconstrained type parameters from impls. In practice this means that every type parameter must either:

  1. appear in the trait reference of the impl, if any;
  2. appear in the self type of the impl; or,
  3. be bound as an associated type.

This is an informal description, see below for full details.

Motivation

Today it is legal to have impls with type parameters that are effectively unconstrained. This RFC proposes to make these illegal by requiring that all impl type parameters must appear in either the self type of the impl or, if the impl is a trait impl, an (input) type parameter of the trait reference. Type parameters can also be constrained by associated types.

There are many reasons to make this change. First, impls are not explicitly instantiated or named, so there is no way for users to manually specify the values of type variables; the values must be inferred. If the type parameters do not appear in the trait reference or self type, however, there is no basis on which to infer them; this almost always yields an error in any case (unresolved type variable), though there are some corner cases where the inferencer can find a constraint.

Second, permitting unconstrained type parameters to appear on impls can potentially lead to ill-defined semantics later on. The current way that the language works for cross-crate inlining is that the body of the method is effectively reproduced within the target crate, but in a fully elaborated form where it is as if the user specified every type explicitly that they possibly could. This should be sufficient to reproduce the same trait selections, even if the crate adds additional types and additional impls – but this cannot be guaranteed if there are free-floating type parameters on impls, since their values are not written anywhere. (This semantics, incidentally, is not only convenient, but also required if we wish to allow for specialization as a possibility later on.)

Finally, there is little to no loss of expressiveness. The type parameters in question can always be moved somewhere else.

Here are some examples to clarify what’s allowed and disallowed. In each case, we also clarify how the example can be rewritten to be legal.

// Legal:
// - A is used in the self type.
// - B is used in the input trait type parameters.
impl<A,B> SomeTrait<Option<B>> for Foo<A> {
    type Output = Result<A, IoError>;
}

// Legal:
// - A and B are used in the self type
impl<A,B> Vec<(A,B)> {
    ...
}

// Illegal:
// - A does not appear in the self type nor trait type parameters.
//
// This sort of pattern can generally be written by making `Bar` carry
// `A` as a phantom type parameter, or by making `Elem` an input type
// of `Foo`.
impl<A> Foo for Bar {
    type Elem = A; // associated types do not count
    ...
}

// Illegal: B does not appear in the self type.
//
// Note that B could be moved to the method `get()` with no
// loss of expressiveness.
impl<A,B:Default> Foo<A> {
    fn do_something(&self) {
    }

    fn get(&self) -> B {
        Default::default()
    }
}

// Legal: `U` does not appear in the input types,
// but it is bound as an associated type of `T`.
impl<T,U> Foo for T
    where T : Bar<Out=U> {
}

Detailed design

Type parameters are legal if they are “constrained” according to the following inference rules:

If T appears in the impl trait reference,
  then: T is constrained

If T appears in the impl self type,
  then: T is constrained

If <T0 as Trait<T1...Tn>>::U == V appears in the impl predicates,
  and T0...Tn are constrained
  and T0 as Trait<T1...Tn> is not the impl trait reference
  then: V is constrained

The interesting rule is of course the final one. It says that type parameters whose value is determined by an associated type reference are legal. A simple example is:

impl<T,U> Foo for T
    where T : Bar<Out=U>

However, we have to be careful to avoid cases where the associated type is an associated type of things that are not themselves constrained:

impl<T,U,V> Foo for T
    where U: Bar<Out=V>

Similarly, the final clause in the rule aims to prevent an impl from “self-referentially” constraining an output type parameter:

impl<T,U> Bar for T
    where T : Bar<Out=U>

This last case isn’t that important because impls like this, when used, tend to result in overflow in the compiler, but it’s more user-friendly to report an error earlier.

Drawbacks

This pattern requires a non-local rewrite to reproduce:

impl<A> Foo for Bar {
    type Elem = A; // associated types do not count
    ...
}

Alternatives

To make these type parameters well-defined, we could also create a syntax for specifying impl type parameter instantiations and/or have the compiler track the full tree of impl type parameter instantiations at type-checking time and supply this to the translation phase. This approach rules out the possibility of impl specialization.

Unresolved questions

None.

  • Start Date: 2014-12-02
  • RFC PR: 450
  • Rust Issue: 19469

Summary

Remove the tuple_indexing, if_let, and while_let feature gates and add them to the language.

Motivation

Tuple Indexing

This feature has proven to be quite useful for tuples and struct variants, and it allows for the removal of some unnecessary tuple accessing traits in the standard library (TupleN).

The implementation has also proven to be quite solid with very few reported internal compiler errors related to this feature.

if let and while let

This feature has also proven to be quite useful over time. Many projects are now leveraging these feature gates, which is a testament to their usefulness.

Additionally, the implementation has also proven to be quite solid with very few reported internal compiler errors related to this feature.
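
For concreteness, here is what the gated features look like in use (all three are plain stable Rust today):

fn main() {
    let pair = (1, 2);
    let first = pair.0; // tuple indexing, replacing the TupleN traits

    if let Some(x) = Some(first) { // if let: handle a single pattern
        println!("got {}", x);
    }

    let mut stack = vec![1, 2, 3];
    while let Some(top) = stack.pop() { // while let: loop while it matches
        println!("popped {}", top);
    }
}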

Detailed design

  • Remove the if_let, while_let, and tuple_indexing feature gates.
  • Add these features to the language (do not require a feature gate to use them).
  • Deprecate the TupleN traits in std::tuple.

Drawbacks

Adding features to the language this late in the game is always somewhat of a risky business. These features, while having baked for a few weeks, haven’t had much time to bake in the grand scheme of the language. These are both backwards compatible to accept, and it could be argued that this could be done later rather than sooner.

In general, the major drawbacks of this RFC are the scheduling risks and “feature bloat” worries. This RFC, however, is quite easy to implement (reducing schedule risk) and concerns two fairly minor features which are unambiguously nice to have.

Alternatives

  • Instead of un-feature-gating before 1.0, these features could be released after 1.0 (if at all). The TupleN traits would then be required to be deprecated for the entire 1.0 release cycle.

Unresolved questions

None at the moment.

Summary

Various enhancements to macros ahead of their standardization in 1.0.

Note: This is not the final Rust macro system design for all time. Rather, it addresses the largest usability problems within the limited time frame for 1.0. It’s my hope that a lot of these problems can be solved in nicer ways in the long term (there is some discussion of this below).

Motivation

macro_rules! has many rough edges. A few of the big ones:

  • You can’t re-export macros
  • Even if you could, names produced by the re-exported macro won’t follow the re-export
  • You can’t use the same macro in-crate and exported, without the “curious inner-module” hack
  • There’s no namespacing at all
  • You can’t control which macros are imported from a crate
  • You need the feature-gated #[phase(plugin)] to import macros

These issues in particular are things we have a chance of addressing for 1.0. This RFC contains plans to do so.

Semantic changes

These are the substantial changes to the macro system. The examples also use the improved syntax, described later.

$crate

The first change is to disallow importing macros from an extern crate that is not at the crate root. In that case, if

extern crate "bar" as foo;

imports macros, then it’s also introducing ordinary paths of the form ::foo::.... We call foo the crate ident of the extern crate.

We introduce a special macro metavar $crate which expands to ::foo when a macro was imported through crate ident foo, and to nothing when it was defined in the crate where it is being expanded. $crate::bar::baz will be an absolute path either way.

This feature eliminates the need for the “curious inner-module” and also enables macro re-export (see below). It is implemented and tested but needs a rebase.

We can add a lint to warn about cases where an exported macro has paths that are not absolute-with-crate or $crate-relative. This will have some (hopefully rare) false positives.
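
A minimal sketch of the idiom this enables (helper is an invented function in the exporting crate):

// In the exporting crate:
pub fn helper() -> u32 {
    42
}

#[macro_export]
macro_rules! make_pair {
    ($e:expr) => {
        // `$crate` resolves to the defining crate no matter where
        // the macro is expanded, so this path is always valid.
        ($crate::helper(), $e)
    };
}

A downstream crate that imports make_pair! thereby gets a call to the exporting crate’s helper, with no “curious inner-module” required.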

Macro scope

In this document, the “syntax environment” refers to the set of syntax extensions that can be invoked at a given position in the crate. The names in the syntax environment are simple unqualified identifiers such as panic and vec. Informally we may write vec! to distinguish from an ordinary item. However, the exclamation point is really part of the invocation syntax, not the name, and some syntax extensions are invoked with no exclamation point, for example item decorators like deriving.

We introduce an attribute macro_use to specify which macros from an external crate should be imported to the syntax environment:

#[macro_use(vec, panic="fail")]
extern crate std;

#[macro_use]
extern crate core;

The list of macros to import is optional. Omitting the list imports all macros, similar to a glob use. (This is also the mechanism by which std will inject its macros into every non-no_std crate.)

Importing with rename is an optional part of this proposal that will be implemented for 1.0 only if time permits.

Macros imported this way can be used anywhere in the module after the extern crate item, including in child modules. Since a macro-importing extern crate must appear at the crate root, and view items come before other items, this effectively means imported macros will be visible for the entire crate.

Any name collision between macros, whether imported or defined in-crate, is a hard error.

Many macros expand using other “helper macros” as an implementation detail. For example, librustc’s declare_lint! uses lint_initializer!. The client should not know about this macro, although it still needs to be exported for cross-crate use. For this reason we allow #[macro_use] on a macro definition.

/// Not to be imported directly.
#[macro_export]
macro_rules! lint_initializer { ... }

/// Declare a lint.
#[macro_export]
#[macro_use(lint_initializer)]
macro_rules! declare_lint {
    ($name:ident, $level:ident, $desc:expr) => (
        static $name: &'static $crate::lint::Lint
            = &lint_initializer!($name, $level, $desc);
    )
}

The macro lint_initializer!, imported from the same crate as declare_lint!, will be visible only during further expansion of the result of invoking declare_lint!.

macro_use on macro_rules is an optional part of this proposal that will be implemented for 1.0 only if time permits. Without it, libraries that use helper macros will need to list them in documentation so that users can import them.

Procedural macros need their own way to manipulate the syntax environment, but that’s an unstable internal API, so it’s outside the scope of this RFC.

New syntax

We also clean up macro syntax in a way that complements the semantic changes above.

#[macro_use(...)] mod

The macro_use attribute can be applied to a mod item as well. The specified macros will “escape” the module and become visible throughout the rest of the enclosing module, including any child modules. A crate might start with

#[macro_use]
mod macros;

to define some macros for use by the whole crate, without putting those definitions in lib.rs.

Note that #[macro_use] (without a list of names) is equivalent to the current #[macro_escape]. However, the new convention is to use an outer attribute, in the file whose syntax environment is affected, rather than an inner attribute in the file defining the macros.

Macro export and re-export

Currently in Rust, a macro definition qualified by #[macro_export] becomes available to other crates. We keep this behavior in the new system. A macro qualified by #[macro_export] can be the target of #[macro_use(...)], and will be imported automatically when #[macro_use] is given with no list of names.

#[macro_export] has no effect on the syntax environment for the current crate.

We can also re-export macros that were imported from another crate. For example, libcollections defines a vec! macro, which would now look like:

#[macro_export]
macro_rules! vec {
    ($($e:expr),*) => ({
        let mut _temp = $crate::vec::Vec::new();
        $(_temp.push($e);)*
        _temp
    })
}

Currently, libstd duplicates this macro in its own macros.rs. Now it could do

#[macro_reexport(vec)]
extern crate collections;

as long as the module std::vec is interface-compatible with collections::vec.

(Actually the current libstd vec! is completely different for efficiency, but it’s just an example.)

Because macros are exported in crate metadata as strings, macro re-export “just works” as soon as $crate is available. It’s implemented as part of the $crate branch mentioned above.

#[plugin] attribute

#[phase(plugin)] becomes simply #[plugin] and is still feature-gated. It only controls whether to search for and run a plugin registrar function. The plugin itself will decide whether it’s to be linked at runtime, by calling a Registry method.

#[plugin] can optionally take any meta items as “arguments”, e.g.

#[plugin(foo, bar=3, baz(quux))]
extern crate myplugin;

rustc itself will not interpret these arguments, but will make them available to the plugin through a Registry method. This facilitates plugin configuration. The alternative in many cases is to use interacting side effects between procedural macros, which are harder to reason about.

Syntax convention

macro_rules! already allows { } for the macro body, but the convention is ( ) for some reason. In accepting this RFC we would change to a { } convention for consistency with the rest of the language.

Reserve macro as a keyword

A lot of the syntax alternatives discussed for this RFC involved a macro keyword. The consensus is that macros are too unfinished to merit using the keyword now. However, we should reserve it for a future macro system.

Implementation and transition

I will coordinate implementation of this RFC, and I expect to write most of the code myself.

To ease the transition, we can keep the old syntax as a deprecated synonym, to be removed before 1.0.

Drawbacks

This is big churn on a major feature, not long before 1.0.

We can ship improved versions of macro_rules! in a back-compatible way (in theory; I would like to smoke test this idea before 1.0). So we could defer much of this reform until after 1.0. The main reason not to is macro import/export. Right now every macro you import will be expanded using your local copy of macro_rules!, regardless of what the macro author had in mind.

Alternatives

We could try to implement proper hygienic capture of crate names in macros. This would be wonderful, but I don’t think we can get it done for 1.0.

We would have to actually parse the macro RHS when it’s defined, find all the paths it wants to emit (somehow), and then turn each crate reference within such a path into a globally unique thing that will still work when expanded in another crate. Right now libsyntax is oblivious to librustc’s name resolution rules, and those rules can’t be applied until macro expansion is done, because (for example) a macro can expand to a use item.

nrc suggested dropping the #![macro_escape] functionality as part of this reform. Two ways this could work out:

  • All macros are visible throughout the crate. This seems bad; I depend on module scoping to stay (marginally) sane when working with macros. You can have private helper macros in two different modules without worrying that the names will clash.

  • Only macros at the crate root are visible throughout the crate. I’m also against this because I like keeping lib.rs as a declarative description of crates, modules, etc. without containing any actual code. Forcing the user’s hand as to which file a particular piece of code goes in seems un-Rusty.

Unresolved questions

Should we forbid $crate in non-exported macros? It seems useless, however I think we should allow it anyway, to encourage the habit of writing $crate:: for any references to the local crate.

Should #[macro_reexport] support the “glob” behavior of #[macro_use] with no names listed?

Acknowledgements

This proposal is edited by Keegan McAllister. It has been refined through many engaging discussions with:

  • Brian Anderson, Shachaf Ben-Kiki, Lars Bergstrom, Nick Cameron, John Clements, Alex Crichton, Cathy Douglass, Steven Fackler, Manish Goregaokar, Dave Herman, Steve Klabnik, Felix S. Klock II, Niko Matsakis, Matthew McPherrin, Paul Stansifer, Sam Tobin-Hochstadt, Erick Tryzelaar, Aaron Turon, Huon Wilson, Brendan Zabarauskas, Cameron Zwarich
  • GitHub: @bill-myers @blaenk @comex @glaebhoerl @Kimundi @mitchmindtree @mitsuhiko @P1Start @petrochenkov @skinner
  • Reddit: gnusouth ippa !kibwen Mystor Quxxy rime-frost Sinistersnare tejp UtherII yigal100
  • IRC: bstrie ChrisMorgan cmr Earnestly eddyb tiffany

My apologies if I’ve forgotten you, used an un-preferred name, or accidentally categorized you as several different people. Pull requests are welcome :)

Summary

I propose altering the Send trait as proposed by RFC #17 as follows:

  • Remove the implicit 'static bound from Send.
  • Make &T Send if and only if T is Sync.
    impl<'a, T> !Send for &'a T {}
    
    unsafe impl<'a, T> Send for &'a T where T: Sync + 'a {}
  • Evaluate each Send bound currently in libstd and either leave it as-is, add an explicit 'static bound, or bound it with another lifetime parameter.

Motivation

Currently, Rust has two types that deal with concurrency: Sync and Send

If T is Sync, then &T is threadsafe (that is, can cross task boundaries without data races). This is always true of any type with simple inherited mutability, and it is also true of types with interior mutability that perform explicit synchronization (e.g. Mutex and Arc). By fiat, in safe code all static items require a Sync bound. Sync is most interesting as the proposed bound for closures in a fork-join concurrency model, where the thread running the closure can be guaranteed to terminate before some lifetime 'a, and as one of the required bounds for Arc.

If T is Send, then T is threadsafe to send between tasks. At an initial glance, this type is harder to define. Send currently requires a 'static bound, which excludes types with non-'static references, and there are a few types (notably, Rc and local_data::Ref) that opt out of Send. All static items other than those that are Sync but not Send (in the stdlib this is just local_data::Ref and its derivatives) are Send. Send is most interesting as a required bound for Mutex, channels, spawn(), and other concurrent types and functions.

This RFC is mostly motivated by the challenges of writing a safe interface for fork-join concurrency in current Rust. Specifically:

  • It is not clear what it means for a type to be Sync but not Send. Currently there is nothing in the type system preventing these types from being instantiated. In a fork-join model with a bounded, non-'static lifetime 'a for worker tasks, using a Sync + 'a bound on a closure is the intended way to make sure the operation is safe to run in another thread in parallel with the main thread. But there is no way of preventing the main and worker tasks from concurrently accessing an item that is Sync + NoSend.
  • Because Send has a 'static bound, most concurrency constructs cannot be used if they have any non-static references in them, even in a thread with a bounded lifetime. It seems like there should be a way to extend Send to shorter lifetimes. But naively removing the 'static bound causes memory unsafety in existing APIs like Mutex.

Detailed Design

Proposal

Extend the current meaning of Send in a (mostly) backwards-compatible way that retains memory-safety, but allows for existing concurrent types like Arc and Mutex to be used across non-'static boundaries. Use Send with a bounded lifetime instead of Sync for fork-join concurrency.

The first proposed change is to remove the 'static bound from Send. Without doing this, we would have to write brand new types for fork-join libraries that took Sync bounds but were otherwise identical to the existing implementations. For example, we cannot create a Mutex<Vec<&'a mut uint>> as long as Mutex requires a 'static bound. By itself, though, this causes unsafety. For example, a Mutex<&'a Cell<bool>> does not necessarily actually lock the data in the Cell:

let cell = Cell::new(true);
let ref_ = &cell;
let mutex = Mutex::new(&cell);
ref_.set(false); // Modifying the cell without locking the Mutex.

This leads us to our second refinement. We add the rule that &T is Send if and only if T is Sync; in other words, we disallow Send-ing shared references with a non-threadsafe interior. We do, however, still allow &mut T where T is Send, even if it is not Sync. This is safe because &mut T linearizes access: the only way to access the original data is through the unique reference, so it is safe to send to other threads. Similarly, we allow &T where T is Sync, even if it is not Send, since by the definition of Sync, &T is already known to be threadsafe.
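
This rule is the one Rust ultimately adopted; a compile-time sketch of it in post-1.0 terms:

fn assert_send<T: Send>(_: T) {}

fn main() {
    let x = 1;
    assert_send(&x); // OK: i32 is Sync, so &i32 is Send

    let _c = std::cell::Cell::new(1);
    // assert_send(&_c); // error: Cell<i32> is not Sync, so &Cell<i32> is not Send
}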

Note that this definition of Send is identical to the old definition of Send when restricted to 'static lifetimes in safe code. Since static mut items are not accessible in safe code, and it is not possible to create a safe &'static mut outside of such an item, we know that if T: Send + 'static, it either has only &'static references, or has no references at all. Since 'static references can only be created in static items and literals in safe code, and all static items (and literals) are Sync, we know that any such references are Sync. Thus, our new rule that T must be Sync for &'static T to be Send does not actually remove Send from any existing types. And since T has no &'static mut references, unless any were created in unsafe code, we also know that our rule allowing &'static mut T did not add Send to any new types. We conclude that the second refinement is backwards compatible with the old behavior, provided that old interfaces are updated to require 'static bounds and they did not create unsafe 'static and 'static mut references. But unsafe types like these were already not guaranteed to be threadsafe by Rust’s type system.

Another important note is that with this definition, Send will fulfill the proposed role of Sync in a fork-join concurrency library. At present, to use Sync in a fork-join library one must make the implicit assumption that if T is Sync, T is Send. One might be tempted to codify this by making Sync a subtype of Send. Unfortunately, this is not always the case, though it should be most of the time. A type can be created with &mut methods that are not thread safe, but no &-methods that are not thread safe. An example would be a version of Rc called RcMut. RcMut would have a clone_mut() method that took &mut self and no other clone() method. RcMut could be thread-safely shared provided that a &mut RcMut was not sent to another thread. As long as that invariant was upheld, RcMut could only be cloned in its original thread and could not be dropped while shared (hence, RcMut is Sync) but a mutable reference could not be thread-safely shared, nor could it be moved into another thread (hence, &mut RcMut is not Send, which means that RcMut is not Send). Because &T is Send if T is Sync (per the new definition), adding a Send bound will guarantee that only shared pointers of this type are moved between threads, so our new definition of Send preserves thread safety in the presence of such types.

Finally, we’d hunt through existing instances of Send in Rust libraries and replace them with sensible defaults. For example, the spawn() APIs should all have 'static bounds, preserving current behavior. I don’t think this would be too difficult, but it may be that there are some edge cases here where it’s tricky to determine what the right solution is.

More unusual types

We discussed whether a type with a destructor that manipulated thread-local data could be non-Send even though &mut T was. In general it could not, because you can call a destructor through &mut references (through swap or simply assigning a new value to *x where x: &mut T). It was noted that since &uniq T cannot be dropped, this suggests a role for such types.

Below are some unusual types, proposed by arielb1 and myself, that explain why T: Send does not mean &mut T is threadsafe, and why T: Sync does not imply T: Send. The first type is a bottom type; the second takes self by value (so RcMainTask is not Send but &mut RcMainTask is Send).

Comments from arielb1:

Observe that RcMainTask::main_clone would be unsafe outside the main task.

&mut Xyz and &mut RcMainTask are perfectly fine Send types. However, Xyz is a bottom (can be used to violate memory safety), and RcMainTask is not Send.

#![feature(tuple_indexing)]
use std::rc::Rc;
use std::mem;
use std::kinds::marker;

// Invariant: &mut Xyz always points to a valid C xyz.
// Xyz rvalues don't exist.

// These leak. I *could* wrap a box or arena, but that would
// complicate things.

extern "C" {
    // struct Xyz;
    fn xyz_create() -> *mut Xyz;
    fn xyz_play(s: *mut Xyz);
}

pub struct Xyz(marker::NoCopy);

impl Xyz {
    pub fn new() -> &'static mut Xyz {
        unsafe {
            let x = xyz_create();
            mem::transmute(x)
        }
    }

    pub fn play(&mut self) {
        unsafe { xyz_play(mem::transmute(self)) }
    }
}

// Invariant: only the main task has RcMainTask values

pub struct RcMainTask<T>(Rc<T>);
impl<T> RcMainTask<T> {
    pub fn new(t: T) -> Option<RcMainTask<T>> {
        if on_main_task() {
            Some(RcMainTask(Rc::new(t)))
        } else { None }
    }

    pub fn main_clone(self) -> (RcMainTask<T>, RcMainTask<T>) {
        let new = RcMainTask(self.0.clone());
        (self, new)
    }
}

impl<T> Deref<T> for RcMainTask<T> {
    fn deref(&self) -> &T { &*self.0 }
}

//  - by Sharp

pub struct RcMut<T>(Rc<T>);
impl<T> RcMut<T> {
    pub fn new(t: T) -> RcMut<T> {
        RcMut(Rc::new(t))
    }

    pub fn mut_clone(&mut self) -> RcMut<T> {
        RcMut(self.0.clone())
    }
}

impl<T> Deref<T> for RcMut<T> {
    fn deref(&self) -> &T { &*self.0 }
}

fn on_main_task() -> bool { false /* XXX: implement */ }
fn main() {}

Drawbacks

Libraries get a bit more complicated to write, since you may have to write Send + 'static where previously you just wrote Send.

Alternatives

We could accept the status quo. This would mean that any existing Sync NoSend type like those described above would be unsafe (that is, it would not be possible to write a non-'static closure with the correct bounds to make it safe to use), and it would not be possible to write a type like Arc<T> for a T with a bounded lifetime, as well as other safe concurrency constructs for fork-join concurrency. I do not think this is a good alternative.

We could do as proposed above, but change Sync to be a subtype of Send. Things wouldn’t be too different, but you wouldn’t be able to write types like those discussed above. I am not sure that types like that are actually useful, but even if we did this I think you would usually want to use a Send bound anyway.

We could do as proposed above, but instead of changing Send, create a new type for this purpose. I suppose the advantage of this would be that user code currently using Send as a way to get a 'static bound would not break. However, I don’t think it makes a lot of sense to keep the current Send type around if this is implemented, since the new type should be backwards compatible with it where it was being used semantically correctly.

Unresolved questions

  • Is the new scheme actually safe? I think it is, but I certainly haven’t proved it.

  • Can this wait until after Rust 1.0, if implemented? I think it is backwards incompatible, but I believe it will also be much easier to implement once opt-in kinds are fully implemented.

  • Is this actually necessary? I’ve asserted that I think it’s important to be able to do the same things in bounded-lifetime threads that you can in regular threads, but it may be that it isn’t.

  • Are types that are Sync and NoSend actually useful?

Summary

Disallow type/lifetime parameter shadowing.

Motivation

Today we allow type and lifetime parameters to be shadowed. This is a common source of bugs as well as confusing errors. An example of such a confusing case is:

struct Foo<'a> {
    x: &'a int
}

impl<'a> Foo<'a> {
    fn set<'a>(&mut self, v: &'a int) {
        self.x = v;
    }
}

fn main() { }

In this example, the lifetime parameter 'a is shadowed on the method, leading to two logically distinct lifetime parameters with the same name. This then leads to the error message:

mismatched types: expected `&'a int`, found `&'a int` (lifetime mismatch)

which is obviously completely unhelpful.
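
The code the programmer almost certainly intended drops the method-level parameter so that v shares the impl’s 'a (shown with i32 in place of the old int):

struct Foo<'a> {
    x: &'a i32,
}

impl<'a> Foo<'a> {
    // No shadowing: `v` must now live as long as the struct's `'a`.
    fn set(&mut self, v: &'a i32) {
        self.x = v;
    }
}

fn main() {}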

Similar errors can occur with type parameters:

struct Foo<T> {
    x: T
}

impl<T> Foo<T> {
    fn set<T>(&mut self, v: T) {
        self.x = v;
    }
}

fn main() { }

Compiling this program yields:

mismatched types: expected `T`, found `T` (expected type parameter, found a different type parameter)

Here the error message was improved by a recent PR, but this is still a somewhat confusing situation.

Anecdotally, this kind of accidental shadowing is a fairly frequent occurrence. It recently arose on this discuss thread, for example.

Detailed design

Disallow shadowed type/lifetime parameter declarations. An error would be reported by the resolve/resolve-lifetime passes in the compiler and hence fairly early in the pipeline.

Drawbacks

We otherwise allow shadowing, so it is inconsistent.

Alternatives

We could use a lint instead. However, we’d want to ensure that the lint error messages were printed before type-checking begins. We could do this, perhaps, by running the lint printing pass multiple times. This might be useful in any case as the placement of lints in the compiler pipeline has proven problematic before.

We could also attempt to improve the error messages. Doing so for lifetimes is definitely important in any case, but also somewhat tricky due to the extensive inference. It is usually easier and more reliable to help avoid the error in the first place.

Unresolved questions

None.

Summary

Introduce a new thread local storage module to the standard library, std::tls, providing:

  • Scoped TLS, a non-owning variant of TLS for any value.
  • Owning TLS, an owning, dynamically initialized, dynamically destructed variant, similar to std::local_data today.

Motivation

In the past, the standard library’s answer to thread local storage was the std::local_data module. This module was designed based on the Rust task model where a task could be either a 1:1 or M:N task. This design constraint has since been lifted, allowing for easier solutions to some of the current drawbacks of the module. While redesigning std::local_data, it can also be scrutinized to see how it holds up to modern-day Rust style, guidelines, and conventions.

In general the amount of work being scheduled for 1.0 is being trimmed down as much as possible, especially new work in the standard library that isn’t focused on cutting back what we’re shipping. Thread local storage, however, is such a critical part of many applications and opens many doors to interesting sets of functionality that this RFC sees fit to try and wedge it into the schedule. The current std::local_data module simply doesn’t meet the requirements of what one may expect out of a TLS implementation for a language like Rust.

Current Drawbacks

Today’s implementation of thread local storage, std::local_data, suffers from a few drawbacks:

  • The implementation is not super speedy, and it is unclear how to enhance the existing implementation to be on par with OS-based TLS or #[thread_local] support. As an example, today a lookup takes O(log N) time where N is the number of set TLS keys for a task.

    This drawback is also not to be taken lightly. TLS is a fundamental building block for rich applications and libraries, and an inefficient implementation will only deter usage of an otherwise quite useful construct.

  • The types which can be stored into TLS are not maximally flexible. Currently only types which ascribe to 'static can be stored into TLS. It’s often the case that a type with references needs to be placed into TLS for a short period of time, however.

  • The interactions between TLS destructors and TLS itself are not currently very well specified, and they can easily lead to difficult-to-debug runtime panics or undocumented leaks.

  • The implementation currently assumes a local Task is available. Once the runtime removal is complete, this will no longer be a valid assumption.

Current Strengths

There are, however, a few pros to the usage of the module today which should be required for any replacement:

  • All platforms are supported.
  • std::local_data allows consuming ownership of data, allowing it to live past the current stack frame.

Building blocks available

There are currently two primary building blocks available to Rust when building a thread local storage abstraction, #[thread_local] and OS-based TLS. Neither of these is currently used for std::local_data, but both are generally seen as “adequately efficient” implementations of TLS. For example, a TLS access of a #[thread_local] global is simply a pointer offset, which is quite speedy compared to an O(log N) lookup!
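For illustration, the first building block looks roughly like this (a sketch; the attribute was feature-gated at the time):

#![feature(thread_local)]

#[thread_local]
static mut COUNTER: uint = 0;

fn bump() -> uint {
    // Each thread sees its own COUNTER; the access compiles down to a
    // pointer offset rather than a per-task map lookup.
    unsafe { COUNTER += 1; COUNTER }
}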

With these available, this RFC is motivated in redesigning TLS to make use of these primitives.

Detailed design

Three new modules will be added to the standard library:

  • The std::sys::tls module provides platform-agnostic bindings to the OS-based TLS support. This support is intended to be used only in otherwise unsafe code, as it supports getting and setting a *mut u8 parameter only.

  • The std::tls module provides a dynamically initialized and dynamically destructed variant of TLS. This is very similar to the current std::local_data module, except that the implicit Option<T> is not mandated as an initialization expression is required.

  • The std::tls::scoped module provides a flavor of TLS which can store a reference to any type T for a scoped period of time. This is a variant of TLS not provided today. The backing idea is that if a reference only lives in TLS for a fixed period of time, then there’s no need for TLS to consume ownership of the value itself.

    This pattern of TLS is quite common throughout the compiler’s own usage of std::local_data and is often more expressive, as no dances are required to move a value into and out of TLS.

The design described below can be found as an existing cargo package: https://github.com/alexcrichton/tls-rs.

The OS layer

While LLVM has support for #[thread_local] statics, this feature is not supported on all platforms that LLVM can target. Almost all platforms, however, provide some form of OS-based TLS. For example Unix normally comes with pthread_key_create while Windows comes with TlsAlloc.

This RFC proposes introducing a std::sys::tls module which contains bindings to the OS-based TLS mechanism. This corresponds to the os module in the example implementation. While not currently public, the contents of sys are slated to become public over time, and the API of the std::sys::tls module will undergo API stabilization at that time.

This module will support “statically allocated” keys as well as dynamically allocated keys. A statically allocated key will actually allocate a key on first use.
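The surface of this layer might look roughly as follows (a sketch; the exact names are assumptions, not part of this RFC’s text):

pub struct Key { /* OS-based TLS key, allocated on first use if "static" */ }

impl Key {
    /// Returns the value last set on the calling thread, or NULL.
    pub unsafe fn get(&self) -> *mut u8 { /* pthread_getspecific / TlsGetValue */ }

    /// Stores a value visible only to the calling thread.
    pub unsafe fn set(&self, value: *mut u8) { /* pthread_setspecific / TlsSetValue */ }
}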

Destructor support

The major difference between Unix and Windows TLS support is that Unix supports a destructor function for each TLS slot while Windows does not. When each Unix TLS key is created, an optional destructor is specified. If any key has a non-NULL value when a thread exits, the destructor is then run on that value.

One possibility for this std::sys::tls module would be to not provide destructor support at all (least common denominator), but this RFC proposes implementing destructor support for Windows to ensure that functionality is not lost when writing Unix-only code.

Destructor support for Windows will be provided through a custom implementation of tracking known destructors for TLS keys.

Scoped TLS

As discussed before, one of the motivations for this RFC is to provide a method of inserting any value into TLS, not just those that ascribe to 'static. This provides maximal flexibility in storing values into TLS to ensure any “thread local” pattern can be encompassed.

Values which do not adhere to 'static contain references with a constrained lifetime, and can therefore not be moved into TLS. They can, however, be borrowed by TLS. This scoped TLS API provides the ability to insert a reference for a particular period of time, and then a non-escaping reference can be extracted at any time later on.

In order to implement this form of TLS, a new module, std::tls::scoped, will be added. It will be coupled with a scoped_tls! macro in the prelude. The API looks like:

/// Declares a new scoped TLS key. The keyword `static` is required in front to
/// emphasize that a `static` item is being created. There is no initializer
/// expression because this key initially contains no value.
///
/// A `pub` variant is also provided to generate a public `static` item.
macro_rules! scoped_tls(
    (static $name:ident: $t:ty) => (/* ... */);
    (pub static $name:ident: $t:ty) => (/* ... */);
)

/// A structure representing a scoped TLS key.
///
/// This structure cannot be created dynamically, and it is accessed via its
/// methods.
pub struct Key<T> { /* ... */ }

impl<T> Key<T> {
    /// Insert a value into this scoped TLS slot for a duration of a closure.
    ///
    /// While `cb` is running, the value `t` will be returned by `get` unless
    /// this function is called recursively inside of cb.
    ///
    /// Upon return, this function will restore the previous TLS value, if any
    /// was available.
    pub fn set<R>(&'static self, t: &T, cb: || -> R) -> R { /* ... */ }

    /// Get a value out of this scoped TLS variable.
    ///
    /// This function takes a closure which receives the value of this TLS
    /// variable, if any is available. If this variable has not yet been set,
    /// then None is yielded.
    pub fn with<R>(&'static self, cb: |Option<&T>| -> R) -> R { /* ... */ }
}

The purpose of this module is to enable the ability to insert a value into TLS for a scoped period of time. While able to cover many TLS patterns, this flavor of TLS is not comprehensive, motivating the owning variant of TLS.
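For illustration, usage of the API above might look like the following sketch (using the closure syntax of the time):

scoped_tls!(static DEPTH: uint)

fn main() {
    let depth = 0u;
    DEPTH.set(&depth, || {
        // While the closure runs, the key is set...
        DEPTH.with(|d| assert_eq!(d, Some(&0u)));
    });
    // ...and afterwards it is unset again.
    DEPTH.with(|d| assert!(d.is_none()));
}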

Variations

Specifically, the with API can be somewhat unwieldy to use. The with function takes a closure to run, yielding a value to the closure. It is believed that this is required for the implementation to be sound, but it also goes against the “use RAII everywhere” principle found elsewhere in the stdlib.

Additionally, the with function is more commonly called get for accessing a contained value in the stdlib. The name with is recommended because it may be possible in the future to express a get function returning a reference with a lifetime bound to the stack frame of the caller, but it is not currently possible to do so.

The with function yields an Option<&T> instead of &T. This is to cover the use case where the key has not been set before it is used via with. This is somewhat unergonomic, however, as it will almost always be followed by unwrap(). An alternative design would be to provide an is_set function and have with panic! instead.

Owning TLS

Although scoped TLS can store any value, it is also limited in that it cannot own a value. This means that TLS values cannot escape the stack frame from which they originated. This is itself another common usage pattern of TLS, and to solve this problem the std::tls module will provide support for placing owned values into TLS.

These values must not contain references, as that could trigger a use-after-free, but otherwise there are no restrictions on placing owned values into TLS. The module will support dynamic initialization (run on first use of the variable) as well as dynamic destruction (for implementors of Drop).

The interface provided will be similar to what std::local_data provides today, except that the replace function has no analog (it would be written with a RefCell<Option<T>>).

/// Similar to the `scoped_tls!` macro, except allows for an initializer
/// expression as well.
macro_rules! tls(
    (static $name:ident: $t:ty = $init:expr) => (/* ... */);
    (pub static $name:ident: $t:ty = $init:expr) => (/* ... */);
)

pub struct Key<T: 'static> { /* ... */ }

impl<T: 'static> Key<T> {
    /// Access this TLS variable, lazily initializing it if necessary.
    ///
    /// The first time this function is called on each thread the TLS key will
    /// be initialized by having the specified init expression evaluated on the
    /// current thread.
    ///
    /// This function can return `None` if the key is currently being
    /// destroyed or its destructor may have already run (see the
    /// “Destructors” section below).
    pub fn with<R>(&'static self, f: |Option<&T>| -> R) -> R { /* ... */ }
}
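For illustration, usage might look like the following sketch; mutation goes through a RefCell, standing in for the removed replace function:

use std::cell::RefCell;

tls!(static COUNTER: RefCell<uint> = RefCell::new(0))

fn bump() {
    COUNTER.with(|slot| {
        match slot {
            Some(counter) => { *counter.borrow_mut() += 1; }
            None => {} // destructors are running; the key is inaccessible
        }
    });
}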

Destructors

One of the major points about this implementation is that it allows for values with destructors, meaning that destructors must be run when a thread exits. This is similar to placing a value with a destructor into std::local_data. This RFC attempts to refine the story around destructors:

  • A TLS key cannot be accessed while its destructor is running. This is currently manifested with the Option return value.
  • A TLS key may not be accessible after its destructor has run.
  • Re-initializing TLS keys during destruction may cause memory leaks (e.g. setting the key FOO during the destructor of BAR, and initializing BAR in the destructor of FOO). An implementation will strive to destruct initialized keys whenever possible, but it may also result in a memory leak.
  • A panic! in a TLS destructor will result in a process abort. This is similar to a double-failure.

These semantics are still a little unclear, and the final behavior may still need some more hammering out. The sample implementation suffers from a few extra drawbacks, but it is believed that some more implementation work can overcome some of the minor downsides.

Variations

Like the scoped TLS variation, this key has a with function instead of the normally expected get function (returning a reference). One possible alternative would be to yield &T instead of Option<&T> and panic! if the variable has been destroyed. Another possible alternative is to have a get function returning a Ref<T>. Currently this is unsafe, however, as there is no way to ensure that Ref<T> does not satisfy 'static. If the returned reference satisfies 'static, then it’s possible for TLS values to reference each other after one has been destroyed, causing a use-after-free.

Drawbacks

  • There is no variant of TLS for statically initialized data. Currently the std::tls module requires dynamic initialization, which means a slight penalty is paid on each access (a check to see if it’s already initialized).
  • The specification of destructors on owned TLS values is still somewhat shaky at best. It’s possible to leak resources in unsafe code, and it’s also possible to have different behavior across platforms.
  • Due to the usage of macros for initialization, all fields of Key in all scenarios must be public. Note that os is exempt because its initializers are constants.
  • This implementation, while declared safe, is not safe for systems that do any form of multiplexing of many threads onto one thread (aka green tasks or greenlets). This RFC considers it the multiplexing systems’ responsibility to maintain native TLS if necessary, or otherwise strongly recommend not using native TLS.

Alternatives

Alternatives on the API can be found in the “Variations” sections above.

Some other alternatives might include:

  • A 0-cost abstraction over #[thread_local] and OS-based TLS which does not have support for destructors but requires static initialization. Note that this variant still needs destructor support somehow because OS-based TLS values must be pointer-sized, implying that the rust value must itself be boxed (whereas #[thread_local] can support any type of any size).

  • A variant of the tls! macro could be used where dynamic initialization is opted out of because it is not necessary for a particular use case.

  • A previous PR from @thestinger leveraged macros more heavily than this RFC and provided statically constructible Cell and RefCell equivalents via the usage of transmute. The implementation provided did not, however, include the scoped form of this RFC.

Unresolved questions

  • Are the questions around destructors vague enough to warrant the get method being unsafe on owning TLS?
  • Should the APIs favor panic!-ing internally, or exposing an Option?
  • Start Date: 2014-09-28
  • RFC PR: #463
  • Rust Issue: #19088

Summary

Include identifiers immediately after literals in the literal token to allow future expansion, e.g. "foo"bar and 1baz are considered whole (but semantically invalid) tokens, rather than two separate tokens "foo", bar and 1, baz respectively. This allows future expansion of literal handling without risking breaking (macro) code.

Motivation

Currently a few kinds of literals (integers and floats) can have a fixed set of suffixes and other kinds do not include any suffixes. The valid suffixes on numbers are:

u, u8, u16, u32, u64
i, i8, i16, i32, i64
f32, f64

Most things not in this list are just ignored and treated as an entirely separate token (text that merely begins like a valid suffix is an error: e.g. 1u12 gives the error "invalid int suffix"), and similarly any suffixes on other literals are also separate tokens. For example:

#![feature(macro_rules)]

// makes a tuple
macro_rules! foo( ($($a: expr)*) => { ($($a, )*) } )

fn main() {
    let bar = "suffix";
    let y = "suffix";

    let t: (uint, uint) = foo!(1u256);
    println!("{}", foo!("foo"bar));
    println!("{}", foo!('x'y));
}
/*
output:
(1, 256)
(foo, suffix)
(x, suffix)
*/

The compiler is eating the 1u and then seeing the invalid suffix 256 and so treating that as a separate token, and similarly for the string and character literals. (This problem is only visible in macros, since that is the only place where two literals/identifiers can be placed directly adjacent.)

This behaviour means we would be unable to expand the possibilities for literals after freezing the language/macros, which would be unfortunate, since user defined literals in C++ are reportedly very nice, proposals for “bit data” would like to use types like u1 and u5 (e.g. RFC PR 327), and there are “fringe” types like f16, f128 and u128 that have uses but are not common enough to warrant adding to the language now.

Detailed design

The tokenizer will have grammar literal: raw_literal identifier? where raw_literal covers strings, characters and numbers without suffixes (e.g. "foo", 'a', 1, 0x10).

Examples of “valid” literals after this change (that is, entities that will be consumed as a single token):

"foo"bar "foo"_baz
'a'x 'a'_y

15u16 17i18 19f20 21.22f23
0b11u25 0x26i27 28.29e30f31

123foo 0.0bar

Placing a space between the literal and the suffix will cause them to be parsed as two separate tokens, just like today. That is, "foo"bar is one token, while "foo" bar is two tokens.

The example above would then be an error, something like:

    let t: (uint, uint) = foo!(1u256); // error: literal with unsupported size
    println!("{}", foo!("foo"bar)); // error: literal with unsupported suffix
    println!("{}", foo!('x'y)); // error: literal with unsupported suffix

The above demonstrates that numeric suffixes could be special cased to detect u<...> and i<...> to give more useful error messages.

(The macro example there is definitely an error because it is using the incorrectly-suffixed literals as exprs. If it was only handling them as a token, i.e. tt, there is the possibility that it wouldn’t have to be illegal, e.g. stringify!(1u256) doesn’t have to be illegal because the 1u256 never occurs at runtime/in the type system.)

Drawbacks

None beyond outlawing placing a literal immediately before an identifier or another literal, but the current behaviour can easily be restored with a space: 123u 456. (If a macro is using this for the purpose of hacky generalised literals, the unresolved question below touches on this.)

Alternatives

Don’t do this, or consider doing it for adjacent suffixes with an alternative syntax, e.g. 10'bar or 10$bar.

Unresolved questions

  • Should it be the parser or the tokenizer rejecting invalid suffixes? This is effectively asking whether it is legal for syntax extensions to be passed the raw literals; that is, can a foo procedural syntax extension accept and handle literals like foo!(1u2)?

  • Should this apply to all expressions, e.g. (1 + 2)bar?

Summary

Move box patterns behind a feature gate.

Motivation

A recent RFC (https://github.com/rust-lang/rfcs/pull/462) proposed renaming box patterns to deref. The discussion that followed indicates that while the language community may be in favour of some sort of renaming, there is no significant consensus around any concrete proposal, including the original one or any that emerged from the discussion.

This RFC proposes moving box patterns behind a feature gate to postpone that discussion and decision to when it becomes more clear how box patterns should interact with types other than Box.

In addition, in the future box patterns are expected to be made more general by enabling them to destructure any type that implements one of the Deref family of traits. As such a generalisation may potentially lead to some currently valid programs being rejected due to the interaction with type inference or other language features, it is desirable that this particular feature stays feature gated until then.

Detailed design

A feature gate box_patterns will be defined and all uses of the box pattern will require said gate to be enabled.
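For illustration, code using a box pattern would then have to opt in explicitly (a minimal sketch in the era’s syntax):

#![feature(box_patterns)]

fn main() {
    let b = box 5i;
    match b {
        // Destructures through the Box, moving its contents out.
        box n => assert_eq!(n, 5),
    }
}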

Drawbacks

Some currently valid Rust programs will have to opt in to another feature gate.

Alternatives

Pursue https://github.com/rust-lang/rfcs/pull/462 before 1.0 and stabilise box patterns without a feature gate.

Leave box patterns as-is without putting them behind a feature gate.

Unresolved questions

None.

Summary

This RFC reforms the design of the std::path module in preparation for API stabilization. The path API must deal with many competing demands, and the current design handles many of them, but suffers from some significant problems given in “Motivation” below. The RFC proposes a redesign modeled loosely on the current API that addresses these problems while maintaining the advantages of the current design.

Motivation

The design of a path abstraction is surprisingly hard. Paths work radically differently on different platforms, so providing a cross-platform abstraction is challenging. On some platforms, paths are not required to be in Unicode, posing ergonomic and semantic difficulties for a Rust API. These difficulties are compounded if one also tries to provide efficient path manipulation that does not, for example, require extraneous copying. And, of course, the API should be easy and pleasant to use.

The current std::path module makes a strong effort to balance these design constraints, but over time a few key shortcomings have emerged.

Semantic problems

Most importantly, the current std::path module makes some semantic assumptions about paths that have turned out to be incorrect.

Normalization

Paths in std::path are always normalized, meaning that a/../b is treated like b (among other things). Unfortunately, this kind of normalization changes the meaning of paths when symbolic links are present: if a is a symbolic link, then the relative paths a/../b and b may refer to completely different locations. See this issue for more detail.

For this reason, most path libraries do not perform full normalization of paths, though they may normalize paths like a/./b to a/b. Instead, they offer (1) methods to optionally normalize and (2) methods to normalize based on the contents of the underlying file system.

Since our current normalization scheme can silently and incorrectly alter the meaning of paths, it needs to be changed.

Unicode and Windows

In the original std::path design, it was assumed that all paths on Windows were Unicode. However, it turns out that the Windows filesystem APIs actually work with UCS-2, which roughly means that they accept arbitrary sequences of u16 values but interpret them as UTF-16 when it is valid to do so.

The current std::path implementation is built around the assumption that Windows paths can be represented as Rust string slices, and will need to be substantially revised.

Ergonomic problems

Because paths in general are not in Unicode, the std::path module cannot rely on an internal string or string slice representation. That in turn causes trouble for methods like dirname that are intended to extract a subcomponent of a path – what should they return?

There are basically three possible options, and today’s std::path module chooses all of them:

  • Yield a byte sequence: dirname yields an &[u8]
  • Yield a string slice, accounting for potential non-UTF-8 values: dirname_str yields an Option<&str>
  • Yield another path: dir_path yields a Path

This redundancy is present for most of the decomposition methods. The saving grace is that, in general, path methods consume BytesContainer values, so one can use the &[u8] variant but continue to work with other path methods. But in general &[u8] values are not ergonomic to work with, and the explosion in methods makes the module more (superficially) complex than one might expect.

You might be tempted to provide only the third option, but Path values are owned and mutable, so that would imply cloning on every decomposition operation. For applications like Cargo that work heavily with paths, this would be an unfortunate (and seemingly unnecessary) overhead.

Organizational problems

Finally, the std::path module presents a somewhat complex API organization:

  • The Path type is a direct alias of a platform-specific path type.
  • The GenericPath trait provides most of the common API expected on both platforms.
  • The GenericPathUnsafe trait provides a few unsafe/unchecked functions for performance reasons.
  • The posix and windows submodules provide their own Path types and a handful of platform-specific functionality (in particular, windows provides support for working with volumes and “verbatim” paths prefixed with \\?\)

This organization needs to be updated to match current conventions and simplified if possible.

One thing to note: with the current organization, it is possible to work with non-native paths, which can sometimes be useful for interoperation. The new design should retain this functionality.

Detailed design

Note: this design is influenced by the Boost filesystem library and Scheme48 and Racket’s approach to encoding issues on windows.

Overview

The basic design uses DST to follow the same pattern as Vec<T>/[T] and String/str: there is a PathBuf type for owned, mutable paths and an unsized Path type for slices. The various “decomposition” methods for extracting components of a path all return slices, and PathBuf itself derefs to Path.

The result is an API that is both efficient and ergonomic: there is no need to allocate/copy when decomposing a path, but there is also no need to provide multiple variants of methods to extract bytes versus Unicode strings. For example, the Path slice type provides a single method for converting to a str slice (when applicable).

A key aspect of the design is that there is no internal normalization of paths at all. Aside from solving the symbolic link problem, this choice also has useful ramifications for the rest of the API, described below.

The proposed API deals with the other problems mentioned above, and also brings the module in line with current Rust patterns and conventions. These details will be discussed after getting a first look at the core API.

The cross-platform API

The proposed core, cross-platform API provided by the new std::path is as follows:

// A sized, owned type akin to String:
pub struct PathBuf { .. }

// An unsized slice type akin to str:
pub struct Path { .. }

// Some ergonomics and generics, following the pattern in String/str and Vec<T>/[T]
impl Deref<Path> for PathBuf { ... }
impl BorrowFrom<PathBuf> for Path { ... }

// A replacement for BytesContainer; used to cut down on explicit coercions
pub trait AsPath for Sized? {
    fn as_path(&self) -> &Path;
}

impl<Sized? P> PathBuf where P: AsPath {
    pub fn new<T: IntoString>(path: T) -> PathBuf;

    pub fn push(&mut self, path: &P);
    pub fn pop(&mut self) -> bool;

    pub fn set_file_name(&mut self, file_name: &P);
    pub fn set_extension(&mut self, extension: &P);
}

// These will ultimately replace the need for `push_many`
impl<Sized? P> FromIterator<P> for PathBuf where P: AsPath { .. }
impl<Sized? P> Extend<P> for PathBuf where P: AsPath { .. }

impl<Sized? P> Path where P: AsPath {
    pub fn new(path: &str) -> &Path;

    pub fn as_str(&self) -> Option<&str>
    pub fn to_str_lossy(&self) -> Cow<String, str>; // Cow will replace MaybeOwned
    pub fn to_owned(&self) -> PathBuf;

    // iterate over the components of a path
    pub fn iter(&self) -> Iter;

    pub fn is_absolute(&self) -> bool;
    pub fn is_relative(&self) -> bool;
    pub fn is_ancestor_of(&self, other: &P) -> bool;

    pub fn path_relative_from(&self, base: &P) -> Option<PathBuf>;
    pub fn starts_with(&self, base: &P) -> bool;
    pub fn ends_with(&self, child: &P) -> bool;

    // The "root" part of the path, if absolute
    pub fn root_path(&self) -> Option<&Path>;

    // The "non-root" part of the path
    pub fn relative_path(&self) -> &Path;

    // The "directory" portion of the path
    pub fn dir_path(&self) -> &Path;

    pub fn file_name(&self) -> Option<&Path>;
    pub fn file_stem(&self) -> Option<&Path>;
    pub fn extension(&self) -> Option<&Path>;

    pub fn join(&self, path: &P) -> PathBuf;

    pub fn with_file_name(&self, file_name: &P) -> PathBuf;
    pub fn with_extension(&self, extension: &P) -> PathBuf;
}

pub struct Iter<'a> { .. }

impl<'a> Iterator<&'a Path> for Iter<'a> { .. }

pub const SEP: char = ..
pub const ALT_SEPS: &'static [char] = ..

pub fn is_separator(c: char) -> bool { .. }

There is plenty of overlap with today’s API, and the methods being retained here largely have the same semantics.

But there are also a few potentially surprising aspects of this design that merit comment:

  • Why does PathBuf::new take IntoString? It needs an owned buffer internally, and taking a string means that Unicode input is guaranteed, which works on all platforms. (In general, the assumption is that non-Unicode paths are most commonly produced by reading a path from the filesystem, rather than creating new ones. As we’ll see below, there are platform-specific ways to create non-Unicode paths.)

  • Why no Path::as_bytes method? There is no cross-platform way to expose paths directly in terms of byte sequences, because each platform extends beyond Unicode in its own way. In particular, Unix platforms accept arbitrary u8 sequences, while Windows accepts arbitrary u16 sequences (both modulo disallowing interior 0s). The u16 sequences provided by Windows do not have a canonical encoding as bytes; this RFC proposes to use WTF-8 (see below), but does not reveal that choice.

  • What about interior nulls? Currently various Rust system APIs will panic when given strings containing interior null values because, while these are valid UTF-8, it is not possible to send them as-is to C APIs that expect null-terminated strings. The API here follows the same approach, panicking if given a path with an interior null.

  • Why do file_name and extension operations work with Path rather than some other type? In particular, it may seem strange to view an extension as a path. But doing so allows us to not reveal platform differences about the various character sets used in paths. By and large, extensions in practice will be valid Unicode, so the various methods going to and from str will suffice. But as with paths in general, there are platform-specific ways of working with non-Unicode data, explained below.

  • Where did push_many and friends go? They’re replaced by implementing FromIterator and Extend, following a similar pattern with the Vec type. (Some work will be needed to retain full efficiency when doing so.)

  • How does Path::new work? The ability to directly get a &Path from an &str (i.e., with no allocation or other work) is a key part of the representation choices, which are described below.

  • Where is the normalize method? Since the path type no longer internally normalizes, it may be useful to explicitly request normalization. This can be done by writing let normalized: PathBuf = p.iter().collect() for a path p, because the iterator performs some on-the-fly normalization (see below). Note that this normalization does not include removing .., for the reasons explained at the beginning of the RFC.

  • What does the iterator yield? Unlike today’s components, the iter method here will begin with root_path if there is one. Thus, a/b/c will yield a, b and c, while /a/b/c will yield /, a, b and c.

Important semantic rules

The path API is designed to satisfy several semantic rules described below. Note that == here is lazily normalizing, treating ./b as b and a//b as a/b; see the next section for more details.

Suppose p is some &Path and dot == Path::new("."):

p == p.join(dot)
p == dot.join(p)

p == p.root_path().unwrap_or(dot)
      .join(p.relative_path())

p.relative_path() == match p.root_path() {
    None => p,
    Some(root) => p.path_relative_from(root).unwrap()
}

p == p.dir_path()
      .join(p.file_name().unwrap_or(dot))

p == p.iter().collect()

p == match p.file_name() {
    None => p,
    Some(name) => p.with_file_name(name)
}

p == match p.extension() {
    None => p,
    Some(ext) => p.with_extension(ext)
}

p == match (p.file_stem(), p.extension()) {
    (Some(stem), Some(ext)) => p.with_file_name(stem).with_extension(ext),
    _ => p
}

Representation choices, Unicode, and normalization

A lot of the design in this RFC depends on a key property: both Unix and Windows paths can be easily represented as a flat byte sequence “compatible” with UTF-8. For Unix platforms, this is trivial: they accept any byte sequence, and will generally interpret the byte sequences as UTF-8 when valid to do so. For Windows, this representation involves a clever hack – proposed formally as WTF-8 – that encodes its native UCS-2 in a generalization of UTF-8. This RFC will not go into the details of that hack; please read Simon’s excellent writeup if you’re interested.

The upshot of all of this is that we can uniformly represent path slices as newtyped byte slices, and any UTF-8 encoded data will “do the right thing” on all platforms.

Furthermore, by not doing any internal, up-front normalization, it’s possible to provide a Path::new that goes from &str to &Path with no intermediate allocation or validation. In the common case that you’re working with Rust strings to construct paths, there is zero overhead. It also means that Path::new(some_str).as_str() == Some(some_str).

The main downside of this choice is that some of the path functionality must cope with non-normalized paths. So, for example, the iterator must skip . path components (unless it is the entire path), and similarly for methods like pop. In general, methods that yield new path slices are expected to work as if:

  • ./b is just b
  • a//b is just a/b

and comparisons between paths should also behave as if the paths had been normalized in this way.
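Concretely, under these rules something like the following would hold (a hypothetical sketch against the proposed API):

let a = Path::new("a/./b//c");
let b = Path::new("a/b/c");

// Comparison behaves as if both paths had been normalized:
assert!(a == b);

// The iterator skips `.` components and collapses `//`, so collecting
// it produces a normalized path:
let normalized: PathBuf = a.iter().collect();
assert_eq!(normalized.as_str(), Some("a/b/c"));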

Organization and platform-specific APIs

Finally, the proposed API is organized as std::path with unix and windows submodules, as today. However, there is no GenericPath or GenericPathUnsafe; instead, the API given above is implemented as a trivial wrapper around path implementations provided by either the unix or the windows submodule (based on #[cfg]). In other words:

  • std::path::windows::Path works with Windows-style paths
  • std::path::unix::Path works with Unix-style paths
  • std::path::Path is a thin newtype wrapper around the current platform’s path implementation

This organization makes it possible to manipulate foreign paths by working with the appropriate submodule.

In addition, each submodule defines some extension traits, explained below, that supplement the path API with functionality relevant to its variant of path.

But what if you’re writing a platform-specific application and wish to use the extended functionality directly on std::path::Path? In this case, you will be able to import the appropriate extension trait via os::unix or os::windows, depending on your platform. This is part of a new, general strategy for explicitly “opting-in” to platform-specific features by importing from os::some_platform (where the some_platform submodule is available only on that platform.)

Unix

On Unix platforms, the only additional functionality is to let you work directly with the underlying byte representation of various path types:

pub trait UnixPathBufExt {
    fn from_vec(path: Vec<u8>) -> Self;
    fn into_vec(self) -> Vec<u8>;
}

pub trait UnixPathExt {
    fn from_bytes(path: &[u8]) -> &Self;
    fn as_bytes(&self) -> &[u8];
}

This is acceptable because the platform supports arbitrary byte sequences (usually interpreted as UTF-8).

Windows

On Windows, the additional APIs allow you to convert to/from UCS-2 (roughly, arbitrary u16 sequences interpreted as UTF-16 when applicable); because the name “UCS-2” does not have a clear meaning, these APIs use u16_slice and will be carefully documented. They also provide the remaining Windows-specific path decomposition functionality that today’s path module supports.

pub trait WindowsPathBufExt {
    fn from_u16_slice(path: &[u16]) -> Self;
    fn make_non_verbatim(&mut self) -> bool;
}

pub trait WindowsPathExt {
    fn is_cwd_relative(&self) -> bool;
    fn is_vol_relative(&self) -> bool;
    fn is_verbatim(&self) -> bool;
    fn prefix(&self) -> PathPrefix;
    fn to_u16_slice(&self) -> Vec<u16>;
}

enum PathPrefix<'a> {
    Verbatim(&'a Path),
    VerbatimUNC(&'a Path, &'a Path),
    VerbatimDisk(&'a Path),
    DeviceNS(&'a Path),
    UNC(&'a Path, &'a Path),
    Disk(&'a Path),
}

Drawbacks

The DST/slice approach is conceptually more complex than today’s API, but in practice seems to yield a much tighter API surface.

Alternatives

Due to the known semantic problems, it is not really an option to retain the current path implementation. As explained above, supporting UCS-2 also means that the various byte-slice methods in the current API are untenable, so the API also needs to change.

Probably the main alternative to the proposed API would be to not use DST/slices, and instead use owned paths everywhere (probably doing some normalization of . at the same time). While the resulting API would be simpler in some respects, it would also be substantially less efficient for common operations.

Unresolved questions

It is not clear how best to incorporate the WTF-8 implementation (or how much to incorporate) into libstd.

There has been a long debate over whether paths should implement Show given that they may contain non-UTF-8 data. This RFC does not take a stance on that (the API may include something like today’s display adapter), but a follow-up RFC will address the question more generally.

Summary

Move the std::ascii::Ascii type and related traits to a new Cargo package on crates.io, and instead expose its functionality for u8, [u8], char, and str types.

Motivation

The std::ascii::Ascii type is a u8 wrapper that enforces (unless unsafe code is used) that the value is in the ASCII range, similar to char with u32 in the range of Unicode scalar values, and String with Vec<u8> containing well-formed UTF-8 data. [Ascii] and Vec<Ascii> are naturally strings of text entirely in the ASCII range.

Using the type system like this to enforce data invariants is interesting, but in practice Ascii is not that useful. Data (such as from the network) is rarely guaranteed to be ASCII only, nor is it desirable to remove or replace non-ASCII bytes, even if ASCII-range-only operations are used. (For example, ASCII case-insensitive matching is common in HTML and CSS.)

Every single use of the Ascii type in the Rust distribution is only to use the to_lowercase or to_uppercase method, then immediately convert back to u8 or char.

Detailed design

The Ascii type as well as the AsciiCast, OwnedAsciiCast, AsciiStr, and IntoBytes traits should be copied into a new ascii Cargo package on crates.io. The std::ascii copy should be deprecated and removed at some point before Rust 1.0.

Currently, the AsciiExt trait is:

pub trait AsciiExt<T> {
    fn to_ascii_upper(&self) -> T;
    fn to_ascii_lower(&self) -> T;
    fn eq_ignore_ascii_case(&self, other: &Self) -> bool;
}

impl AsciiExt<String> for str { ... }
impl AsciiExt<Vec<u8>> for [u8] { ... }

It should gain new methods for the functionality that is being removed with Ascii, be implemented for u8 and char, and (if this is stable enough yet) use an associated type instead of the T parameter:

pub trait AsciiExt {
    type Owned = Self;
    fn to_ascii_upper(&self) -> Owned;
    fn to_ascii_lower(&self) -> Owned;
    fn eq_ignore_ascii_case(&self, other: &Self) -> bool;
    fn is_ascii(&self) -> bool;

    // Maybe? See unresolved questions
    fn is_ascii_lowercase(&self) -> bool;
    fn is_ascii_uppercase(&self) -> bool;
    ...
}

impl AsciiExt for str { type Owned = String; ... }
impl AsciiExt for [u8] { type Owned = Vec<u8>; ... }
impl AsciiExt for char { ... }
impl AsciiExt for u8 { ... }

The OwnedAsciiExt trait should stay as it is:

pub trait OwnedAsciiExt {
    fn into_ascii_upper(self) -> Self;
    fn into_ascii_lower(self) -> Self;
}

impl OwnedAsciiExt for String { ... }
impl OwnedAsciiExt for Vec<u8> { ... }
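For illustration, the proposed methods would be used roughly like this (a sketch using the names above):

// Case-insensitive comparison without allocating:
assert!("Content-Length".eq_ignore_ascii_case("content-length"));

// Per-scalar operations on u8 and char:
assert_eq!(b'a'.to_ascii_upper(), b'A');
assert!('7'.is_ascii());

// Owned, in-place variant:
assert_eq!("HeLLo".to_string().into_ascii_lower(), "hello".to_string());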

The std::ascii::escape_default function has little to do with ASCII. I think it’s relevant to b'x' and b"foo" byte literals, which have types u8 and &'static [u8]. I suggest moving it into std::u8.

I (@SimonSapin) can help with the implementation work.

Drawbacks

Code using Ascii (not only for e.g. to_lowercase) would need to install a Cargo package to get it. This is strictly more work than having it in std, but should still be easy.

Alternatives

  • The Ascii type could stay in std::ascii
  • Some variations per Unresolved questions below.

Unresolved questions

  • What to do with std::ascii::escape_default?
  • Rename the AsciiExt and OwnedAsciiExt traits?
  • Should they be in the prelude? The Ascii type and the related traits currently are.
  • Are associated types stable enough yet? If not, AsciiExt should temporarily keep its type parameter.
  • Which of all the Ascii::is_* methods should AsciiExt include? Those included should have ascii added in their name.
    • Maybe is_lowercase, is_uppercase, is_alphabetic, or is_alphanumeric could be useful, but I’d be fine with dropping them and reconsidering if someone asks for them. The same result can be achieved with .is_ascii() && followed by the corresponding UnicodeChar method, which in most cases has an ASCII fast path. And in some cases it’s an easy range check like 'a' <= c && c <= 'z'.
    • is_digit and is_hex are identical to Char::is_digit(10) and Char::is_digit(16).
    • is_blank, is_control, is_graph, is_print, and is_punctuation are never used in the Rust distribution or Servo.
  • Start Date: 2014-11-29
  • RFC PR: 490
  • Rust Issue: 19607

Summary

Change the syntax for dynamically sized type parameters from Sized? T to T: ?Sized, and change the syntax for traits for dynamically sized types to trait Foo for ?Sized. Extend this new syntax to work with where clauses.

Motivation

History of the DST syntax

When dynamically sized types were first designed, and even when they were first being implemented, the syntax for dynamically sized type parameters had not been fully settled on. Initially, dynamically sized type parameters were denoted by a leading unsized keyword:

fn foo<unsized T>(x: &T) { ... }
struct Foo<unsized T> { field: T }
// etc.

This is the syntax used in Niko Matsakis’s initial design for DST. This syntax makes sense to those who are familiar with DST, but has some issues which could be perceived as problems for those learning to work with dynamically sized types:

  • It implies that the parameter must be unsized, when really unsizedness is only optional;
  • It does not visually relate to the Sized trait, which is fundamentally related to declaring a type as unsized (removing the default Sized bound).

Later, Felix S. Klock II came up with an alternative syntax using the type keyword:

fn foo<type T>(x: &T) { ... }
struct Foo<type T> { field: T }
// etc.

The inspiration behind this is that the union of all sized types and all unsized types is simply all types. Thus, it makes sense for the most general type parameter to be written as type T.

This syntax resolves the first problem listed above (i.e., it no longer implies that the type must be unsized), but does not resolve the second. Additionally, it is possible that some people could be confused by the use of the type keyword, as it contains little meaning—one would assume a bare T as a type parameter to be a type already, so what does adding a type keyword mean?

Perhaps because of these concerns, the syntax for dynamically sized type parameters has since been changed one more time, this time to use the Sized trait’s name followed by a question mark:

fn foo<Sized? T>(x: &T) { ... }
struct Foo<Sized? T> { field: T }
// etc.

This syntax simply removes the implicit Sized bound on every type parameter using the ? symbol. It resolves the problem about not mentioning Sized that the first two syntaxes didn’t. It also hints towards being related to sizedness, resolving the problem that plagued type. It also successfully states that unsizedness is only optional—that the parameter may be sized or unsized. This syntax has stuck, and is the syntax used today. Additionally, it could potentially be extended to other traits: for example, a new pointer type that cannot be dropped, &uninit, could be added, requiring that it be written to before being dropped. However, many generic functions assume that any parameter passed to them can be dropped. Drop could be made a default bound to resolve this, and Drop? would remove this bound from a type parameter.

The problem with Sized? T

There is some inconsistency present with the Sized syntax. After going through multiple syntaxes for DST, all of which were keywords preceding type parameters, the Sized? annotation stayed before the type parameter’s name when it was adopted as the syntax for dynamically sized type parameters. This can be considered inconsistent in some ways—Sized? looks like a bound, contains a trait name like a bound does, and changes what types can unify with the type parameter like a bound does, but does not come after the type parameter’s name like a bound does. This also is inconsistent with Rust’s general pattern of not using C-style variable declarations (int x) but instead using a colon and placing the type after the name (x: int). (A type parameter is not strictly a variable declaration, but is similar: it declares a new name in a scope.) These problems together make Sized? the only marker that comes before type parameter or even variable names, and with the addition of negative bounds, it looks even more inconsistent:

// Normal bound
fn foo<T: Foo>() {}
// Negative bound
fn foo<T: !Foo>() {}
// Generalising ‘anti-bound’
fn foo<Foo? T>() {}

The syntax also looks rather strange when recent features like associated types and where clauses are considered:

// This `where` clause syntax doesn’t work today, but perhaps should:
trait Foo<T> where Sized? T {
    type Sized? Bar;
}

Furthermore, the ? on Sized? comes after the trait name, whereas most unary-operator-like symbols in the Rust language come before what they are attached to.

This RFC proposes to change the syntax for dynamically sized type parameters to T: ?Sized to resolve these issues.

Detailed design

Change the syntax for dynamically sized type parameters to T: ?Sized:

fn foo<T: ?Sized>(x: &T) { ... }
struct Foo<T: Send + ?Sized + Sync> { field: Box<T> }
trait Bar { type Baz: ?Sized; }
// etc.

Change the syntax for traits for dynamically-sized types to have a prefix ? instead of a postfix one:

trait Foo for ?Sized { ... }

Allow using this syntax in where clauses:

fn foo<T>(x: &T) where T: ?Sized { ... }

Drawbacks

  • The current syntax uses position to distinguish between removing and adding bounds, while the proposed syntax only uses a symbol. Since ?Sized is actually an anti-bound (it removes a bound), it (in some ways) makes sense to put it on the opposite side of a type parameter to show this.

  • Only a single character separates adding a Sized bound and removing an implicit one. This shouldn’t be a problem in general, as adding a Sized bound to a type parameter is pointless (because it is implicitly there already). A lint could be added to check for explicit default bounds if this turns out to be a problem.

Alternatives

  • Choose one of the previous syntaxes or a new syntax altogether. The drawbacks of the previous syntaxes are discussed in the ‘History of the DST syntax’ section of this RFC.

  • Change the syntax to T: Sized? instead. This is less consistent with things like negative bounds (which would probably be something like T: !Foo), and uses a suffix operator, which is less consistent with other parts of Rust’s syntax. It is, however, closer to the current syntax (Sized? T), and looks more natural because of how ? is used in natural languages such as English.

Unresolved questions

None.

Summary

  • Remove the std::c_vec module
  • Move std::c_str under a new std::ffi module, not exporting the c_str module.
  • Focus CString on Rust-owned bytes, providing a static assertion that a pile of bytes has no interior nuls but has a trailing nul.
  • Provide convenience functions for translating C-owned types into slices in Rust.

Motivation

The primary motivation for this RFC is to work out the stabilization of the c_str and c_vec modules. Both of these modules exist for interoperating with C types to ensure that values can cross the boundary of Rust and C relatively safely. These types also need to be designed with ergonomics in mind to ensure that it’s tough to get them wrong and easy to get them right.

The current CString and CVec types are quite old and long due for scrutiny, and these types are currently serving a number of competing concerns:

  1. A CString can both take ownership of a pointer as well as inspect a pointer.
  2. A CString is always allocated/deallocated on the libc heap.
  3. A CVec looks like a slice but does not quite act like one.
  4. A CString looks like a byte slice but does not quite act like one.
  5. There are a number of pieces of duplicated functionality throughout the standard library when dealing with raw C types. There are a number of conversion functions on the Vec and String types as well as the str and slice modules.

In general, all of this functionality needs to be reconciled with one another to provide a consistent and coherent interface when operating with types originating from C.

Detailed design

In refactoring, all usage could be categorized into one of three categories:

  1. A Rust type wants to be passed into C.
  2. A C type was handed to Rust, but Rust does not own it.
  3. A C type was handed to Rust, and Rust owns it.

The current CString attempts to handle all three of these concerns all at once, somewhat conflating desires. Additionally, CVec provides a fairly different interface than CString while providing similar functionality.

A new std::ffi

Note: an old implementation of the design below can be found in a branch of mine

The entire c_str module will be deleted as-is today and replaced with the following interface at the new location std::ffi:

#[deriving(Clone, PartialEq, PartialOrd, Eq, Ord, Hash)]
pub struct CString { /* ... */ }

impl CString {
    pub fn from_slice(s: &[u8]) -> CString { /* ... */ }
    pub fn from_vec(s: Vec<u8>) -> CString { /* ... */ }
    pub unsafe fn from_vec_unchecked(s: Vec<u8>) -> CString { /* ... */ }

    pub fn as_slice(&self) -> &[libc::c_char] { /* ... */ }
    pub fn as_slice_with_nul(&self) -> &[libc::c_char] { /* ... */ }
    pub fn as_bytes(&self) -> &[u8] { /* ... */ }
    pub fn as_bytes_with_nul(&self) -> &[u8] { /* ... */ }
}

impl Deref<[libc::c_char]> for CString { /* ... */ }
impl Show for CString { /* ... */ }

pub unsafe fn c_str_to_bytes<'a>(raw: &'a *const libc::c_char) -> &'a [u8] { /* ... */ }
pub unsafe fn c_str_to_bytes_with_nul<'a>(raw: &'a *const libc::c_char) -> &'a [u8] { /* ... */ }

The new CString API is focused solely on providing a static assertion that a byte slice contains no interior nul bytes and has a terminating nul byte. A CString is usable as a slice of libc::c_char, similar to how a Vec is usable as a slice, but a CString can also be viewed as a byte slice with a concrete u8 type. The default of libc::c_char was chosen to ensure that .as_ptr() returns a pointer of the right type. Note that CString does not provide a DerefMut implementation, to maintain the static guarantee that there are no interior nul bytes.

Constructing a CString

One of the major departures from today’s API is how a CString is constructed. Today this can be done through the CString::new function or the ToCStr trait. These two construction vectors serve two very different purposes, one for C-originating data and one for Rust-originating data. This redesign of CString is solely focused on going from Rust to C (case 1 above) and only supports constructors in this flavor.

The first constructor, from_slice, is intended to allow CString to implement an on-the-stack buffer optimization in the future without having to resort to a Vec with its allocation. This is similar to the optimization performed by with_c_str today. Of the other two constructors, from_vec will consume a vector, assert there are no 0 bytes, and then push a 0 byte on the end. The from_vec_unchecked constructor will not perform the verification, but will still push a zero. Note that both of these constructors expose the fact that a CString is not necessarily valid UTF-8.

The ToCStr trait is removed entirely (including from the prelude) in favor of these construction functions. This could possibly be re-added in the future, but for now it will be removed from the module.
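For illustration, passing a Rust byte string to a C function with the proposed type would look roughly like this sketch (puts stands in for any C function expecting a nul-terminated string):

extern crate libc;

use std::ffi::CString;

extern {
    fn puts(s: *const libc::c_char) -> libc::c_int;
}

fn main() {
    let msg = CString::from_slice(b"hello from Rust");
    // `as_ptr` comes via the Deref to [libc::c_char]; the underlying
    // buffer is guaranteed nul-terminated with no interior nuls.
    unsafe { puts(msg.as_ptr()); }
}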

Working with *const libc::c_char

Instead of using CString to look at a *const libc::c_char, the module now provides two conversion functions to go from a C string to a byte slice. The signatures of these functions are similar to the new std::slice::from_raw_buf function and use the lifetime of the pointer itself as an anchor for the lifetime of the returned slice.

These two functions solve the use case (2) above where a C string just needs to be inspected. Because a C string is fundamentally just a pile of bytes, it’s interpreted in Rust as a u8 slice. With these two functions, all of the following functions will also be deprecated:

  • std::str::from_c_str - this function should be replaced with ffi::c_str_to_bytes plus one of str::from_utf8 or str::from_utf8_unchecked.
  • String::from_raw_buf - similarly to from_c_str, each step should be composed individually to perform the required checks. This would involve using ffi::c_str_to_bytes, str::from_utf8, and .to_string().
  • String::from_raw_buf_len - this should be replaced in the same way as String::from_raw_buf, except that slice::from_raw_buf is used instead of ffi.
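Composing these replacements, reading a borrowed C string into an owned Rust String would look roughly like this sketch (the function name is hypothetical):

use std::ffi;
use std::str;

unsafe fn c_string_to_rust(ptr: *const libc::c_char) -> Option<String> {
    // Borrow the C data as a byte slice, with the lifetime anchored to `ptr`...
    let bytes = ffi::c_str_to_bytes(&ptr);
    // ...then validate it as UTF-8 and copy it into an owned String.
    str::from_utf8(bytes).map(|s| s.to_string())
}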

Removing c_vec

The new ffi module serves as a solution to desires (1) and (2) above, but the third use case is left unsolved so far. This is what the current c_vec module is attempting to solve, but it does so in a somewhat ad-hoc fashion. The constructor for the type takes a proc destructor to invoke when the vector is dropped to allow for custom destruction. To make matters a little more interesting, the CVec type provides a default constructor which invokes libc::free on the pointer.

Transferring ownership of pointers without a custom deallocation function is in general quite a dangerous operation for libraries to perform. Not all platforms support the ability to malloc in one library and free in the other, and this is also generally considered an antipattern.

Creating a custom wrapper struct with a simple Deref and Drop implementation as necessary is likely to be sufficient for this use case, so this RFC proposes removing the entire c_vec module with no replacement. It is expected that a utility crate for interoperating with raw pointers in this fashion may manifest itself on crates.io, and inclusion into the standard library can be considered at that time.
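Such a wrapper might look roughly like this (a sketch; CBuffer is a hypothetical type owning a malloc-allocated buffer):

extern crate libc;

struct CBuffer {
    ptr: *const u8,
    len: uint,
}

impl Deref<[u8]> for CBuffer {
    fn deref<'a>(&'a self) -> &'a [u8] {
        // The returned slice borrows from `self`, so it cannot outlive
        // the buffer.
        unsafe { std::slice::from_raw_buf(&self.ptr, self.len) }
    }
}

impl Drop for CBuffer {
    fn drop(&mut self) {
        // Freed with the same allocator that produced `ptr`.
        unsafe { libc::free(self.ptr as *mut libc::c_void) }
    }
}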

Working with C Strings

The design above has been implemented in a branch of mine where the fallout can be seen. The primary impact of this change is that the to_c_str and with_c_str methods are no longer in the prelude by default, and CString::from_* must be called in order to create a C string.

Drawbacks

  • Whenever Rust works with a C string, it’s tough to avoid the cost associated with the initial length calculation. All types provided here involve calculating the length of a C string up front, and no type is provided to operate on a C string without calculating its length.

  • With the removal of the ToCStr trait, unnecessary allocations may be made when converting to a CString. For example, a Vec<u8> could be converted by directly calling CString::from_vec, but it may be more frequently converted via CString::from_slice, resulting in an unnecessary allocation. Note, however, that one would have to remember to call into_c_str on the ToCStr trait, so it doesn’t necessarily help too much.

  • The ergonomics of operating on C strings have been somewhat reduced as part of this design. The CString::from_slice method is somewhat long to call (compared to to_c_str today), and convenience methods for going straight from a *const libc::c_char were deprecated in favor of only supporting a conversion to a slice.

Alternatives

  • There is an alternative RFC which discusses pursuit of today’s general design of the c_str module as well as a refinement of its current types.

  • The from_vec_unchecked function could do precisely 0 work instead of always pushing a 0 at the end.

Unresolved questions

  • On some platforms, libc::c_char is not necessarily just one byte, which these types rely on. It’s unclear how much this should affect the design of this module as to how important these platforms are.

  • Are the *_with_nul functions necessary on CString?

Summary

Change array/slice patterns in the following ways:

  • Make them only match on arrays ([T; n] and [T]), not slices;
  • Make subslice matching yield a value of type [T; n] or [T], not &[T] or &mut [T];
  • Allow multiple mutable references to be made to different parts of the same array or slice in array patterns (resolving rust-lang/rust issue #8636).

Motivation

Before DST (and after the removal of ~[T]), there were only two types based on [T]: &[T] and &mut [T]. With DST, we can have many more types based on [T], Box<[T]> in particular, but theoretically any pointer type around a [T] could be used. However, array patterns still match on &[T], &mut [T], and [T; n] only, meaning that to match on a Box<[T]>, one must first convert it to a slice, which disallows moves. This may prove to significantly limit the amount of useful code that can be written using array patterns.

Another problem with today’s array patterns is in subslice matching, which specifies that the rest of a slice not matched on already in the pattern should be put into a variable:

let foo = [1i, 2, 3];
match foo {
    [head, tail..] => {
        assert_eq!(head, 1);
        assert_eq!(tail, &[2, 3]);
    },
    _ => {},
}

This makes sense, but still has a few problems. In particular, tail is a &[int], even though the compiler can always assert that it will have a length of 2, so there is no way to treat it like a fixed-length array. Also, all other bindings in array patterns are by-value, whereas bindings using subslice matching are by-reference (even though they don’t use ref). This can create confusing errors because the .. syntax is the only way of taking a reference to something within a pattern without using the ref keyword.

Finally, the compiler currently complains when one tries to take multiple mutable references to different values within the same array in a slice pattern:

let foo: &mut [int] = &mut [1, 2, 3];
match foo {
    [ref mut a, ref mut b] => ...,
    ...
}

This fails to compile, because the compiler thinks that this would allow multiple mutable borrows to the same value (which is not the case).

Detailed design

  • Make array patterns match only on arrays ([T; n] and [T]). For example, the following code:

    let foo: &[u8] = &[1, 2, 3];
    match foo {
        [a, b, c] => ...,
        ...
    }

    Would have to be changed to this:

    let foo: &[u8] = &[1, 2, 3];
    match foo {
        &[a, b, c] => ...,
        ...
    }

    This change makes slice patterns mirror slice expressions much more closely.

  • Make subslice matching in array patterns yield a value of type [T; n] (if the array is of fixed size) or [T] (if not). This means changing most code that looks like this:

    let foo: &[u8] = &[1, 2, 3];
    match foo {
        [a, b, c..] => ...,
        ...
    }

    To this:

    let foo: &[u8] = &[1, 2, 3];
    match foo {
        &[a, b, ref c..] => ...,
        ...
    }

    It should be noted that if a fixed-size array is matched on using subslice matching, and ref is used, the type of the binding will be &[T; n], not &[T] (see the sketch after this list).

  • Improve the compiler’s analysis of multiple mutable references to the same value within array patterns. This would be done by allowing multiple mutable references to different elements of the same array (including bindings from subslice matching):

    let foo: &mut [u8] = &mut [1, 2, 3, 4];
    match foo {
        &[ref mut a, ref mut b, ref c, ref mut d..] => ...,
        ...
    }
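
A sketch of the subslice-matching note above, written with this proposal’s `..` syntax (a fixed-size array is matched, so no leading & is needed):

let arr: [u8; 3] = [1, 2, 3];
match arr {
    [first, ref rest..] => {
        // under this proposal `rest` has type `&[u8; 2]`, not `&[u8]`
        assert_eq!(first, 1);
        assert_eq!(rest.len(), 2);
    }
}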

Drawbacks

  • This will break a non-negligible amount of code, requiring people to add &s and refs to their code.

  • The modifications to subslice matching will require ref or ref mut to be used in almost all cases. This could be seen as unnecessary.

Alternatives

  • Do a subset of this proposal; for example, the modifications to subslice matching in patterns could be removed.

Unresolved questions

  • What are the precise implications to the borrow checker of the change to multiple mutable borrows in the same array pattern? Since it is a backwards-compatible change, it can be implemented after 1.0 if it turns out to be difficult to implement.

Summary

Make name and behavior of the #![no_std] and #![no_implicit_prelude] attributes consistent by renaming the latter to #![no_prelude] and having it only apply to the current module.

Motivation

Currently, Rust automatically inserts an implicit extern crate std; in the crate root that can be disabled with the #[no_std] attribute.

It also automatically inserts an implicit use std::prelude::*; in every module that can be disabled with the #[no_implicit_prelude] attribute.

Lastly, if #[no_std] is used, all modules automatically don’t import the prelude, so the #[no_implicit_prelude] attribute is unneeded in those cases.

However, the latter attribute is inconsistent with the former in two regards:

  • Naming-wise, it redundantly contains the word “implicit”.
  • Semantics-wise, it applies to the current module and all submodules.

That last point is surprising because, normally, whether or not a module contains a certain import does not affect whether or not a submodule contains it, so you’d expect an attribute that disables an implicit import to apply only to that module as well.

This behavior also gets in the way in some of the already rare cases where you want to disable the prelude while still linking to std.

As an example, the author had been made aware of this behavior of #[no_implicit_prelude] while attempting to prototype a variation of the Iterator traits, leading to code that looks like this:

mod my_iter {
    #![no_implicit_prelude]

    trait Iterator<T> { /* ... */ }

    mod adapters {
        /* Tries to access the existing prelude, and fails to resolve */
    }
}

While such use cases might be resolved by just requiring an explicit use std::prelude::*; in the submodules, it seems like just making the attribute behave as expected is the better outcome.

Of course, for the cases where you want the prelude disabled for a whole subtree of modules, it would now become necessary to add a #[no_prelude] attribute in each of them - but that is consistent with imports in general.
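
A hedged sketch of the proposed behavior:

mod my_iter {
    #![no_prelude]   // under this proposal: applies to `my_iter` only

    mod adapters {
        // the prelude is still injected here; add #![no_prelude] in this
        // module too if the whole subtree should opt out
    }
}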

Detailed design

libsyntax needs to be changed to accept both the name no_implicit_prelude and no_prelude for the attribute. Then the attribute’s effect on the AST needs to be changed so that it no longer removes imports deeply, and all fallout of this change needs to be fixed in order for the new semantics to bootstrap.

Then a snapshot needs to be made, and all uses of #[no_implicit_prelude] can be changed to #[no_prelude] in both the main code base, and user code.

Finally, the old attribute name should emit a deprecation warning, and be removed in time.

Drawbacks

  • The attribute is a rare use case to begin with, so any effort put into this would distract from more important stabilization work.

Alternatives

  • Keep the current behavior
  • Remove the #[no_implicit_prelude] attribute all together, instead forcing users to use #[no_std] in combination with extern crate std; and use std::prelude::*.
  • Generalize preludes more to allow custom ones, which might supersede the attributes from this RFC.

Summary

Stabilize the std::prelude module by removing some of its less commonly used functionality.

Motivation

The prelude of the standard library is included into all Rust programs by default, and is consequently quite an important module to consider when stabilizing the standard library. Some of the primary tasks of the prelude are:

  • The prelude is used to represent imports that would otherwise occur in nearly all Rust modules. The threshold for entering the prelude is consequently quite high, as the prelude as-is is unlikely to be able to change in a backwards-compatible fashion.
  • Primitive types such as str and char are unable to have inherent methods attached to them. In order to provide methods extension traits must be used. All of these traits are members of the prelude in order to enable methods on language-defined types.

This RFC currently focuses on removing functionality from the prelude rather than adding it. New additions can continue to happen before 1.0 and will be evaluated on a case-by-case basis. The rationale for removal or inclusion will be provided below.

Detailed Design

The current std::prelude module was copied into the document of this RFC, and each reexport should be listed below and categorized. The rationale for inclusion of each type is included inline.

Reexports to retain

This section provides the exact prelude that this RFC proposes:

// Boxes are a ubiquitous type in Rust used for representing an allocation with
// a known fixed address. It is also one of the canonical examples of an owned
// type, appearing in many examples and tests. Due to its common usage, the Box
// type is present.
pub use boxed::Box;

// These two traits are present to provide methods on the `char` primitive type.
// The two traits will, however, eventually be collapsed into one `CharExt`
// trait in the `std::char` module instead of being reexported as two traits.
pub use char::{Char, UnicodeChar};

// One of the most common operations when working with references in Rust is the
// `clone()` method to promote the reference to an owned value. As one of the
// core concepts in Rust used by virtually all programs, this trait is included
// in the prelude.
pub use clone::Clone;

// It is expected that these traits will be used in generic bounds much more
// frequently than there will be manual implementations. This common usage in
// bounds to provide the fundamental ability to compare two values is the reason
// for the inclusion of these traits in the prelude.
pub use cmp::{PartialEq, PartialOrd, Eq, Ord};

// Iterators are one of the most core primitives in the standard library,
// used to interoperate between any sort of sequence of data. Due to the
// widespread use, these traits and extension traits are all present in the
// prelude.
//
// The `Iterator*Ext` traits can be removed if generalized where clauses for
// methods are implemented, and they are currently included to represent the
// functionality provided today. The various traits other than `Iterator`, such
// as `DoubleEndedIterator` and `ExactSizeIterator` are provided in order to
// ensure that the methods are available like the `Iterator` methods.
pub use iter::{DoubleEndedIteratorExt, CloneIteratorExt};
pub use iter::{Extend, ExactSizeIterator};
pub use iter::{Iterator, IteratorExt, DoubleEndedIterator};
pub use iter::{IteratorCloneExt};
pub use iter::{IteratorOrdExt};

// As core language concepts and frequently used bounds on generics, these kinds
// are all included in the prelude by default. Note, however, that the exact
// set of kinds in the prelude will be determined by the stabilization of this
// module.
pub use kinds::{Copy, Send, Sized, Sync};

// One of Rust's fundamental principles is ownership, and understanding movement
// of types is key to this. The drop function, while a convenience, represents
// the concept of ownership and relinquishing ownership, so it is included.
pub use mem::drop;

// As described below, very few `ops` traits will continue to remain in the
// prelude. `Drop`, however, stands out from the other operations for many of
// the same reasons as the `drop` function.
pub use ops::Drop;

// Similarly to the `cmp` traits, these traits are expected to be bounds on
// generics quite commonly to represent a pending computation that can be
// executed.
pub use ops::{Fn, FnMut, FnOnce};

// The `Option` type is one of Rust's most common and ubiquitous types,
// justifying its inclusion into the prelude along with its two variants.
pub use option::Option::{mod, Some, None};

// In order to provide methods on raw pointers, these two traits are included
// into the prelude. It is expected that these traits will be renamed to
// `PtrExt` and `MutPtrExt`.
pub use ptr::{RawPtr, RawMutPtr};

// This type is included for the same reasons as the `Option` type.
pub use result::Result::{mod, Ok, Err};

// The slice family of traits are all provided in order to export methods on the
// language slice type. The `SlicePrelude` and `SliceAllocPrelude` will be
// collapsed into one `SliceExt` trait by the `std::slice` module. Many of the
// remaining traits require generalized where clauses on methods to be merged
// into the `SliceExt` trait, which may not happen for 1.0.
pub use slice::{SlicePrelude, SliceAllocPrelude, CloneSlicePrelude};
pub use slice::{CloneSliceAllocPrelude, OrdSliceAllocPrelude};
pub use slice::{PartialEqSlicePrelude, OrdSlicePrelude};

// These traits, like the above traits, are providing inherent methods on
// slices, but are not candidates for merging into `SliceExt`. Nevertheless
// these common operations are included for the purpose of adding methods on
// language-defined types.
pub use slice::{BoxedSlicePrelude, AsSlice, VectorVector};

// The str family of traits provide inherent methods on the `str` type. The
// `StrPrelude`, `StrAllocating`, and `UnicodeStrPrelude` traits will all be
// collapsed into one `StrExt` trait to be reexported in the prelude. The `Str`
// trait itself will be handled in the stabilization of the `str` module, but
// for now is included for consistency. Similarly, the `StrVector` trait is
// still undergoing stabilization but remains for consistency.
pub use str::{Str, StrPrelude};
pub use str::{StrAllocating, UnicodeStrPrelude};
pub use str::{StrVector};

// As the standard library's default owned string type, `String` is provided in
// the prelude. Many of the same reasons for `Box`'s inclusion apply to `String`
// as well.
pub use string::String;

// Converting types to a `String` is seen as a common-enough operation for
// including this trait in the prelude.
pub use string::ToString;

// Included for the same reasons as `String` and `Box`.
pub use vec::Vec;

Reexports to remove

All of the following reexports are currently present in the prelude and are proposed for removal by this RFC.

// While currently present in the prelude, these traits do not need to be in
// scope to use the language syntax associated with each trait. These traits are
// also only rarely used in bounds on generics and are consequently
// predominately used for `impl` blocks. Due to this lack of need to be included
// into all modules in Rust, these traits are all removed from the prelude.
pub use ops::{Add, Sub, Mul, Div, Rem, Neg, Not};
pub use ops::{BitAnd, BitOr, BitXor};
pub use ops::{Deref, DerefMut};
pub use ops::{Shl, Shr};
pub use ops::{Index, IndexMut};
pub use ops::{Slice, SliceMut};

// Now that tuple indexing is a feature of the language, these traits are no
// longer necessary and can be deprecated.
pub use tuple::{Tuple1, Tuple2, Tuple3, Tuple4};
pub use tuple::{Tuple5, Tuple6, Tuple7, Tuple8};
pub use tuple::{Tuple9, Tuple10, Tuple11, Tuple12};

// Interoperating with ascii data is not necessarily a core language operation
// and the ascii module itself is currently undergoing stabilization. The design
// will likely end up with only one trait (as opposed to the many listed here).
// The prelude will be responsible for providing unicode-respecting methods on
// primitives while requiring that ascii-specific manipulation is imported
// manually.
pub use ascii::{Ascii, AsciiCast, OwnedAsciiCast, AsciiStr};
pub use ascii::IntoBytes;

// Inclusion of this trait is mostly a relic of old behavior and there is very
// little need for the `into_cow` method to be ubiquitously available. Although
// mostly used in bounds on generics, this trait is not itself as commonly used
// as `FnMut`, for example.
pub use borrow::IntoCow;

// The `c_str` module is currently undergoing stabilization as well, but it's
// unlikely for `to_c_str` to be a common operation in almost all Rust code in
// existence, so this trait, if it survives stabilization, is removed from the
// prelude.
pub use c_str::ToCStr;

// This trait is `#[experimental]` in the `std::cmp` module and the prelude is
// intended to be a stable subset of Rust. If later marked #[stable] the trait
// may re-enter the prelude but it will be removed until that time.
pub use cmp::Equiv;

// Actual usage of the `Ordering` enumeration and its variants is quite rare in
// Rust code. Implementors of the `Ord` and `PartialOrd` traits will likely be
// required to import these names, but it is not expected that Rust code at
// large will require these names to be in the prelude.
pub use cmp::Ordering::{mod, Less, Equal, Greater};

// With language-defined `..` syntax there is no longer a need for the `range`
// function to remain in the prelude. This RFC does, however, recommend leaving
// this function in the prelude until the `..` syntax is implemented in order to
// provide a smoother deprecation strategy.
pub use iter::range;

// The FromIterator trait does not need to be present in the prelude as it is
// not adding methods to iterators and is mostly only required to be imported by
// implementors, which is not common enough for inclusion.
pub use iter::{FromIterator};

// Like `cmp::Equiv`, these two iterators are `#[experimental]` and are
// consequently removed from the prelude.
pub use iter::{RandomAccessIterator, MutableDoubleEndedIterator};

// I/O stabilization will have its own RFC soon, and part of that RFC involves
// creating a `std::io::prelude` module which will become the home for these
// traits. This RFC proposes leaving these in the current prelude, however,
// until the I/O stabilization is complete.
pub use io::{Buffer, Writer, Reader, Seek, BufferPrelude};

// These two traits are relics of an older `std::num` module which need not be
// included in the prelude any longer. Their methods are not called often, nor
// are they taken as bounds frequently enough to justify inclusion into the
// prelude.
pub use num::{ToPrimitive, FromPrimitive};

// As part of the Path stabilization RFC, these traits and structures will be
// removed from the prelude. Note that the ergonomics of opening a File today
// will decrease in the sense that `Path` must be imported, but eventually
// importing `Path` will not be necessary due to the `AsPath` trait. More
// details can be found in the path stabilization RFC.
pub use path::{GenericPath, Path, PosixPath, WindowsPath};

// This function is included in the prelude as a convenience function for the
// `FromStr::from_str` associated function. Inclusion of this method, however,
// is inconsistent with respect to the lack of inclusion of a `default` method,
// for example. Nor is `from_str` necessarily seen as common enough to
// justify its inclusion.
pub use str::from_str;

// This trait is currently only implemented for `Vec<Ascii>` which is likely to
// be removed as part of `std::ascii` stabilization, obsoleting the need for the
// trait and its inclusion in the prelude.
pub use string::IntoString;

// The focus of Rust's story about concurrent programming has been constantly
// shifting since its inception, and the prelude doesn't necessarily always
// keep up. Message passing is only one form of concurrent primitive that Rust
// provides, and inclusion in the prelude can provide the wrong impression that
// it is the *only* concurrent primitive that Rust offers. In order to
// facilitate a more unified front in Rust's concurrency story, these primitives
// will be removed from the prelude (and soon moved to std::sync as well).
//
// Additionally, while spawning a new thread is a common operation in concurrent
// programming, it is not a frequent operation in code in general. For example
// even highly concurrent applications may end up only calling `spawn` in one or
// two locations which does not necessarily justify its inclusion in the prelude
// for all Rust code in existence.
pub use comm::{sync_channel, channel};
pub use comm::{SyncSender, Sender, Receiver};
pub use task::spawn;

Move to an inner v1 module

This RFC also proposes moving all reexports to an inner std::prelude::v1 module instead of leaving them directly inside std::prelude. The compiler will then start injecting use std::prelude::v1::*;.

This is a pre-emptive move to help provide room to grow the prelude module over time. It is unlikely that any reexports could ever be added to the prelude backwards-compatibly, so newer preludes (which may happen over time) will have to live in new modules. If the standard library grows multiple preludes over time, then it is expected for crates to be able to specify which prelude they would like to be compiled with. This feature is left as an open question, however, and movement to an inner v1 module is simply preparation for this possible move happening in the future.

The versioning scheme for the prelude over time (if it happens) is also left as an open question by this RFC.

Drawbacks

A fairly large amount of functionality was removed from the prelude in order to home in on the driving goals of the prelude, but this unfortunately means that many imports must be added throughout code currently using these reexports. It is expected, however, that the most painful removals will have roughly equal ergonomic replacements in the future. For example:

  • Removal of Path and friends will retain the current level of ergonomics with no imports via the AsPath trait.
  • Removal of iter::range will be replaced via the more ergonomic .. syntax.

Many other cases which may initially be seen as painful to migrate are intended to become aligned with other Rust conventions and practices today. For example, getting into the habit of importing implemented traits (such as the ops traits) is consistent with how many implementations will work. Similarly, the removal of synchronization primitives allows for consistency in the usage of all concurrent primitives that Rust provides.

Alternatives

A number of alternatives were discussed above, and this section can otherwise largely be filled with various permutations of moving reexports between the “keep” and “remove” sections above.

Unresolved Questions

This RFC is fairly aggressive about removing functionality from the prelude, but it is unclear how necessary this is. If Rust grows the ability to modify the prelude backwards-compatibly in some fashion (for example, by introducing multiple preludes that can be opted into), then the aggressive removal may not be necessary.

If user-defined preludes are allowed in some form, it is also unclear how this would impact the inclusion of reexports in the standard library’s prelude.

Summary

Today’s Show trait will be tasked with providing the ability to inspect the representation of implementors of the trait. A new trait, String, will be introduced to the std::fmt module in order to represent data that can essentially be serialized to a string, typically representing the precise internal state of the implementor.

The String trait will take over the {} format specifier and the Show trait will move to the now-open {:?} specifier.

Motivation

The formatting traits today largely provide clear guidance to what they are intended for. For example the Binary trait is intended for printing the binary representation of a data type. The ubiquitous Show trait, however, is not quite well defined in its purpose. It is currently used for a number of use cases which are typically at odds with one another.

One of the use cases of Show today is to provide a “debugging view” of a type. This provides the easy ability to print some string representation of a type to a stream in order to debug an application. The Show trait, however, is also used for printing user-facing information. This flavor of usage is intended for display to all users as opposed to just developers. Finally, the Show trait is connected to the ToString trait providing the to_string method unconditionally.

From these use cases of Show, a number of pain points have arisen over time:

  1. It’s not clear whether all types should implement Show or not. Types like Path quite intentionally avoid exposing a string representation (because paths are not always valid UTF-8) and hence do not want a to_string method to be defined on them.
  2. It is quite common to use #[deriving(Show)] to easily print a Rust structure. This is not possible, however, when particular members do not implement Show (for example a Path).
  3. Some types, such as a String, desire the ability to “inspect” the representation as well as printing the representation. An inspection mode, for example, would escape characters like newlines.
  4. Common pieces of functionality, such as assert_eq! are tied to the Show trait which is not necessarily implemented for all types.

The purpose of this RFC is to clearly define what the Show trait is intended to be used for, as well as providing guidelines to implementors of what implementations should do.

Detailed Design

As described in the motivation section, the intended use cases for the current Show trait are actually motivations for two separate formatting traits. One trait will be intended for all Rust types to implement in order to easily allow debugging values for macros such as assert_eq! or general println! statements. A separate trait will be intended for Rust types which are faithfully represented as a string. These types can be represented as a string in a non-lossy fashion and are intended for general consumption by more than just developers.

This RFC proposes naming these two traits Show and String, respectively.

The String trait

A new formatting trait will be added to std::fmt as follows:

pub trait String for Sized? {
    fn fmt(&self, f: &mut Formatter) -> Result;
}

This trait is identical to all other formatting traits except for its name. The String trait will be used with the {} format specifier, typically considered the default specifier for Rust.

An implementation of the String trait is an assertion that the type can be faithfully represented as a UTF-8 string at all times. If the type can be reconstructed from a string, then it is recommended, but not required, that the following relation be true:

assert_eq!(foo, from_str(format!("{}", foo).as_slice()).unwrap());

If the type cannot necessarily be reconstructed from a string, then the output may be less descriptive than the type can provide, but it is guaranteed to be human readable for all users.
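
For illustration, a hedged sketch of implementing the proposed trait for a hypothetical type, using the trait definition above (today’s equivalent of this trait is fmt::Display):

use std::fmt;

struct Celsius(f64);   // hypothetical type, used only for illustration

impl fmt::String for Celsius {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        // a faithful, human-readable rendering of the internal state
        write!(f, "{} degrees Celsius", self.0)
    }
}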

It is not expected that all types implement the String trait. Not all types can satisfy its purpose; for example, the following types will not implement the String trait:

  • Path will abstain as it is not guaranteed to contain valid UTF-8 data.
  • CString will abstain for the same reasons as Path.
  • RefCell will abstain as its contents may not be accessible at all times, preventing a faithful representation as a String.
  • Weak references will abstain for the same reasons as RefCell.

Almost all types that implement Show in the standard library today, however, will implement the String trait. For example all primitive integer types, vectors, slices, strings, and containers will all implement the String trait. The output format will not change from what it is today (no extra escaping or debugging will occur).

The compiler will not provide an implementation of #[deriving(String)] for types.

The Show trait

The current Show trait will not change location nor definition, but it will instead move to the {:?} specifier instead of the {} specifier (which String now uses).

An implementation of the Show trait is expected for all types in Rust and provides very few guarantees about the output. Output will typically represent the internal state as faithfully as possible, but it is not expected that this will always be true. The output of Show should never be used to reconstruct the object itself as it is not guaranteed to be possible to do so.

The purpose of the Show trait is to facilitate debugging Rust code which implies that it needs to be maximally useful by extending to all Rust types. All types in the standard library which do not currently implement Show will gain an implementation of the Show trait including Path, RefCell, and Weak references.

Many implementations of Show in the standard library will differ from what they currently are today. For example, str’s implementation will escape all characters such as newlines and tabs in its output. Primitive integers will print the suffix of the type after the literal in all cases. Characters will also be printed with surrounding single quotes while escaping values such as newlines. The purpose of these implementations is to provide debugging views into these types.
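
A hedged sketch of the resulting split between the two specifiers (today these roles are filled by Display and Debug):

let s = "two\nlines";
println!("{}", s);    // String trait: the two lines print verbatim
println!("{:?}", s);  // Show trait: prints "two\nlines", newline escaped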

Implementations of the Show trait are expected to never panic! and always produce valid UTF-8 data. The compiler will continue to provide a #[deriving(Show)] implementation to facilitate printing and debugging user-defined structures.

The ToString trait

Today the ToString trait is connected to the Show trait, but this RFC proposes wiring it to the newly-proposed String trait instead. This switch enables users of to_string to rely on the same guarantees provided by String as well as not erroneously providing the to_string method on types that are not intended to have one.

It is strongly discouraged to provide an implementation of the ToString trait and not the String trait.

Drawbacks

It is inherently easier to understand fewer concepts from the standard library and introducing multiple traits for common formatting implementations may lead to frequently mis-remembering which to implement. It is expected, however, that this will become such a common idiom in Rust that it will become second nature.

This RFC establishes a convention that Show and String produce valid UTF-8 data, but no static guarantee of this requirement is provided. Statically guaranteeing this invariant would likely involve adding some form of TextWriter which we are currently not willing to stabilize for the 1.0 release.

The default format specifier, {}, will quickly become unable to print many types in Rust. Without a #[deriving] implementation, manual implementations are predicted to be fairly sparse. This means that the de facto default for inspecting Rust types may become {:?}, providing pressure to re-shuffle the specifiers. Currently it is seen as untenable, however, for the default output format of a String to include escaped characters (as opposed to printing the string). Due to the debugging nature of Show, it is seen as a non-starter to make it the “default” via {}.

It may be too ambitious to define that String is a non-lossy representation of a type, eventually motivating other formatting traits.

Alternatives

The names String and Show may not necessarily imply “user readable” and “debuggable”. An alternative proposal would be to use Show for user readability and Inspect for debugging. This alternative also opens up the door for other names of the debugging trait like Repr. This RFC, however, has chosen String for user readability to provide a clearer connection with the ToString trait as well as to emphasize that the type can be faithfully represented as a String. Additionally, this RFC considers the name Show roughly on par with the other alternatives, and keeping it helps reduce churn for code migrating today.

Unresolved Questions

None at this time.

Note

This RFC has been amended by RFC 1574, which contains a combined version of the conventions.

Summary

This is a conventions RFC, providing guidance on providing API documentation for Rust projects, including the Rust language itself.

Motivation

Documentation is an extremely important part of any project. It’s important that we have consistency in our documentation.

For the most part, the RFC proposes guidelines that are already followed today, but it tries to motivate and clarify them.

Detailed design

There are a number of individual guidelines. Most of these guidelines are for any Rust project, but some are specific to documenting rustc itself and the standard library. These are called out specifically in the text itself.

Use line comments

Avoid block comments. Use line comments instead:

// Wait for the main task to return, and set the process error code
// appropriately.

Instead of:

/*
 * Wait for the main task to return, and set the process error code
 * appropriately.
 */

Only use inner doc comments //! to write crate and module-level documentation, nothing else. When using mod blocks, prefer /// outside of the block:

/// This module contains tests
mod tests {
    // ...
}

over

mod tests {
    //! This module contains tests

    // ...
}

Formatting

The first line in any doc comment should be a single-line short sentence providing a summary of the code. This line is used as a summary description throughout Rustdoc’s output, so it’s a good idea to keep it short.

All doc comments, including the summary line, should be properly punctuated. Prefer full sentences to fragments.

The summary line should be written in third person singular present indicative form. Basically, this means write “Returns” instead of “Return”.

Using Markdown

Within doc comments, use Markdown to format your documentation.

Use top level headings # to indicate sections within your comment. Common headings:

  • Examples
  • Panics
  • Failure

Even if you only include one example, use the plural form: “Examples” rather than “Example”. Future tooling is easier this way.
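
To make these conventions concrete, here is a hedged sketch of a doc comment following the guidelines above (the function itself is hypothetical):

/// Returns `true` if `needle` occurs anywhere in `haystack`.
///
/// # Examples
///
/// ```rust
/// assert!(contains("rustacean", "rust"));
/// ```
fn contains(haystack: &str, needle: &str) -> bool {
    haystack.contains(needle)
}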

Use backticks (`) to denote a code fragment within a sentence.

Use triple backticks (```) to write longer examples, like this:

This code does something cool.

```rust
let x = foo();
x.bar();
```

When appropriate, make use of Rustdoc’s modifiers. Annotate triple-backtick blocks with the appropriate formatting directive. While blocks default to Rust in Rustdoc, prefer being explicit, so that syntax is highlighted in places that do not default to Rust, like GitHub.

```rust
println!("Hello, world!");
```

```ruby
puts "Hello"
```

Rustdoc is able to test all Rust examples embedded inside of documentation, so it’s important to mark what is not Rust so your tests don’t fail.

References and citations should be linked ‘reference style.’ Prefer

[Rust website][1]

[1]: http://www.rust-lang.org

to

[Rust website](http://www.rust-lang.org)

English

This section applies to rustc and the standard library.

All documentation is standardized on American English, with regards to spelling, grammar, and punctuation conventions. Language changes over time, so this doesn’t mean that there is always a correct answer to every grammar question, but there is often some kind of formal consensus.

Drawbacks

None.

Alternatives

Not having documentation guidelines.

Unresolved questions

None.

Summary

This RFC describes changes to the Rust release process, primarily the division of Rust’s time-based releases into ‘release channels’, following the ‘release train’ model used by e.g. Firefox and Chrome; as well as ‘feature staging’, which enables the continued development of unstable language features and library APIs while providing strong stability guarantees in stable releases.

It also redesigns and simplifies stability attributes to better integrate with release channels and the other stability-moderating system in the language, ‘feature gates’. While this version of stability attributes is only suitable for use by the standard distribution, we leave open the possibility of adding a redesigned system for the greater cargo ecosystem to annotate feature stability.

Finally, it discusses how Cargo may leverage feature gates to determine compatibility of Rust crates with specific revisions of the Rust language.

Motivation

We soon intend to provide stable releases of Rust that offer backwards compatibility with previous stable releases. Still, we expect to continue developing new features at a rapid pace for some time to come. We need to be able to provide these features to users for testing as they are developed while also providing strong stability guarantees to users.

Detailed design

The Rust release process moves to a ‘release train’ model, in which there are three ‘release channels’ through which the official Rust binaries are published: ‘nightly’, ‘beta’, and ‘stable’, and these release channels correspond to development branches.

‘Nightly’ is exactly as today, and where most development occurs; a separate ‘beta’ branch provides time for vetting a release and fixing bugs - particularly in backwards compatibility - before it gets wide use. Each release cycle, beta gets promoted to stable (the release), and nightly gets promoted to beta.

The benefits of this model are a few:

  • It provides a window for testing the next release before committing to it. Currently we release straight from the (very active) master branch, with almost no testing.

  • It provides a window in which library developers can test their code against the next release, and - importantly - report unintended breakage of stable features.

  • It provides a testing ground for unstable features in the nightly release channel, while allowing the primary releases to contain only features which are complete and backwards-compatible (‘feature-staging’).

This proposal describes the practical impact to users of the release train, particularly with regard to feature staging. A more detailed description of the impact on the development process is [available elsewhere][3].

Versioning and releases

The nature of development and releases differs between channels, as each serves a specific purpose: nightly is for active development, beta is for testing and bugfixing, and stable is for final releases.

Each pending version of Rust progresses in sequence through the ‘nightly’ and ‘beta’ channels before being promoted to the ‘stable’ channel, at which time the final commit is tagged and that version is considered ‘released’.

Development cycles are reduced to six weeks from the current twelve.

Under normal circumstances, the version is only bumped on the nightly branch, once per development cycle, with the release channel controlling the label (-nightly, -beta) appended to the version number. Other circumstances, such as security incidents, may require point releases on the stable channel, the policy around which is yet undetermined.

Builds of the ‘nightly’ channel are published every night based on the content of the master branch. Each published build during a single development cycle carries the same version number, e.g. ‘1.0.0-nightly’, though for debugging purposes rustc builds can be uniquely identified by reporting the commit number from which they were built. As today, published nightly artifacts are simply referred to as ‘rust-nightly’ (not named after their version number). Artifacts produced from the nightly release channel should be considered transient, though we will maintain historical archives for convenience of projects that occasionally need to pin to specific revisions.

Builds of the ‘beta’ channel are published periodically as fixes are merged, and like the ‘nightly’ channel each published build during a single development cycle retains the same version number, but can be uniquely identified by the commit number. Beta artifacts are likewise simply named ‘rust-beta’.

We will ensure that it is convenient to perform continuous integration of Cargo packages against the beta channel on Travis CI. This will help detect any accidental breakage early, while not interfering with their build status.

Stable builds are versioned and named the same as today’s releases, both with just a bare version number, e.g. ‘1.0.0’. They are published at the beginning of each development cycle and once published are never refreshed or overwritten. Provisions for stable point releases will be made at a future time.

Exceptions for the 1.0.0 beta period

Under the release train model version numbers are incremented automatically each release cycle on a predetermined schedule. Six weeks after 1.0.0 is released 1.1.0 will be released, and six weeks after that 1.2.0, etc.

The release cycles approaching 1.0.0 will break with this pattern to give us leeway to extend 1.0.0 betas for multiple cycles until we are confident the intended stability guarantees are in place.

In detail, when the development cycle begins in which we are ready to publish the 1.0.0 beta, we will not publish anything on the stable channel, and the release on the beta channel will be called 1.0.0-beta1. If 1.0.0 betas extend for multiple cycles, they will be called 1.0.0-beta2, -beta3, etc., before being promoted to the stable channel as 1.0.0 and beginning the release train process in full.

During the beta cycles, as with the normal release cycles, primary development will be on the nightly branch, with only bugfixes on the beta branch.

Feature staging

In builds of Rust distributed through the ‘beta’ and ‘stable’ release channels, it is impossible to turn on unstable features by writing the #[feature(...)] attribute. This is accomplished primarily through a new lint called unstable_features. This lint is set to allow by default in nightlies and forbid in beta and stable releases (and, because it is forbid, it cannot be disabled).

The unstable_features lint simply looks for all ‘feature’ attributes and emits the message ‘unstable feature’.

The decision to set the feature staging lint is driven by a new field of the compilation Session, disable_staged_features. When set to true the lint pass will configure the feature staging lint to ‘forbid’, with a LintSource of ReleaseChannel. When a ReleaseChannel lint is triggered, in addition to the lint’s error message, it is accompanied by the note ‘this feature may not be used in the {channel} release channel’, where {channel} is the name of the release channel.

In feature-staged builds of Rust, rustdoc sets disable_staged_features to false. Without doing so, it would not be possible for rustdoc to successfully run against e.g. the accompanying std crate, as rustdoc runs the lint pass. Additionally, in feature-staged builds, rustdoc does not generate documentation for unstable APIs for crates (read below for the impact of feature staging on unstable APIs).

With staged features disabled, the Rust build itself is not possible, and some portion of the test suite will fail. To build the compiler itself and keep the test suite working, the build system activates a hack via environment variables to disable the feature staging lint, a mechanism that is not available under typical use. The build system additionally includes a way to run the test suite with the feature staging lint enabled, providing a means of tracking what portion of the test suite can be run without invoking unstable features.

The prelude causes complications with this scheme because prelude injection presently uses two feature gates: globs, to import the prelude, and phase, to import the standard macro_rules! macros. In the short term this will be worked around with hacks in the compiler. It’s likely that these hacks can be removed before 1.0 if globs and macro_rules! imports become stable.

Merging stability attributes and feature gates

In addition to the feature gates that, in conjunction with the aforementioned unstable_features lint, manage the stable evolution of language features, Rust additionally has another independent system for managing the evolution of library features, ‘stability attributes’. This system, inspired by node.js, divides APIs into a number of stability levels: #[experimental], #[unstable], #[stable], #[frozen], #[locked], and #[deprecated], along with unmarked functions (which are in most cases considered unstable).

As a simplifying measure stability attributes are unified with feature gates, and thus tied to release channels and Rust language versions.

  • All existing stability attributes are removed of any semantic meaning by the compiler. Existing code that uses these attributes will continue to compile, but neither rustc nor rustdoc will interpret them in any way.
  • New #[staged_unstable(...)], #[staged_stable(...)], and #[staged_deprecated(...)] attributes are added.
  • All three require a feature parameter, e.g. #[staged_unstable(feature = "chicken_dinner")]. This signals that the item tagged by the attribute is part of the named feature.
  • The staged_stable and staged_deprecated attributes require an additional parameter since, whose value is equal to a version of the language (where currently the language version is equal to the compiler version), e.g. #[staged_stable(feature = "chicken_dinner", since = "1.6")].

All stability attributes continue to support an optional description parameter.

The intent of adding the ‘staged_’ prefix to the stability attributes is to leave the more desirable attribute names open for future use.

With these modifications, new API surface area becomes a new “language feature” which is controlled via the #[feature] attribute just like other normal language features. The compiler will disallow all usage of #[staged_unstable(feature = "foo")] APIs unless the current crate declares #![feature(foo)]. This enables crates to declare what API features of the standard library they rely on without opting in to all unstable API features.
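
A hedged sketch of the consumer side, reusing the RFC’s example feature name:

#![feature(chicken_dinner)]  // opts this crate in to the unstable APIs
                             // tagged with feature = "chicken_dinner"

fn main() {
    // calls to #[staged_unstable(feature = "chicken_dinner")] APIs go
    // here; note that under this proposal the compiler also rejects a
    // #![feature] declaration that goes entirely unused
}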

Examples of APIs tagged with stability attributes:

#[staged_unstable(feature = "a")]
fn foo() { }

#[staged_stable(feature = "b", since = "1.6")]
fn bar() { }

#[staged_stable(feature = "c", since = "1.6")]
#[staged_deprecated(feature = "c", since = "1.7")]
fn baz() { }

Since all feature additions to Rust are associated with a language version, source code can be finely analyzed for language compatibility. Association with distinct feature names leads to a straightforward process for tracking the progression of new features into the language. More detail on these matters below.

Some additional restrictions are enforced by the compiler as a sanity check that they are being used correctly.

  • The staged_deprecated attribute must be paired with a staged_stable attribute, enforcing that the progression of all features is from ‘staged_unstable’ to ‘staged_stable’ to ‘staged_deprecated’ and that the version in which the feature was promoted to stable is recorded and maintained as well as the version in which a feature was deprecated.
  • Within a crate, the compiler enforces that for all APIs with the same feature name where any are marked staged_stable, all are either staged_stable or staged_deprecated. In other words, no single feature may be partially promoted from unstable to stable, but features may be partially deprecated. This ensures that no APIs are accidentally excluded from stabilization and that entire features may be considered either ‘unstable’ or ‘stable’ (see the sketch below).
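
A hedged sketch of code rejected by the second check:

#[staged_stable(feature = "d", since = "1.6")]
fn stable_api() {}

#[staged_unstable(feature = "d")] // error under this proposal: feature "d"
fn unstable_api() {}              // may not be both stable and unstable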

It’s important to note that these stability attributes are only known to be useful to the standard distribution, because of the explicit linkage to language versions and release channels. There is though no mechanism to explicitly forbid their use outside of the standard distribution. A general mechanism for indicating API stability will be reconsidered in the future.

API lifecycle

These attributes alter the process of how new APIs are added to the standard library slightly. First an API will be proposed via the RFC process, and a name for the API feature being added will be assigned at that time. When the RFC is accepted, the API will be added to the standard library with an #[staged_unstable(feature = "...")] attribute indicating what feature the API was assigned to.

After receiving test coverage from nightly users (who have opted into the feature) or thorough review, all APIs with a given feature will be changed from staged_unstable to staged_stable, adding since = "..." to mark the version in which the promotion occurred, and the feature is considered stable and may be used on the stable release channel.

When a stable API becomes deprecated, the staged_deprecated attribute is added in addition to the existing staged_stable attribute, as well as recording the version in which the deprecation was performed with the since parameter.

(Occasionally unstable APIs may be deprecated for the sake of easing user transitions, in which case they receive both the staged_stable and staged_deprecated attributes at once.)

Checking #[feature]

The names of features will no longer be a hardcoded list in the compiler due to the free-form nature of the #[staged_unstable] feature names. Instead, the compiler will perform the following steps when inspecting #[feature] attributes lists:

  1. The compiler will discover all #![feature] directives enabled for the crate and calculate a list of all enabled features.
  2. While compiling, all unstable language features used will be removed from this list. If a used feature is not enabled, then an error is generated.
  3. A new pass, the stability pass, will be extracted from the current stability lint pass to detect usage of all unstable APIs. If an unstable API is used, an error is generated if the corresponding feature is not enabled, and otherwise the feature is removed from the list.
  4. If the remaining list of enabled features is not empty, then the features were not used when compiling the current crate. The compiler will generate an error in this case unconditionally.

These steps ensure that the #[feature] attribute is used exhaustively and will check unstable language and library features.

Features, Cargo and version detection

Over time, it has become clear that, with an ever-growing number of Rust releases, crates will want to be able to indicate which versions of Rust they can be compiled with. Some specific use cases are:

  • Although upgrades are highly encouraged, not all users upgrade immediately. Cargo should be able to help out with the process of downloading a new dependency and indicating that a newer version of the Rust compiler is required.
  • Not all users will be able to continuously upgrade. Some enterprises, for example, may upgrade rarely for technical reasons. In doing so, however, a large portion of the crates.io ecosystem becomes unusable once accepted features begin to propagate.
  • Developers may wish to prepare new releases of libraries during the beta channel cycle in order to have libraries ready for the next stable release. In this window, however, published versions will not be compatible with the current stable compiler (they use new features).

To solve this problem, Cargo and crates.io will grow the knowledge of the minimum required Rust language version required to compile a crate. Currently the Rust language version coincides with the version of the rustc compiler.

In the absence of user-supplied information about minimum language version requirements, Cargo will attempt to use feature information to determine version compatibility: by knowing in which version each feature of the language and each feature of the library was stabilized, and by detecting every feature used by a crate, rustc can determine the minimum version required; and rustc may assume that the crate will be compatible with future stable releases. There are two caveats: first, conditional compilation makes it impossible in some cases to detect all features in use, which may result in Cargo detecting a minimum version less than that required on all platforms. For this and other reasons, Cargo will allow the minimum version to be specified manually. Second, rustc cannot make any assumptions about compatibility across major revisions of the language.

To calculate this information, Cargo will compile crates just before publishing. In this process, the Rust compiler will record all used language features as well as all used #[staged_stable] APIs. Each compiler will contain archival knowledge of the stable version in which each language feature was added, and each #[staged_stable] API has the since metadata to tell which version of the compiler it was released in. The compiler will calculate the maximum of all these versions (language plus library features) to pass to Cargo. If any #[feature] directive is detected, however, the required Rust language version is “nightly”.

Cargo will then pass this required language version to crates.io which will both store it in the index as well as present it as part of the UI. Each crate will have a “badge” indicating what version of the Rust compiler is needed to compile it. The “badge” may indicate that the nightly or beta channels must be used if the version required has not yet been released (this happens when a crate is published on a non-stable channel). If the required language version is “nightly”, then the crate will permanently indicate that it requires the “nightly” version of the language.

When resolving dependencies, Cargo will discard all incompatible candidates based on the version of the available compiler. This will enable authors to publish crates which rely on the current beta channel while not interfering with users taking advantage of the stable channel.

Drawbacks

Adding multiple release channels and reducing the release cycle from 12 to 6 weeks both increase the amount of release engineering work required.

The major risk in feature staging is that, at the 1.0 release not enough of the language is available to foster a meaningful library ecosystem around the stable release. While we might expect many users to continue using nightly releases with or without this change, if the stable 1.0 release cannot be used in any practical sense it will be problematic from a PR perspective. Implementing this RFC will require careful attention to the libraries it affects.

Recognizing this risk, we must put in place processes to monitor the compatibility of known Cargo crates with the stable release channel, using evidence drawn from those crates to prioritize the stabilization of features and libraries. This work has already begun, with popular feature gates being ungated, and library stabilization work being prioritized based on the needs of Cargo crates.

Syntax extensions, lints, and any program using the compiler APIs will not be compatible with the stable release channel at 1.0 since it is not possible to stabilize #[plugin_registrar] in time. Plugins are very popular. This pain will partially be alleviated by a proposed Cargo feature that enables Rust code generation. macro_rules! is expected to be stable by 1.0 though.

With respect to stability attributes and Cargo, the proposed design is very specific to the standard library and the Rust compiler without being intended for use by third-party libraries. It is planned to extend Cargo’s own support for features (distinct from Rust features) to enable this form of feature development in a first-class manner through Cargo. At this time, however, there are no concrete plans for this design and it is unlikely to happen soon.

The attribute syntax differs between declaring a feature name (a string) and turning one on (an ident). This is a judgement call that in each context the given syntax looks best, accepting that, since this is a feature not intended for general use, the discrepancy is not a major problem.

Having Cargo do version detection through feature analysis is known not to be foolproof, and may present further unknown obstacles.

Alternatives

Leave feature gates and unstable APIs exposed to the stable channel, as precedented by Haskell, web vendor prefixes, and node.js.

Make the beta channel a compromise between the nightly and stable channels, allowing some set of unstable features and APIs. This would allow more projects to use a ‘more stable’ release, but would make beta no longer representative of the pending stable release.

Unresolved questions

The exact method for working around the prelude’s use of feature gates is undetermined. Fixing #18102 will complicate the situation as the prelude relies on a bug in lint checking to work at all.

Rustdoc disables the feature-staging lints so they don’t cause it to fail, but I don’t know why rustdoc needs to be running lints. It may be possible to just stop running lints in rustdoc.

If stability attributes are only for std, that takes away the #[deprecated] attribute from Cargo libs, which is more clearly applicable.

What mechanism ensures that all APIs have stability coverage? Probably they will just default to unstable with some ‘default’ feature name.

See Also

[2]: https://github.com/rust-lang/meeting-minutes/blob/master/workweek-2014-08-18/versioning.md
[3]: http://discuss.rust-lang.org/t/rfc-impending-changes-to-the-release-process/508

Summary

This RFC shores up the finer details of collections reform. In particular, where the previous RFC focused on general conventions and patterns, this RFC focuses on specific APIs. It also patches up any errors that were found during implementation of part 1. Some of these changes have already been implemented, and simply need to be ratified.

Motivation

Collections reform stabilizes “standard” interfaces, but there’s a lot that still needs to be hashed out.

Detailed design

The fate of entire collections:

  • Stable: Vec, RingBuf, HashMap, HashSet, BTreeMap, BTreeSet, DList, BinaryHeap
  • Unstable: Bitv, BitvSet, VecMap
  • Move to collect-rs for incubation: EnumSet, bitflags!, LruCache, TreeMap, TreeSet, TrieMap, TrieSet

The stable collections have solid implementations, well-maintained APIs, are non-trivial, fundamental, and clearly useful.

The unstable collections are effectively “on probation”. They’re ok, but they need some TLC and further consideration before we commit to having them in the standard library forever. Bitv in particular won’t have quite the right API without IndexGet and IndexSet.

The collections being moved out are in poor shape. EnumSet is weird/trivial, bitflags is awkward, LruCache is niche. Meanwhile Tree* and Trie* have simply bit-rotted for too long, without anyone clearly stepping up to maintain them. Their code is scary, and their APIs are out of date. Their functionality can also already reasonably be obtained through either HashMap or BTreeMap.

Of course, instead of moving them out-of-tree, they could be left experimental, but that would perhaps be a fate worse than death, as it would mean that these collections would only be accessible to those who opt into running the Rust nightly. This way, these collections will be available for everyone through the cargo ecosystem. Putting them in collect-rs also gives them a chance to still benefit from a network effect and active experimentation. If they thrive there, they may still return to the standard library at a later time.

Add the following methods:

  • To all collections
/// Moves all the elements of `other` into `Self`, leaving `other` empty.
pub fn append(&mut self, other: &mut Self)

Collections know everything about themselves, and can therefore move data more efficiently than any more generic mechanism. Vecs can safely trust their own capacity and length claims. DList and TreeMap can also reuse nodes, avoiding allocation.

This is by-ref instead of by-value for a couple of reasons. First, it adds symmetry (one doesn’t have to be owned). Second, in the case of array-based structures, it allows other’s capacity to be reused. Leaving other valid shouldn’t cost much, as almost all of our collections are basically a no-op to make an empty version of if necessary (usually it amounts to zeroing a few words of memory). BTree is the only exception the author is aware of (root is pre-allocated to avoid an Option).
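A usage sketch, written against the Vec::append that was eventually stabilized with the signature given above:

fn main() {
    let mut a = vec![1, 2, 3];
    let mut b = vec![4, 5, 6];
    a.append(&mut b);                  // moves b's elements into a
    assert_eq!(a, [1, 2, 3, 4, 5, 6]);
    assert!(b.is_empty());             // b is left empty, not consumed
}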

  • To DList, Vec, RingBuf, Bitv:
/// Splits the collection into two at the given index. Useful for similar reasons as `append`.
pub fn split_off(&mut self, at: uint) -> Self;
  • To all other “sorted” collections
/// Splits the collection into two at the given key. Returns everything after the given key,
/// including the key.
pub fn split_off<B: Borrow<K>>(&mut self, at: B) -> Self;

Similar reasoning to append, although perhaps even more needed, as there’s no other mechanism for moving an entire subrange of a collection efficiently like this. into_iterator consumes the whole collection, and using remove methods will do a lot of unnecessary work. For instance, in the case of Vec, using pop and push will involve many length changes, bounds checks, unwraps, and ultimately produce a reversed Vec.
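A usage sketch, again against the form of split_off that was later stabilized for Vec:

fn main() {
    let mut v = vec![1, 2, 3, 4, 5];
    // Moves the tail [at, len) into a new Vec in one step, with no
    // per-element pops, pushes, or bounds checks.
    let tail = v.split_off(2);
    assert_eq!(v, [1, 2]);
    assert_eq!(tail, [3, 4, 5]);
}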

  • To BitvSet, VecMap:
/// Reserves capacity for an element to be inserted at `len - 1` in the given
/// collection. The collection may reserve more space to avoid frequent reallocations.
pub fn reserve_len(&mut self, len: uint)

/// Reserves the minimum capacity for an element to be inserted at `len - 1` in the given
/// collection.
pub fn reserve_len_exact(&mut self, len: uint)

The “capacity” of these two collections isn’t really strongly related to the number of elements they hold, but rather the largest index an element is stored at. See Errata and Alternatives for extended discussion of this design.

  • For RingBuf:
/// Gets two slices that cover the whole range of the RingBuf.
/// The second one may be empty. Otherwise, it continues *after* the first.
pub fn as_slices<'a>(&'a self) -> (&'a [T], &'a [T])

This provides some amount of support for viewing the RingBuf like a slice. Unfortunately the RingBuf’s contents may wrap around the end of its internal buffer, making a single contiguous slice impossible. See Alternatives for other designs.

There is an implementation of this at rust-lang/rust#19903.

  • For Vec:
/// Resizes the `Vec` in-place so that `len()` is equal to `new_len`.
///
/// Calls either `grow()` or `truncate()` depending on whether `new_len`
/// is larger than the current value of `len()` or not.
pub fn resize(&mut self, new_len: uint, value: T) where T: Clone

This is actually easy to implement out-of-tree on top of the current Vec API, but it has been frequently requested.
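To substantiate that claim, here is a minimal out-of-tree sketch built only on the existing Vec API (truncate plus extend); the free-function name is illustrative:

use std::iter::repeat;

fn resize<T: Clone>(v: &mut Vec<T>, new_len: usize, value: T) {
    if new_len <= v.len() {
        v.truncate(new_len);
    } else {
        let extra = new_len - v.len();
        v.extend(repeat(value).take(extra));
    }
}

fn main() {
    let mut v = vec![1, 2];
    resize(&mut v, 4, 0);
    assert_eq!(v, [1, 2, 0, 0]);
    resize(&mut v, 1, 9);
    assert_eq!(v, [1]);
}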

  • For Vec, RingBuf, BinaryHeap, HashMap and HashSet:
/// Clears the container, returning its owned contents as an iterator, but keeps the
/// allocated memory for reuse.
pub fn drain(&mut self) -> Drain<T>;

This provides a way to grab elements out of a collection by value, without deallocating the storage for the collection itself.

There is a partial implementation of this at rust-lang/rust#19946.
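A usage sketch; note that the drain ultimately stabilized for Vec grew a range parameter, so drain(..) below corresponds to the whole-collection drain() proposed here:

fn main() {
    let mut v = vec![1, 2, 3];
    let drained: Vec<i32> = v.drain(..).collect();
    assert_eq!(drained, [1, 2, 3]);
    assert!(v.is_empty());
    assert!(v.capacity() >= 3);   // the allocation is kept for reuse
}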

==============

Deprecate

  • Vec::from_fn(n, f): use (0..n).map(f).collect()
  • Vec::from_elem(n, v): use repeat(v).take(n).collect()
  • Vec::grow: use extend(repeat(v).take(n))
  • Vec::grow_fn: use extend((0..n).map(f))
  • dlist::ListInsertion, in favour of inherent methods on the iterator
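For illustration, the first two replacements in today’s range and iterator syntax:

use std::iter::repeat;

fn main() {
    // Vec::from_fn(5, |i| i * i):
    let squares: Vec<usize> = (0..5).map(|i| i * i).collect();
    assert_eq!(squares, [0, 1, 4, 9, 16]);

    // Vec::from_elem(4, 0u8):
    let zeros: Vec<u8> = repeat(0u8).take(4).collect();
    assert_eq!(zeros, [0, 0, 0, 0]);
}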

==============

Misc Stabilization:

  • Rename BinaryHeap::top to BinaryHeap::peek. peek is a clearer name than top, and is already used elsewhere in our APIs.

  • Bitv::get, Bitv::set, where set panics on OOB, and get returns an Option. set may want to wait on IndexSet being a thing (see Alternatives).

  • Rename SmallIntMap to VecMap. (already done)

  • Stabilize front/back/front_mut/back_mut for peeking on the ends of Deques

  • Explicitly specify HashMap’s iterators to be non-deterministic between iterations. This would allow e.g. next_back to be implemented as next, reducing code complexity. This can be undone in the future backwards-compatibly, but the reverse does not hold.

  • Move Vec from std::vec to std::collections::vec.

  • Stabilize RingBuf::swap

==============

Clarifications and Errata from Part 1

  • Not every collection can implement every kind of iterator. This RFC simply wishes to clarify that iterator implementation should be a “best effort” for what makes sense for the collection.

  • Bitv was marked as having explicit growth capacity semantics, when in fact it is implicit growth. It has the same semantics as Vec.

  • BitvSet and VecMap are part of a surprise fourth capacity class, which isn’t really based on the number of elements contained, but on the maximum index stored. This RFC proposes the name of maximum growth.

  • reserve(x) should specifically reserve space for x + len() elements, as opposed to e.g. x + capacity() elements.

  • Capacity methods should be based on a “best effort” model:

    • capacity() can be regarded as a lower bound on the number of elements that can be inserted before a resize occurs. It is acceptable for more elements to be insertable. A collection may also randomly resize before capacity is met if highly degenerate behaviour occurs. This is relevant to HashMap, which due to its use of integer multiplication cannot precisely compute its “true” capacity. It also may wish to resize early if a long chain of collisions occurs. Note that Vec should make clear guarantees about the precision of capacity, as this is important for unsafe usage.

    • reserve_exact may be subverted by the collection’s own requirements (e.g. many collections require a capacity related to a power of two for fast modular arithmetic). The allocator may also give the collection more space than it requests, in which case it may as well use that space. It will still give you at least as much capacity as you request.

    • shrink_to_fit may not shrink to the true minimum size for similar reasons as reserve_exact.

    • Neither reserve nor reserve_exact can be trusted to reliably produce a specific capacity. At best you can guarantee that there will be space for the number you ask for. Although even then capacity itself may return a smaller number due to its own fuzziness.

==============

Entry API V2.0

The old Entry API:

impl Map<K, V> {
    fn entry<'a>(&'a mut self, key: K) -> Entry<'a, K, V>
}

pub enum Entry<'a, K: 'a, V: 'a> {
    Occupied(OccupiedEntry<'a, K, V>),
    Vacant(VacantEntry<'a, K, V>),
}

impl<'a, K, V> VacantEntry<'a, K, V> {
    fn set(self, value: V) -> &'a mut V
}

impl<'a, K, V> OccupiedEntry<'a, K, V> {
    fn get(&self) -> &V
    fn get_mut(&mut self) -> &mut V
    fn into_mut(self) -> &'a mut V
    fn set(&mut self, value: V) -> V
    fn take(self) -> V
}

Based on feedback and collections reform landing, this RFC proposes the following new API:

impl Map<K, V> {
    fn entry<'a, O: ToOwned<K>>(&'a mut self, key: &O) -> Entry<'a, O, V>
}

pub enum Entry<'a, O: 'a, V: 'a> {
    Occupied(OccupiedEntry<'a, O, V>),
    Vacant(VacantEntry<'a, O, V>),
}

impl<'a, O: 'a, V: 'a> Entry<'a, O, V> {
    fn get(self) -> Result<&'a mut V, VacantEntry<'a, O, V>>
}

impl<'a, K, V> VacantEntry<'a, K, V> {
    fn insert(self, value: V) -> &'a mut V
}

impl<'a, K, V> OccupiedEntry<'a, K, V> {
    fn get(&self) -> &V
    fn get_mut(&mut self) -> &mut V
    fn into_mut(self) -> &'a mut V
    fn insert(&mut self, value: V) -> V
    fn remove(self) -> V
}

Replacing get/get_mut with Deref is simply a nice ergonomic improvement. Renaming set and take to insert and remove brings the API more in line with other collection APIs, and makes it clearer what they do. The convenience method on Entry itself makes the API nicer to use, permitting the following: map.entry(key).get().or_else(|vacant| vacant.insert(Vec::new())).

This API should be stabilized for 1.0 with the exception of the impl on Entry itself.
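For comparison, the entry API that was ultimately stabilized takes the key by value and replaces the Result-returning convenience with or_insert/or_insert_with, but the usage pattern motivated above survives:

use std::collections::HashMap;

fn main() {
    let mut map: HashMap<String, Vec<i32>> = HashMap::new();
    map.entry("scores".to_string())
       .or_insert_with(Vec::new)      // the vacant-insertion convenience
       .push(42);
    assert_eq!(map["scores"], [42]);
}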

Alternatives

Traits vs Inherent Impls on Entries

The Entry API as proposed would leave Entry and its two variants defined by each collection. We could instead make the actual concrete VacantEntry/OccupiedEntry implementors implement a trait. This would allow Entry to be hoisted up to the root of collections, with utility functions implemented once, as well as requiring only one import when using multiple collections. This would require that the traits be imported, unless we get inherent trait implementations.

These traits can of course be introduced later.

==============

Alternatives to ToOwned on Entries

The Entry API as currently proposed is a bit wasteful in the by-value key case. If, for instance, a user of a HashMap<String, _> happens to have a String they don’t mind losing, they can’t pass the String by-value to the Map. They must pass it by-reference, and have it get cloned.

One solution to this is to actually have the bound be IntoCow. This will potentially have some runtime overhead, but it should be dwarfed by the cost of an insertion anyway, and would be a clear win in the by-value case.

Another alternative would be an IntoOwned trait, which would have the signature (self) -> Owned, as opposed to the current ToOwned (&self) -> Owned. IntoOwned more closely matches the semantics we actually want for our entry keys, because we really don’t care about preserving them after the conversion. This would allow us to dispatch to either a no-op or a full clone as necessary. This trait would also be appropriate for the CoW type, and in fact all of our current uses of the type. However the relationship between FromBorrow and IntoOwned is currently awkward to express with our type system, as it would have to be implemented e.g. for &str instead of str. IntoOwned also has trouble co-existing “fully” with ToOwned due to current lack of negative bounds in where clauses. That is, we would want a blanket impl of IntoOwned for ToOwned, but this can’t be properly expressed for coherence reasons.

This RFC does not propose either of these designs in favour of choosing the conservative ToOwned now, with the possibility of “upgrading” into IntoOwned, IntoCow, or something else when we have a better view of the type-system landscape.

==============

Don’t stabilize Bitv::set

We could wait for IndexSet, or make set return a Result. set really is redundant with an IndexSet implementation, and we don’t like to provide redundant APIs. On the other hand, it’s kind of weird to have only get.

==============

reserve_index vs reserve_len

reserve_len is primarily motivated by BitvSet and VecMap, whose capacity semantics are largely based around the largest index they have set, and not the number of elements they contain. This design was chosen for its equivalence to with_capacity, as well as possible future-proofing for adding it to other collections like Vec or RingBuf.

However one could instead opt for reserve_index, which is effectively the same method but with an off-by-one: that is, reserve_len(x) == reserve_index(x - 1). This more closely matches the intent (“let me have index 7”), but has a tricky off-by-one relationship with capacity.

Alternatively reserve_len could just be called reserve_capacity.

==============

RingBuf as_slice

Other designs for this usecase were considered:

/// Attempts to get a slice over all the elements in the RingBuf, but may instead
/// have to return two slices, in the case that the elements aren't contiguous.
pub fn as_slice(&'a self) -> RingBufSlice<'a, T>

enum RingBufSlice<'a, T> {
    Contiguous(&'a [T]),
    Split((&'a [T], &'a [T])),
}

/// Gets a slice over all the elements in the RingBuf. This may require shifting
/// all the elements to make this possible.
pub fn to_slice(&mut self) -> &[T]

The one settled on had the benefit of being the simplest. In particular, having the enum wasn’t very helpful, because most code would just create an empty slice anyway in the contiguous case to avoid code-duplication.

Unresolved questions

reserve_index vs reserve_len and RingBuf::as_slices are the two major ones.

Summary

This RFC proposes a significant redesign of the std::io and std::os modules in preparation for API stabilization. The specific problems addressed by the redesign are given in the Problems section below, and the key ideas of the design are given in Vision for IO.

Note about RFC structure

This RFC was originally posted as a single monolithic file, which made it difficult to discuss different parts separately.

It has now been split into a skeleton that covers (1) the problem statement, (2) the overall vision and organization, and (3) the std::os module.

Other parts of the RFC are marked with (stub) and will be filed as follow-up PRs against this RFC.

Table of contents

Problems

The io and os modules are the last large API surfaces of std that need to be stabilized. While the basic functionality offered in these modules is largely traditional, many problems with the APIs have emerged over time. The RFC discusses the most significant problems below.

This section only covers specific problems with the current library; see Vision for IO for a higher-level view.

Atomicity and the Reader/Writer traits

One of the most pressing – but also most subtle – problems with std::io is the lack of atomicity in its Reader and Writer traits.

For example, the Reader trait offers a read_to_end method:

fn read_to_end(&mut self) -> IoResult<Vec<u8>>

Executing this method may involve many calls to the underlying read method. And it is possible that the first several calls succeed, and then a call returns an Err – which, like TimedOut, could represent a transient problem. Unfortunately, given the above signature, there is no choice but to simply throw this data away.

The Writer trait suffers from a more fundamental problem, since its primary method, write, may actually involve several calls to the underlying system – and if a failure occurs, there is no indication of how much was written.

Existing blocking APIs all have to deal with this problem, and Rust can and should follow the existing tradition here. See Revising Reader and Writer for the proposed solution.

Timeouts

The std::io module supports “timeouts” on virtually all IO objects via a set_timeout method. In this design, every IO object (file, socket, etc.) has an optional timeout associated with it, and set_timeout mutates the associated timeout. All subsequent blocking operations are implicitly subject to this timeout.

This API choice suffers from two problems, one cosmetic and the other deeper:

  • The “timeout” is actually a deadline and should be named accordingly.

  • The stateful API has poor composability: when passing a mutable reference of an IO object to another function, it’s possible that the deadline has been changed. In other words, users of the API can easily interfere with each other by accident.

See Deadlines for the proposed solution.

Posix and libuv bias

The current io and os modules were originally designed when librustuv was providing IO support, and to some extent they reflect the capabilities and conventions of libuv – which in turn are loosely based on Posix.

As such, the modules are not always ideal from a cross-platform standpoint, both in terms of forcing Windows programming into a Posix mold, and in terms of offering APIs that are not actually usable on all platforms.

The modules have historically also provided no platform-specific APIs.

Part of the goal of this RFC is to set out a clear and extensible story for both cross-platform and platform-specific APIs in std. See Design principles for the details.

Unicode

Rust has followed the UTF-8 everywhere approach to its strings. However, at the borders to platform APIs, it is revealed that the world is not, in fact, UTF-8 (or even Unicode) everywhere.

Currently our story for platform APIs is that we either assume they can take or return Unicode strings (suitably encoded) or an uninterpreted byte sequence. Sadly, this approach does not actually cover all platform needs, and is also not highly ergonomic as presently implemented. (Consider os::getenv which introduces replacement characters (!) versus os::getenv_as_bytes which yields a Vec<u8>; neither is ideal.)

This topic was covered in some detail in the Path Reform RFC, but this RFC gives a more general account in String handling.

stdio

The stdio module provides access to readers/writers for stdin, stdout and stderr, which is essential functionality. However, it also provides a means of changing e.g. “stdout” – but there is no connection between these two! In particular, set_stdout affects only the writer that println! and friends use, while set_stderr affects panic!.

This module needs to be clarified. See The std::io facade and [Functionality moved elsewhere] for the detailed design.

Overly high-level abstractions

There are a few places where io provides high-level abstractions over system services without also providing more direct access to the service as-is. For example:

  • The Writer trait’s write method – a cornerstone of IO – actually corresponds to an unbounded number of invocations of writes to the underlying IO object. This RFC changes write to follow more standard, lower-level practice; see Revising Reader and Writer.

  • Objects like TcpStream are Clone, which involves a fair amount of supporting infrastructure. This RFC tackles the problems that Clone was trying to solve more directly; see Splitting streams and cancellation.

The motivation for going lower-level is described in Design principles below.

The error chaining pattern

The std::io module is somewhat unusual in that most of the functionality it provides is used through a few key traits (like Reader) and these traits are in turn “lifted” over IoResult:

impl<R: Reader> Reader for IoResult<R> { ... }

This lifting, and others like it, makes it possible to chain IO operations that might produce errors, without any explicit mention of error handling:

File::open(some_path).read_to_end()
                      ^~~~~~~~~~~ can produce an error
      ^~~~ can produce an error

The result of such a chain is either Ok of the outcome, or Err of the first error.

While this pattern is highly ergonomic, it does not fit particularly well into our evolving error story (interoperation or try blocks), and it is the only module in std to follow this pattern.

Eventually, we would like to write

File::open(some_path)?.read_to_end()

to take advantage of the FromError infrastructure, hook into error handling control flow, and to provide good chaining ergonomics throughout all Rust APIs – all while keeping this handling a bit more explicit via the ? operator. (See https://github.com/rust-lang/rfcs/pull/243 for the rough direction).

In the meantime, this RFC proposes to phase out the use of impls for IoResult. This will require use of try! for the time being.

(Note: this may put some additional pressure on at least landing the basic use of ? instead of today’s try! before 1.0 final.)

Detailed design

There’s a lot of material here, so the RFC starts with high-level goals, principles, and organization, and then works its way through the various modules involved.

Vision for IO

Rust’s IO story has undergone significant evolution, starting from a libuv-style pure green-threaded model to a dual green/native model and now to a pure native model. Given that history, it’s worthwhile to set out explicitly what is, and is not, in scope for std::io.

Goals

For Rust 1.0, the aim is to:

  • Provide a blocking API based directly on the services provided by the native OS for native threads.

    These APIs should cover the basics (files, basic networking, basic process management, etc) and suffice to write servers following the classic Apache thread-per-connection model. They should impose essentially zero cost over the underlying OS services; the core APIs should map down to a single syscall unless more are needed for cross-platform compatibility.

  • Provide basic blocking abstractions and building blocks (various stream and buffer types and adapters) based on traditional blocking IO models but adapted to fit well within Rust.

  • Provide hooks for integrating with low-level and/or platform-specific APIs.

  • Ensure reasonable forwards-compatibility with future async IO models.

It is explicitly not a goal at this time to support asynchronous programming models or nonblocking IO, nor is it a goal for the blocking APIs to eventually be used in a nonblocking “mode” or style.

Rather, the hope is that the basic abstractions of files, paths, sockets, and so on will eventually be usable directly within an async IO programming model and/or with nonblocking APIs. This is the case for most existing languages, which offer multiple interoperating IO models.

The long term intent is certainly to support async IO in some form, but doing so will require new research and experimentation.

Design principles

Now that the scope has been clarified, it’s important to lay out some broad principles for the io and os modules. Many of these principles are already being followed to some extent, but this RFC makes them more explicit and applies them more uniformly.

What cross-platform means

Historically, Rust’s std has always been “cross-platform”, but as discussed in Posix and libuv bias this hasn’t always played out perfectly. The proposed policy is below. With these policies, the APIs should largely feel like part of “Rust” rather than part of any legacy, and they should enable truly portable code.

Except for an explicit opt-in (see Platform-specific opt-in below), all APIs in std should be cross-platform:

  • The APIs should only expose a service or a configuration if it is supported on all platforms, and if the semantics on those platforms is or can be made loosely equivalent. (The latter requires exercising some judgment). Platform-specific functionality can be handled separately (Platform-specific opt-in) and interoperate with normal std abstractions.

    This policy rules out functions like chown which have a clear meaning on Unix and no clear interpretation on Windows; the ownership and permissions models are very different.

  • The APIs should follow Rust’s conventions, including their naming, which should be platform-neutral.

    This policy rules out names like fstat that are the legacy of a particular platform family.

  • The APIs should never directly expose the representation of underlying platform types, even if they happen to coincide on the currently-supported platforms. Cross-platform types in std should be newtyped.

    This policy rules out exposing e.g. error numbers directly as an integer type.

The next subsection gives detail on what these APIs should look like in relation to system services.

Relation to the system-level APIs

How should Rust APIs map into system services? This question breaks down along several axes which are in tension with one another:

  • Guarantees. The APIs provided in the mainline io modules should be predominantly safe, aside from the occasional unsafe function. In particular, the representation should be sufficiently hidden that most use cases are safe by construction. Beyond memory safety, though, the APIs should strive to provide a clear multithreaded semantics (using the Send/Sync kinds), and should use Rust’s type system to rule out various kinds of bugs when it is reasonably ergonomic to do so (following the usual Rust conventions).

  • Ergonomics. The APIs should present a Rust view of things, making use of the trait system, newtypes, and so on to make system services fit well with the rest of Rust.

  • Abstraction/cost. On the other hand, the abstractions introduced in std must not induce significant costs over the system services – or at least, there must be a way to safely access the services directly without incurring this penalty. When useful abstractions would impose an extra cost, they must be pay-as-you-go.

Putting the above bullets together, the abstractions must be safe, and they should be as high-level as possible without imposing a tax.

  • Coverage. Finally, the std APIs should over time strive for full coverage of non-niche, cross-platform capabilities.

Platform-specific opt-in

Rust is a systems language, and as such it should expose seamless, no/low-cost access to system services. In many cases, however, this cannot be done in a cross-platform way, either because a given service is only available on some platforms, or because providing a cross-platform abstraction over it would be costly.

This RFC proposes platform-specific opt-in: submodules of os that are named by platform, and made available via #[cfg] switches. For example, os::unix can provide APIs only available on Unix systems, and os::linux can drill further down into Linux-only APIs. (You could even imagine subdividing by OS versions.) This is “opt-in” in the sense that, like the unsafe keyword, it is very easy to audit for potential platform-specificity: just search for os::anyplatform. Moreover, by separating out subsets like linux, it’s clear exactly how specific the platform dependency is.

The APIs in these submodules are intended to have the same flavor as other io APIs and should interoperate seamlessly with cross-platform types, but:

  • They should be named according to the underlying system services when there is a close correspondence.

  • They may reveal the underlying OS type if there is nothing to be gained by hiding it behind an abstraction.

For example, the os::unix module could provide a stat function that takes a standard Path and yields a custom struct. More interestingly, os::linux might include an epoll function that could operate directly on many io types (e.g. various socket types), without any explicit conversion to a file descriptor; that’s what “seamless” means.

Each of the platform modules will offer a custom prelude submodule, intended for glob import, that includes all of the extension traits applied to standard IO objects.

The precise design of these modules is in the very early stages and will likely remain #[unstable] for some time.

Proposed organization

The io module is currently the biggest in std, with an entire hierarchy nested underneath; it mixes general abstractions/tools with specific IO objects. The os module is currently a bit of a dumping ground for facilities that don’t fit into the io category.

This RFC proposes to revamp the organization by flattening out the hierarchy and clarifying the role of each module:

std
  env           environment manipulation
  fs            file system
  io            core io abstractions/adapters
    prelude     the io prelude
  net           networking
  os
    unix        platform-specific APIs
    linux         ..
    windows       ..
  os_str        platform-sensitive string handling
  process       process management

In particular:

  • The contents of os will largely move to env, a new module for inspecting and updating the “environment” (including environment variables, CPU counts, arguments to main, and so on).

  • The io module will include things like Reader and BufferedWriter – cross-cutting abstractions that are needed throughout IO.

    The prelude submodule will export all of the traits and most of the types for IO-related APIs; a single glob import should suffice to set you up for working with IO. (Note: this goes hand-in-hand with removing the bits of io currently in the prelude, as recently proposed.)

  • The root os module is used purely to house the platform submodules discussed above.

  • The os_str module is part of the solution to the Unicode problem; see String handling below.

  • The process module over time will grow to include querying/manipulating already-running processes, not just spawning them.

Revising Reader and Writer

The Reader and Writer traits are the backbone of IO, representing the ability to (respectively) pull bytes from and push bytes to an IO object. The core operations provided by these traits follow a very long tradition for blocking IO, but they are still surprisingly subtle – and they need to be revised.

  • Atomicity and data loss. As discussed above, the Reader and Writer traits currently expose methods that involve multiple actual reads or writes, and data is lost when an error occurs after some (but not all) operations have completed.

    The proposed strategy for Reader operations is to (1) separate out various deserialization methods into a distinct framework, (2) never have the internal read implementations loop on errors, (3) cut down on the number of non-atomic read operations and (4) adjust the remaining operations to provide more flexibility when possible.

    For writers, the main change is to make write only perform a single underlying write (returning the number of bytes written on success), and provide a separate write_all method.

  • Parsing/serialization. The Reader and Writer traits currently provide a large number of default methods for (de)serialization of various integer types to bytes with a given endianness. Unfortunately, these operations pose atomicity problems as well (e.g., a read could fail after reading two of the bytes needed for a u32 value).

    Rather than complicate the signatures of these methods, the (de)serialization infrastructure is removed entirely – in favor of instead eventually introducing a much richer parsing/formatting/(de)serialization framework that works seamlessly with Reader and Writer.

    Such a framework is out of scope for this RFC, but the endian-sensitive functionality will be provided elsewhere (likely out of tree).

With those general points out of the way, let’s look at the details.

Read

The updated Reader trait (and its extension) is as follows:

trait Read {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize, Error>;

    fn read_to_end(&mut self, buf: &mut Vec<u8>) -> Result<(), Error> { ... }
    fn read_to_string(&mut self, buf: &mut String) -> Result<(), Error> { ... }
}

// extension trait needed for object safety
trait ReadExt: Read {
    fn bytes(&mut self) -> Bytes<Self> { ... }

    ... // more to come later in the RFC
}
impl<R: Read> ReadExt for R {}

Following the trait naming conventions, the trait is renamed to Read, reflecting the clear primary method it provides.

The read method should not involve internal looping (even over errors like EINTR). It is intended to faithfully represent a single call to an underlying system API.

The read_to_end and read_to_string methods now take explicit buffers as input. This has multiple benefits:

  • Performance. When it is known that reading will involve some large number of bytes, the buffer can be preallocated in advance.

  • “Atomicity” concerns. For read_to_end, it’s possible to use this API to retain data collected so far even when a read fails in the middle. For read_to_string, this is not the case, because UTF-8 validity cannot be ensured in such cases; but if intermediate results are wanted, one can use read_to_end and convert to a String only at the end.

Convenience methods like these will retry on EINTR. This is partly under the assumption that in practice, EINTR will most often arise when interfacing with other code that changes a signal handler. Due to the global nature of these interactions, such a change can suddenly cause your own code to get an error irrelevant to it, and the code should probably just retry in those cases. In the case where you are using EINTR explicitly, read and write will be available to handle it (and you can always build your own abstractions on top).
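A sketch of this convention in terms of the io::ErrorKind::Interrupted that later stabilized: a convenience wrapper retries interrupted reads rather than surfacing them.

use std::io::{self, Read};

/// Performs a single logical read, transparently retrying on EINTR.
fn read_retrying<R: Read>(r: &mut R, buf: &mut [u8]) -> io::Result<usize> {
    loop {
        match r.read(buf) {
            Err(ref e) if e.kind() == io::ErrorKind::Interrupted => continue,
            other => return other,
        }
    }
}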

Removed methods

The proposed Read trait is much slimmer than today’s Reader. The vast majority of removed methods are parsing/deserialization, which were discussed above.

The remaining methods (read_exact, read_at_least, push, push_at_least) were removed for various reasons:

  • read_exact, read_at_least: these are somewhat more obscure conveniences that are not particularly robust due to lack of atomicity.

  • push, push_at_least: these are special-cases for working with Vec, which this RFC proposes to replace with a more general mechanism described next.

To provide some of this functionality in a more compositional way, extend Vec<T> with an unsafe method:

unsafe fn with_extra(&mut self, n: uint) -> &mut [T];

This method is equivalent to calling reserve(n) and then providing a slice to the memory starting just after len() entries. Using this method, clients of Read can easily recover the push method.
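For example, a client could recover push-style reading. This sketch substitutes today’s safe Vec API (resize plus truncate) for the unsafe with_extra above, trading a zero-initialization cost for safety; the helper name is illustrative:

use std::io::{self, Read};

/// Reads up to `n` bytes, appending whatever was read to `buf`.
fn push_read<R: Read>(r: &mut R, buf: &mut Vec<u8>, n: usize) -> io::Result<usize> {
    let len = buf.len();
    buf.resize(len + n, 0);                        // zero-init the extra space
    let result = r.read(&mut buf[len..]);
    // Shrink back to the data actually present, whether or not we errored.
    buf.truncate(len + *result.as_ref().unwrap_or(&0));
    result
}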

Write

The Writer trait is cut down to even smaller size:

trait Write {
    fn write(&mut self, buf: &[u8]) -> Result<uint, Error>;
    fn flush(&mut self) -> Result<(), Error>;

    fn write_all(&mut self, buf: &[u8]) -> Result<(), Error> { .. }
    fn write_fmt(&mut self, fmt: &fmt::Arguments) -> Result<(), Error> { .. }
}

The biggest change here is to the semantics of write. Instead of repeatedly writing to the underlying IO object until all of buf is written, it attempts a single write and on success returns the number of bytes written. This follows the long tradition of blocking IO, and is a more fundamental building block than the looping write we currently have. Like read, it will propagate EINTR.

For convenience, write_all recovers the behavior of today’s write, looping until either the entire buffer is written or an error occurs. To meaningfully recover from an intermediate error and keep writing, code should work with write directly. Like the Read conveniences, EINTR results in a retry.
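A sketch of write_all in terms of the single-shot write, matching the semantics described above (loop until done, retry EINTR); treating a zero-length write as an error is this sketch’s choice, not something the RFC specifies:

use std::io::{self, Write};

fn write_all<W: Write>(w: &mut W, mut buf: &[u8]) -> io::Result<()> {
    while !buf.is_empty() {
        match w.write(buf) {
            Ok(0) => return Err(io::Error::new(io::ErrorKind::WriteZero,
                                               "failed to write whole buffer")),
            Ok(n) => buf = &buf[n..],   // advance past the written bytes
            Err(ref e) if e.kind() == io::ErrorKind::Interrupted => {}
            Err(e) => return Err(e),
        }
    }
    Ok(())
}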

The write_fmt method, like write_all, will loop until its entire input is written or an error occurs.

The other methods include endian conversions (covered by serialization) and a few conveniences like write_str for other basic types. The latter, at least, is already uniformly (and extensibly) covered via the write! macro. The other helpers, as with Read, should migrate into a more general (de)serialization library.

String handling

The fundamental problem with Rust’s full embrace of UTF-8 strings is that not all strings taken or returned by system APIs are Unicode, let alone UTF-8 encoded.

In the past, std has assumed that all strings are either in some form of Unicode (Windows), or are simply u8 sequences (Unix). Unfortunately, this is wrong, and the situation is more subtle:

  • Unix platforms do indeed work with arbitrary u8 sequences (without interior nulls) and today’s platforms usually interpret them as UTF-8 when displayed.

  • Windows, however, works with arbitrary u16 sequences that are roughly interpreted as UTF-16, but may not actually be valid UTF-16 – an “encoding” often called UCS-2; see http://justsolve.archiveteam.org/wiki/UCS-2 for a bit more detail.

What this means is that all of Rust’s platforms go beyond Unicode, but they do so in different and incompatible ways.

The current solution of providing both str and [u8] versions of APIs is therefore problematic for multiple reasons. For one, the [u8] versions are not actually cross-platform – even today, they panic on Windows when given non-UTF-8 data, a platform-specific behavior. But they are also incomplete, because on Windows you should be able to work directly with UCS-2 data.

Key observations

Fortunately, there is a solution that fits well with Rust’s UTF-8 strings and offers the possibility of platform-specific APIs.

Observation 1: it is possible to re-encode UCS-2 data in a way that is also compatible with UTF-8. This is the WTF-8 encoding format proposed by Simon Sapin. This encoding has some remarkable properties:

  • Valid UTF-8 data is valid WTF-8 data. When decoded to UCS-2, the result is exactly what would be produced by going straight from UTF-8 to UTF-16. In other words, making up some methods:

    my_utf8_data.to_wtf8().to_ucs2().as_u16_slice() == my_utf8_data.to_utf16().as_u16_slice()
  • Valid UTF-16 data re-encoded as WTF-8 produces the corresponding UTF-8 data:

    my_utf16_data.to_wtf8().as_bytes() == my_utf16_data.to_utf8().as_bytes()

These two properties mean that, when working with Unicode data, the WTF-8 encoding is highly compatible with both UTF-8 and UTF-16. In particular, the conversion from a Rust string to a WTF-8 string is a no-op, and the conversion in the other direction is just a validation.

Observation 2: all platforms can consume Unicode data (suitably re-encoded), and it’s also possible to validate the data they produce as Unicode and extract it.

Observation 3: the non-Unicode spaces on various platforms are deeply incompatible: there is no standard way to port non-Unicode data from one to another. Therefore, the only cross-platform APIs are those that work entirely with Unicode.

The design: os_str

The observations above lead to a somewhat radical new treatment of strings, first proposed in the Path Reform RFC. This RFC proposes to introduce new string and string slice types that (opaquely) represent platform-sensitive strings, housed in the std::os_str module.

The OsString type is analogous to String, and OsStr is analogous to str. Their backing implementation is platform-dependent, but they offer a cross-platform API:

pub mod os_str {
    /// Owned OS strings
    struct OsString {
        inner: imp::Buf
    }
    /// Slices into OS strings
    struct OsStr {
        inner: imp::Slice
    }

    // Platform-specific implementation details:
    #[cfg(unix)]
    mod imp {
        type Buf = Vec<u8>;
        type Slice = [u8];
        ...
    }

    #[cfg(windows)]
    mod imp {
        type Buf = Wtf8Buf; // See https://github.com/SimonSapin/rust-wtf8
        type Slice = Wtf8;
        ...
    }

    impl OsString {
        pub fn from_string(String) -> OsString;
        pub fn from_str(&str) -> OsString;
        pub fn as_slice(&self) -> &OsStr;
        pub fn into_string(Self) -> Result<String, OsString>;
        pub fn into_string_lossy(Self) -> String;

        // and ultimately other functionality typically found on vectors,
        // but CRUCIALLY NOT as_bytes
    }

    impl Deref<OsStr> for OsString { ... }

    impl OsStr {
        pub fn from_str(value: &str) -> &OsStr;
        pub fn as_str(&self) -> Option<&str>;
        pub fn to_string_lossy(&self) -> CowString;

        // and ultimately other functionality typically found on slices,
        // but CRUCIALLY NOT as_bytes
    }

    trait IntoOsString {
        fn into_os_str_buf(self) -> OsString;
    }

    impl IntoOsString for OsString { ... }
    impl<'a> IntoOsString for &'a OsStr { ... }

    ...
}

These APIs make OS strings appear roughly as opaque vectors (you cannot see the byte representation directly), and can always be produced starting from Unicode data. They make it possible to collapse functions like getenv and getenv_as_bytes into a single function that produces an OS string, allowing the client to decide how (or whether) to extract Unicode data. It will be possible to do things like concatenate OS strings without ever going through Unicode.

It will also likely be possible to do things like search for Unicode substrings. The exact details of the API are left open and are likely to grow over time.
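A round-trip sketch using the OsString type as it eventually landed in std::ffi: Unicode data converts in at no cost and converts back out via validation.

use std::ffi::OsString;

fn main() {
    let os: OsString = OsString::from("café");   // no-op re-encoding
    let back: String = os.into_string().expect("started as valid Unicode");
    assert_eq!(back, "café");
}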

In addition to APIs like the above, there will also be platform-specific ways of viewing or constructing OS strings that reveal more about the space of possible values:

pub mod os {
    #[cfg(unix)]
    pub mod unix {
        trait OsStringExt {
            fn from_vec(Vec<u8>) -> Self;
            fn into_vec(Self) -> Vec<u8>;
        }

        impl OsStringExt for os_str::OsString { ... }

        trait OsStrExt {
            fn as_byte_slice(&self) -> &[u8];
            fn from_byte_slice(&[u8]) -> &Self;
        }

        impl OsStrExt for os_str::OsStr { ... }

        ...
    }

    #[cfg(windows)]
    pub mod windows {
        // The following extension traits provide a UCS-2 view of OS strings

        trait OsStringExt {
            fn from_wide_slice(&[u16]) -> Self;
        }

        impl OsStringExt for os_str::OsString { ... }

        trait OsStrExt {
            fn to_wide_vec(&self) -> Vec<u16>;
        }

        impl OsStrExt for os_str::OsStr { ... }

        ...
    }

    ...
}

By placing these APIs under os, using them requires a clear opt in to platform-specific functionality.
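As a sketch of what this opt-in looks like with the extension traits that eventually landed (under std::os::unix::ffi rather than os::unix directly):

#[cfg(unix)]
fn main() {
    use std::ffi::OsStr;
    use std::os::unix::ffi::OsStrExt;   // the explicit platform opt-in
    let s = OsStr::new("hello");
    assert_eq!(s.as_bytes(), b"hello"); // byte view: Unix-only
}

#[cfg(not(unix))]
fn main() {}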

The future

Introducing an additional string type is a bit daunting, since many existing APIs take and consume only standard Rust strings. Today’s solution demands that strings coming from the OS be assumed or turned into Unicode, and the proposed API continues to allow that (with more explicit and finer-grained control).

In the long run, however, robust applications are likely to work opaquely with OS strings far beyond the boundary to the system to avoid data loss and ensure maximal compatibility. If this situation becomes common, it should be possible to introduce an abstraction over various string types and generalize most functions that work with String/str to instead work generically. This RFC does not propose taking any such steps now – but it’s important that we can do so later if Rust’s standard strings turn out to not be sufficient and OS strings become commonplace.

Deadlines

To be added in a follow-up PR.

Splitting streams and cancellation

To be added in a follow-up PR.

Modules

Now that we’ve covered the core principles and techniques used throughout IO, we can go on to explore the modules in detail.

core::io

Ideally, the io module will be split into the parts that can live in libcore (most of it) and the parts that are added in the std::io facade. This part of the organization is non-normative, since it requires changes to today’s IoError (which currently references String); if these changes cannot be performed, everything here will live in std::io.

Adapters

The current std::io::util module offers a number of Reader and Writer “adapters”. This RFC refactors the design to more closely follow std::iter. Along the way, it generalizes the by_ref adapter:

trait ReadExt: Read {
    // ... eliding the methods already described above

    // Postfix version of `(&mut self)`
    fn by_ref(&mut self) -> &mut Self { ... }

    // Read everything from `self`, then read from `next`
    fn chain<R: Read>(self, next: R) -> Chain<Self, R> { ... }

    // Adapt `self` to yield only the first `limit` bytes
    fn take(self, limit: u64) -> Take<Self> { ... }

    // Whenever reading from `self`, push the bytes read to `out`
    #[unstable] // uncertain semantics of errors "halfway through the operation"
    fn tee<W: Write>(self, out: W) -> Tee<Self, W> { ... }
}

trait WriteExt: Write {
    // Postfix version of `(&mut self)`
    fn by_ref<'a>(&'a mut self) -> &mut Self { ... }

    // Whenever bytes are written to `self`, write them to `other` as well
    #[unstable] // uncertain semantics of errors "halfway through the operation"
    fn broadcast<W: Write>(self, other: W) -> Broadcast<Self, W> { ... }
}

// An adaptor converting an `Iterator<u8>` to `Read`.
pub struct IterReader<T> { ... }

As with std::iter, these adapters are object unsafe and hence placed in an extension trait with a blanket impl.
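A usage sketch of the chain and take adapters as they later stabilized (on std::io::Read itself, where &[u8] implements Read):

use std::io::Read;

fn main() -> std::io::Result<()> {
    let first: &[u8] = b"hello ";
    let second: &[u8] = b"world";
    let mut out = String::new();
    // Compose readers much like iterator adapters:
    first.chain(second).take(8).read_to_string(&mut out)?;
    assert_eq!(out, "hello wo");
    Ok(())
}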

Free functions

The current std::io::util module also includes a number of primitive readers and writers, as well as copy. These are updated as follows:

// A reader that yields no bytes
fn empty() -> Empty; // in theory just returns `impl Read`

impl Read for Empty { ... }

// A reader that yields `byte` repeatedly (generalizes today's ZeroReader)
fn repeat(byte: u8) -> Repeat;

impl Read for Repeat { ... }

// A writer that ignores the bytes written to it (/dev/null)
fn sink() -> Sink;

impl Write for Sink { ... }

// Copies all data from a `Read` to a `Write`, returning the amount of data
// copied.
pub fn copy<R, W>(r: &mut R, w: &mut W) -> Result<u64, Error>

Like write_all, the copy method will discard the amount of data already written on any error and also discard any partially read data on a write error. This method is intended to be a convenience and write should be used directly if this is not desirable.
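A usage sketch with the io::copy that stabilized in this form:

use std::io;

fn main() -> io::Result<()> {
    let mut reader: &[u8] = b"some bytes";
    let mut writer: Vec<u8> = Vec::new();
    let copied = io::copy(&mut reader, &mut writer)?;
    assert_eq!(copied, 10);             // bytes copied
    assert_eq!(writer, b"some bytes");
    Ok(())
}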

Seeking

The seeking infrastructure is largely the same as today’s, except that tell is removed and the seek signature is refactored with more precise types:

pub trait Seek {
    // returns the new position after seeking
    fn seek(&mut self, pos: SeekFrom) -> Result<u64, Error>;
}

pub enum SeekFrom {
    Start(u64),
    End(i64),
    Current(i64),
}

The old tell function can be regained via seek(SeekFrom::Current(0)).
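A sketch of recovering tell, using the Cursor type introduced later in this RFC:

use std::io::{Cursor, Seek, SeekFrom};

fn main() -> std::io::Result<()> {
    let mut c = Cursor::new(vec![1u8, 2, 3, 4]);
    c.seek(SeekFrom::Start(2))?;
    let pos = c.seek(SeekFrom::Current(0))?;   // the old `tell`
    assert_eq!(pos, 2);
    Ok(())
}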

Buffering

The current Buffer trait will be renamed to BufRead for clarity (and to open the door to BufWrite at some later point):

pub trait BufRead: Read {
    fn fill_buf(&mut self) -> Result<&[u8], Error>;
    fn consume(&mut self, amt: uint);

    fn read_until(&mut self, byte: u8, buf: &mut Vec<u8>) -> Result<(), Error> { ... }
    fn read_line(&mut self, buf: &mut String) -> Result<(), Error> { ... }
}

pub trait BufReadExt: BufRead {
    // Split is an iterator over Result<Vec<u8>, Error>
    fn split(&mut self, byte: u8) -> Split<Self> { ... }

    // Lines is an iterator over Result<String, Error>
    fn lines(&mut self) -> Lines<Self> { ... };

    // Chars is an iterator over Result<char, Error>
    fn chars(&mut self) -> Chars<Self> { ... }
}

The read_until and read_line methods are changed to take explicit, mutable buffers, for similar reasons to read_to_end. (Note that buffer reuse is particularly common for read_line). These functions include the delimiters in the strings they produce, both for easy cross-platform compatibility (in the case of read_line) and for ease in copying data without loss (in particular, distinguishing whether the last line included a final delimiter).

The split and lines methods provide iterator-based versions of read_until and read_line, and do not include the delimiter in their output. This matches conventions elsewhere (like split on strings) and is usually what you want when working with iterators.
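A usage sketch of the iterator-based lines, relying on the BufRead implementation for &[u8]:

use std::io::BufRead;

fn main() {
    let data: &[u8] = b"alpha\nbeta\ngamma\n";
    let lines: Vec<String> = data.lines().map(|l| l.unwrap()).collect();
    assert_eq!(lines, ["alpha", "beta", "gamma"]);  // delimiters stripped
}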

The BufReader, BufWriter and BufStream types stay essentially as they are today, except that for streams and writers the into_inner method yields the structure back in the case of a write error, and its behavior is clarified to writing out the buffered data without flushing the underlying writer:

// If writing fails, you get the unwritten data back
fn into_inner(self) -> Result<W, IntoInnerError<Self>>;

pub struct IntoInnerError<W>(W, Error);

impl<W> IntoInnerError<W> {
    pub fn error(&self) -> &Error { ... }
    pub fn into_inner(self) -> W { ... }
}
impl<W> FromError<IntoInnerError<W>> for Error { ... }

Cursor

Many applications want to view in-memory data as either an implementor of Read or Write. This is often useful when composing streams or creating test cases. This functionality primarily comes from the following implementations:

impl<'a> Read for &'a [u8] { ... }
impl<'a> Write for &'a mut [u8] { ... }
impl Write for Vec<u8> { ... }

While efficient, none of these implementations support seeking (via an implementation of the Seek trait). The implementations of Read and Write for these types are not quite as efficient when Seek needs to be used, so Seek-ability will be opted into with a new Cursor structure with the following API:

pub struct Cursor<T> {
    pos: u64,
    inner: T,
}
impl<T> Cursor<T> {
    pub fn new(inner: T) -> Cursor<T>;
    pub fn into_inner(self) -> T;
    pub fn get_ref(&self) -> &T;
}

// Error indicating that a seek to a negative offset was attempted.
pub struct NegativeOffset;

impl Seek for Cursor<Vec<u8>> { ... }
impl<'a> Seek for Cursor<&'a [u8]> { ... }
impl<'a> Seek for Cursor<&'a mut [u8]> { ... }

impl Read for Cursor<Vec<u8>> { ... }
impl<'a> Read for Cursor<&'a [u8]> { ... }
impl<'a> Read for Cursor<&'a mut [u8]> { ... }

impl BufRead for Cursor<Vec<u8>> { ... }
impl<'a> BufRead for Cursor<&'a [u8]> { ... }
impl<'a> BufRead for Cursor<&'a mut [u8]> { ... }

impl<'a> Write for Cursor<&'a mut [u8]> { ... }
impl Write for Cursor<Vec<u8>> { ... }

A sample implementation can be found in a gist. Using a single Cursor structure emphasizes that the only ability added is an implementation of Seek, while still allowing all possible I/O operations for various types of buffers.

It is not currently proposed to unify these implementations via a trait. For example a Cursor<Rc<[u8]>> is a reasonable instance to have, but it will not have an implementation listed in the standard library to start out. It is considered a backwards-compatible addition to unify these various impl blocks with a trait.
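A usage sketch showing a Cursor<Vec<u8>> standing in for the old MemWriter/MemReader pair:

use std::io::{Cursor, Read, Seek, SeekFrom, Write};

fn main() -> std::io::Result<()> {
    let mut c = Cursor::new(Vec::new());
    c.write_all(b"hello")?;        // write into the in-memory buffer
    c.seek(SeekFrom::Start(0))?;   // rewind: the ability Cursor adds
    let mut out = String::new();
    c.read_to_string(&mut out)?;
    assert_eq!(out, "hello");
    Ok(())
}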

The following types will be removed from the standard library and replaced as follows:

  • MemReader -> Cursor<Vec<u8>>
  • MemWriter -> Cursor<Vec<u8>>
  • BufReader -> Cursor<&[u8]> or Cursor<&mut [u8]>
  • BufWriter -> Cursor<&mut [u8]>

The std::io facade

The std::io module will largely be a facade over core::io, but it will add some functionality that can live only in std.

Errors

The IoError type will be renamed to std::io::Error, following our non-prefixing convention. It will remain largely as it is today, but its fields will be made private. It may eventually grow a field to track the underlying OS error code.

The std::io::IoErrorKind type will become std::io::ErrorKind, and ShortWrite will be dropped (it is no longer needed with the new Write semantics), which should decrease its footprint. The OtherIoError variant will become Other now that enums are namespaced. Other variants may be added over time, such as Interrupted, as more errors are classified from the system.

The EndOfFile variant will be removed in favor of returning Ok(0) from read on end of file (or write on an empty slice for example). This approach clarifies the meaning of the return value of read, matches Posix APIs, and makes it easier to use try! in the case that a “real” error should be bubbled out. (The main downside is that higher-level operations that might use Result<T, IoError> with some T != usize may need to wrap IoError in a further enum if they wish to forward unexpected EOF.)
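A sketch of a manual read loop under this convention, stopping on Ok(0) instead of matching an EndOfFile error variant:

use std::io::{self, Read};

fn count_bytes<R: Read>(mut r: R) -> io::Result<u64> {
    let mut buf = [0u8; 4096];
    let mut total = 0u64;
    loop {
        match r.read(&mut buf)? {
            0 => return Ok(total),   // end of file
            n => total += n as u64,
        }
    }
}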

Channel adapters

The ChanReader and ChanWriter adapters will be left as they are today, and they will remain #[unstable]. The channel adapters suffer from a few problems, some of which are inherent to the design:

  • Construction is somewhat unergonomic. First a mpsc channel pair must be created and then each half of the reader/writer needs to be created.
  • Each call to write involves moving memory onto the heap to be sent, which isn’t necessarily efficient.
  • The design of std::sync::mpsc allows for growing more channels in the future, but it’s unclear if we’ll want to continue to provide a reader/writer adapter for each channel we add to std::sync.

These types generally feel as if they’re from a different era of Rust (which they are!) and may take some time to fit into the current standard library. They can be reconsidered for stabilization after the dust settles from the I/O redesign as well as the recent std::sync redesign. At this time, however, this RFC recommends they remain unstable.

stdin, stdout, stderr

The current stdio module will be removed in favor of these constructors in the io module:

pub fn stdin() -> Stdin;
pub fn stdout() -> Stdout;
pub fn stderr() -> Stderr;
  • stdin - returns a handle to the globally shared standard input of the process, which is also buffered. Due to the globally shared nature of this handle, all operations on Stdin will acquire a lock internally to ensure access to the shared buffer is synchronized. This implementation detail is also exposed through a lock method, where the handle can be explicitly locked for a period of time so that relocking is not necessary.

    The Read trait will be implemented directly on the returned Stdin handle but the BufRead trait will not be (due to synchronization concerns). The locked version of Stdin (StdinLock) will provide an implementation of BufRead.

    The design will largely be the same as is today with the old_io module.

    impl Stdin {
        fn lock(&self) -> StdinLock;
        fn read_line(&mut self, into: &mut String) -> io::Result<()>;
        fn read_until(&mut self, byte: u8, into: &mut Vec<u8>) -> io::Result<()>;
    }
    impl Read for Stdin { ... }
    impl Read for StdinLock { ... }
    impl BufRead for StdinLock { ... }
  • stderr - returns an unbuffered handle to the standard error output stream for the process. Each call to write will roughly translate to a system call to output data when written to stderr. This handle is locked like stdin to ensure, for example, that calls to write_all are atomic with respect to one another. There will also be an RAII guard to lock the handle and use the result as an instance of Write.

    impl Stderr {
        fn lock(&self) -> StderrLock;
    }
    impl Write for Stderr { ... }
    impl Write for StderrLock { ... }
  • stdout - returns a globally buffered handle to the standard output of the current process. The amount of buffering can be decided at runtime to allow for different situations such as being attached to a TTY or being redirected to an output file. The Write trait will be implemented for this handle, and like stderr it will be possible to lock it and then use the result as an instance of Write as well (see the usage sketch after this list).

    impl Stdout {
        fn lock(&self) -> StdoutLock;
    }
    impl Write for Stdout { ... }
    impl Write for StdoutLock { ... }
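A usage sketch of the explicit locking described above, amortizing the lock over several writes:

use std::io::Write;

fn main() -> std::io::Result<()> {
    let stdout = std::io::stdout();
    let mut out = stdout.lock();   // hold the lock across the loop
    for i in 0..3 {
        writeln!(out, "line {}", i)?;
    }
    Ok(())
}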

Windows and stdio

On Windows, standard input and output handles can work with either arbitrary [u8] or [u16] depending on the state at runtime. For example a program attached to the console will work with arbitrary [u16], but a program attached to a pipe would work with arbitrary [u8].

To handle this difference, the following behavior will be enforced for the standard primitives listed above:

  • If attached to a pipe then no attempts at encoding or decoding will be done, the data will be ferried through as [u8].

  • If attached to a console, then stdin will attempt to interpret all input as UTF-16, re-encoding into UTF-8 and returning the UTF-8 data instead. This implies that data will be buffered internally to handle partial reads/writes. Invalid UTF-16 will simply be discarded, returning an io::Error explaining why.

  • If attached to a console, then stdout and stderr will attempt to interpret input as UTF-8, re-encoding to UTF-16. If the input is not valid UTF-8 then an error will be returned and no data will be written.

Raw stdio

Note: This section is intended to be a sketch of possible raw stdio support, but it is not planned to implement or stabilize this implementation at this time.

The above standard input/output handles all involve some form of locking or buffering (or both). This cost is not always wanted, and hence raw variants will be provided. Due to platform differences across unix/windows, the following structure will be supported:

mod os {
    mod unix {
        mod stdio {
            struct Stdio { .. }

            impl Stdio {
                fn stdout() -> Stdio;
                fn stderr() -> Stdio;
                fn stdin() -> Stdio;
            }

            impl Read for Stdio { ... }
            impl Write for Stdio { ... }
        }
    }

    mod windows {
        mod stdio {
            struct Stdio { ... }
            struct StdioConsole { ... }

            impl Stdio {
                fn stdout() -> io::Result<Stdio>;
                fn stderr() -> io::Result<Stdio>;
                fn stdin() -> io::Result<Stdio>;
            }
            // same constructors for StdioConsole

            impl Read for Stdio { ... }
            impl Write for Stdio { ... }

            impl StdioConsole {
                // returns slice of what was read
                fn read<'a>(&self, buf: &'a mut OsString) -> io::Result<&'a OsStr>;
                // returns remaining part of `buf` to be written
                fn write<'a>(&self, buf: &'a OsStr) -> io::Result<&'a OsStr>;
            }
        }
    }
}

There are some key differences from today’s API:

  • On unix, the API has not changed much except that the handles have been consolidated into one type which implements both Read and Write (although writing to stdin is likely to generate an error).
  • On windows, there are two sets of handles representing the difference between “console mode” and not (e.g. a pipe). When not a console, the normal I/O traits are implemented (delegating to ReadFile and WriteFile). The console mode operations work with OsStr, however, to show how they work with UCS-2 under the hood.

Printing functions

The current print, println, print_args, and println_args functions will all be “removed from the public interface” by prefixing them with __ and marking #[doc(hidden)]. These are all implementation details of the print! and println! macros and don’t need to be exposed in the public interface.

The set_stdout and set_stderr functions will be removed with no replacement for now. It’s unclear whether these functions should control a thread-local handle instead of a global handle, as well as whether they’re justified in the first place. It is a backwards-compatible extension to allow this sort of output to be redirected, and it can be considered if the need arises.

std::env

Most of what’s available in std::os today will move to std::env, and the signatures will be updated to follow this RFC’s Design principles as follows.

Arguments:

  • args: change to yield an iterator rather than a vector if possible; in any case, it should produce OsString values.

Environment variables:

  • vars (renamed from env): yields a vector of (OsString, OsString) pairs.

  • var (renamed from getenv): take a value bounded by AsOsStr, allowing Rust strings and slices to be ergonomically passed in. Yields an Option<OsString>.

  • var_string: take a value bounded by AsOsStr, returning Result<String, VarError> where VarError represents a non-unicode OsString or a “not present” value.

  • set_var (renamed from setenv): takes two AsOsStr-bounded values.

  • remove_var (renamed from unsetenv): takes an AsOsStr-bounded value.

  • join_paths: takes an IntoIterator<T> where T: AsOsStr, yields a Result<OsString, JoinPathsError>.

  • split_paths: takes an AsOsStr-bounded value, yields an Iterator<Path>.
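
As a rough sketch of the ergonomics being aimed for (using the proposed signatures above, which differ in places from what eventually stabilized):

use std::env;

fn main() {
    // Under the proposed API, `var` yields an Option<OsString> and
    // makes no unicode assumptions about the value.
    if let Some(path) = env::var("PATH") {
        // `split_paths` accepts any AsOsStr-bounded value.
        for dir in env::split_paths(&path) {
            println!("{:?}", dir);
        }
    }
    // `set_var` takes two AsOsStr-bounded values, so plain string
    // literals work directly.
    env::set_var("DEMO_FLAG", "1");
}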

Working directory:

  • current_dir (renamed from getcwd): yields a PathBuf.
  • set_current_dir (renamed from change_dir): takes an AsPath value.

Important locations:

  • home_dir (renamed from homedir): returns the home directory as a PathBuf.
  • temp_dir (renamed from tmpdir): returns a temporary directory as a PathBuf.
  • current_exe (renamed from self_exe_name): returns the full path to the current binary as a PathBuf in an io::Result instead of an Option.

Exit status:

  • get_exit_status and set_exit_status stay as they are, but with updated docs that reflect that these only affect the return value of std::rt::start. These will remain #[unstable] for now and a future RFC will determine their stability.

Architecture information:

  • num_cpus, page_size: stay as they are, but remain #[unstable]. A future RFC will determine their stability and semantics.

Constants:

  • Stabilize ARCH, DLL_PREFIX, DLL_EXTENSION, DLL_SUFFIX, EXE_EXTENSION, EXE_SUFFIX, FAMILY as they are.
  • Rename SYSNAME to OS.
  • Remove TMPBUF_SZ.

This brings the constants into line with our naming conventions elsewhere.

Items to move to os::platform

  • pipe will move to os::unix. It is currently primarily used for hooking to the IO of a child process, which will now be done behind a trait object abstraction.

Removed items

  • errno, error_string and last_os_error provide redundant, platform-specific functionality and will be removed for now. They may reappear later in os::unix and os::windows in a modified form.
  • dll_filename: deprecated in favor of working directly with the constants.
  • _NSGetArgc, _NSGetArgv: these should never have been public.
  • self_exe_path: deprecated in favor of current_exe plus path operations.
  • make_absolute: deprecated in favor of explicitly joining with the working directory.
  • all _as_bytes variants: deprecated in favor of yielding OsString values.

std::fs

The fs module will provide most of the functionality it does today, but with a stronger cross-platform orientation.

Note that all path-consuming functions will now take an AsPath-bounded parameter for ergonomic reasons (this will allow passing in Rust strings and literals directly, for example).

Free functions

Files:

  • copy. Take AsPath bound.

  • rename. Take AsPath bound.

  • remove_file (renamed from unlink). Take AsPath bound.

  • metadata (renamed from stat). Take AsPath bound. Yield a new struct, Metadata, with no public fields, but len, is_dir, is_file, perms, accessed and modified accessors. The various os::platform modules will offer extension methods on this structure.

  • set_perms (renamed from chmod). Take AsPath bound, and a Perms value. The Perms type will be revamped as a struct with private implementation; see below.

Directories:

  • create_dir (renamed from mkdir). Take AsPath bound.
  • create_dir_all (renamed from mkdir_recursive). Take AsPath bound.
  • read_dir (renamed from readdir). Take AsPath bound. Yield a newtyped iterator, which yields a new DirEntry type with an accessor for its Path, but which will eventually provide other information as well (possibly via platform-specific extensions).
  • remove_dir (renamed from rmdir). Take AsPath bound.
  • remove_dir_all (renamed from rmdir_recursive). Take AsPath bound.
  • walk_dir. Take AsPath bound. Yield an iterator over IoResult<DirEntry>.

Links:

  • hard_link (renamed from link). Take AsPath bound.
  • soft_link (renamed from symlink). Take AsPath bound.
  • read_link (renamed from readlink). Take AsPath bound.
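
A short sketch of how the AsPath bound plays out in practice; string literals are accepted wherever a path is expected:

use std::fs;
use std::io;

fn demo() -> io::Result<()> {
    // No Path::new wrapping is required thanks to the AsPath bound.
    fs::create_dir_all("scratch/sub")?;
    let meta = fs::metadata("scratch/sub")?;
    assert!(meta.is_dir() && !meta.is_file());
    fs::remove_dir_all("scratch")?;
    Ok(())
}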

Files

The File type will largely stay as it is today, except that it will use the AsPath bound everywhere.

The stat method will be renamed to metadata, yield a Metadata structure (as described above), and take &self.

The fsync method will be renamed to sync_all, and datasync will be renamed to sync_data. (Although the latter is not available on Windows, it can be considered an optimization of flush; on Windows it will behave identically to sync_all, just as it does on some Unix filesystems.)

The path method will remain #[unstable], as we do not yet want to commit to its API.

The open_mode function will be removed in favor of an OpenOptions struct, which will encompass today’s FileMode and FileAccess and support a builder-style API.

File kinds

The FileType type will be removed. As mentioned above, is_file and is_dir will be provided directly on Metadata; the other types need to be audited for compatibility across platforms. Platform-specific kinds will be relegated to extension traits in std::os::platform.

It’s possible that an extensible Kind will be added in the future.

File permissions

The permission models on Unix and Windows vary greatly – even between different filesystems within the same OS. Rather than offer an API that has no meaning on some platforms, we will initially provide a very limited Perms structure in std::fs, and then rich extension traits in std::os::unix and std::os::windows. Over time, if clear cross-platform patterns emerge for richer permissions, we can grow the Perms structure.

On the Unix side, the constructors and accessors for Perms will resemble the flags we have today; details are left to the implementation.

On the Windows side, initially there will be no extensions, as Windows has a very complex permissions model that will take some time to build out.

For std::fs itself, Perms will provide constructors and accessors for “world readable” – and that is all. At the moment, that is all that is known to be compatible across the platforms that Rust supports.

PathExt

This trait (renamed from PathExtensions) will essentially stay as it is, following the same changes made to the fs free functions.

Items to move to os::platform

  • lstat will move to os::unix and remain #[unstable] for now since it is not yet implemented for Windows.

  • chown will move to os::unix (it currently does nothing on Windows), and eventually os::windows will grow support for Windows’s permission model. If at some point a reasonable intersection is found, we will re-introduce a cross-platform function in std::fs.

  • In general, offer all of the stat fields as an extension trait on Metadata (e.g. os::unix::MetadataExt).

std::net

The contents of std::io::net submodules tcp, udp, ip and addrinfo will be retained but moved into a single std::net module; the other modules are being moved or removed and are described elsewhere.

SocketAddr

This structure will represent either a sockaddr_in or a sockaddr_in6, which is commonly just the pairing of an IP address and a port.

enum SocketAddr {
    V4(SocketAddrV4),
    V6(SocketAddrV6),
}

impl SocketAddrV4 {
    fn new(addr: Ipv4Addr, port: u16) -> SocketAddrV4;
    fn ip(&self) -> &Ipv4Addr;
    fn port(&self) -> u16;
}

impl SocketAddrV6 {
    fn new(addr: Ipv6Addr, port: u16, flowinfo: u32, scope_id: u32) -> SocketAddrV6;
    fn ip(&self) -> &Ipv6Addr;
    fn port(&self) -> u16;
    fn flowinfo(&self) -> u32;
    fn scope_id(&self) -> u32;
}

Ipv4Addr

Represents a version 4 IP address. It has the following interface:

impl Ipv4Addr {
    fn new(a: u8, b: u8, c: u8, d: u8) -> Ipv4Addr;
    fn any() -> Ipv4Addr;
    fn octets(&self) -> [u8; 4];
    fn to_ipv6_compatible(&self) -> Ipv6Addr;
    fn to_ipv6_mapped(&self) -> Ipv6Addr;
}

Ipv6Addr

Represents a version 6 IP address. It has the following interface:

impl Ipv6Addr {
    fn new(a: u16, b: u16, c: u16, d: u16, e: u16, f: u16, g: u16, h: u16) -> Ipv6Addr;
    fn any() -> Ipv6Addr;
    fn segments(&self) -> [u16; 8];
    fn to_ipv4(&self) -> Option<Ipv4Addr>;
}
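
For example, under this interface (a sketch; the same calls exist in today's std::net):

use std::net::{Ipv4Addr, SocketAddrV4};

fn demo() {
    let ip = Ipv4Addr::new(127, 0, 0, 1);
    assert_eq!(ip.octets(), [127, 0, 0, 1]);
    // A v4-mapped v6 address round-trips back to the original.
    assert_eq!(ip.to_ipv6_mapped().to_ipv4(), Some(ip));
    let addr = SocketAddrV4::new(ip, 8080);
    assert_eq!(addr.port(), 8080);
}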

TCP

The current TcpStream struct will be pared back from where it is today to the following interface:

// TcpStream, which contains both a reader and a writer

impl TcpStream {
    fn connect<A: ToSocketAddrs>(addr: &A) -> io::Result<TcpStream>;
    fn peer_addr(&self) -> io::Result<SocketAddr>;
    fn local_addr(&self) -> io::Result<SocketAddr>;
    fn shutdown(&self, how: Shutdown) -> io::Result<()>;
    fn try_clone(&self) -> io::Result<TcpStream>;
}

impl Read for TcpStream { ... }
impl Write for TcpStream { ... }
impl<'a> Read for &'a TcpStream { ... }
impl<'a> Write for &'a TcpStream { ... }
#[cfg(unix)]    impl AsRawFd for TcpStream { ... }
#[cfg(windows)] impl AsRawSocket for TcpStream { ... }
  • clone has been replaced with a try_clone function. The implementation of try_clone will map to dup on Unix platforms and WSADuplicateSocket on Windows platforms. The TcpStream itself will no longer be reference counted under the hood.
  • close_{read,write} are both removed in favor of binding the shutdown function directly on sockets. This will map to the shutdown function on both Unix and Windows.
  • set_timeout has been removed for now (as well as other timeout-related functions). It is likely that this will come back soon as a binding to setsockopt with the SO_RCVTIMEO and SO_SNDTIMEO options. This RFC does not propose adding them just yet, however.
  • Implementations of Read and Write are provided for &TcpStream. These implementations are not necessarily ergonomic to call (they require taking an explicit reference), but they express the ability to concurrently read from and write to a TcpStream.

Various other options such as nodelay and keepalive will be left #[unstable] for now. The TcpStream structure will also adhere to both Send and Sync.
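
A brief sketch of what the &TcpStream impls enable (the helper function here is hypothetical):

use std::io::{Read, Write};
use std::net::TcpStream;

// Because Read and Write are implemented for &TcpStream, two plain
// references suffice to read and write through one connection, e.g.
// from different threads via try_clone or scoped borrows.
fn ping(stream: &TcpStream) -> std::io::Result<()> {
    let (mut reader, mut writer) = (stream, stream);
    writer.write_all(b"ping")?;
    let mut buf = [0u8; 4];
    reader.read_exact(&mut buf)?;
    Ok(())
}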

The TcpAcceptor struct will be removed and all functionality will be folded into the TcpListener structure. Specifically, this will be the resulting API:

impl TcpListener {
    fn bind<A: ToSocketAddrs>(addr: &A) -> io::Result<TcpListener>;
    fn local_addr(&self) -> io::Result<SocketAddr>;
    fn try_clone(&self) -> io::Result<TcpListener>;
    fn accept(&self) -> io::Result<(TcpStream, SocketAddr)>;
    fn incoming(&self) -> Incoming;
}

impl<'a> Iterator for Incoming<'a> {
    type Item = io::Result<TcpStream>;
    ...
}
#[cfg(unix)]    impl AsRawFd for TcpListener { ... }
#[cfg(windows)] impl AsRawSocket for TcpListener { ... }

Some major changes from today’s API include:

  • The static distinction between TcpAcceptor and TcpListener has been removed (more on this in the socket section).
  • The clone functionality has been removed in favor of try_clone (same caveats as TcpStream).
  • The close_accept functionality is removed entirely. This is not currently implemented via shutdown (not supported well across platforms) and is instead implemented via select. This functionality can return at a later date with a more robust interface.
  • The set_timeout functionality has also been removed in favor of returning at a later date in a more robust fashion with select.
  • The accept function no longer takes &mut self, and it now returns the peer’s SocketAddr alongside the TcpStream. The change in mutability is done to express that multiple accept calls can happen concurrently.
  • For convenience the iterator does not yield the SocketAddr from accept.

The TcpListener type will also adhere to Send and Sync.
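
A sketch of the resulting accept loop (the connection handler is left as a stub):

use std::net::TcpListener;
use std::thread;

fn serve() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:7878")?;
    // `incoming` yields io::Result<TcpStream>, dropping the peer
    // SocketAddr for convenience; call `accept` to get it explicitly.
    for stream in listener.incoming() {
        let stream = stream?;
        thread::spawn(move || {
            // handle the connection here
            drop(stream);
        });
    }
    Ok(())
}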

UDP

The UDP infrastructure will receive a face-lift similar to the TCP infrastructure’s:

impl UdpSocket {
    fn bind<A: ToSocketAddrs>(addr: &A) -> io::Result<UdpSocket>;
    fn recv_from(&self, buf: &mut [u8]) -> io::Result<(usize, SocketAddr)>;
    fn send_to<A: ToSocketAddrs>(&self, buf: &[u8], addr: &A) -> io::Result<usize>;
    fn local_addr(&self) -> io::Result<SocketAddr>;
    fn try_clone(&self) -> io::Result<UdpSocket>;
}

#[cfg(unix)]    impl AsRawFd for UdpSocket { ... }
#[cfg(windows)] impl AsRawSocket for UdpSocket { ... }

Some important points of note are:

  • The send_to and recv_from functions take &self instead of &mut self to indicate that they may be called safely in concurrent contexts.
  • All configuration options such as multicast and ttl are left as #[unstable] for now.
  • All timeout support is removed. This may come back in the form of setsockopt (as with TCP streams) or with a more general implementation of select.
  • clone functionality has been replaced with try_clone.

The UdpSocket type will adhere to both Send and Sync.
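
A minimal sketch (the socket sends to itself so the example is self-contained):

use std::net::UdpSocket;

fn demo() -> std::io::Result<()> {
    let socket = UdpSocket::bind("127.0.0.1:34254")?;
    // send_to and recv_from take &self, so concurrent use through
    // shared references or try_clone'd handles is expressible.
    socket.send_to(b"ping", "127.0.0.1:34254")?;
    let mut buf = [0u8; 16];
    let (len, src) = socket.recv_from(&mut buf)?;
    println!("received {} bytes from {}", len, src);
    Ok(())
}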

Sockets

The current constructors for TcpStream, TcpListener, and UdpSocket are largely “convenience constructors”, as they do not expose the underlying detail that a socket can be configured before it is bound, connected, or listened on. One of the more frequent configuration options is SO_REUSEADDR, which is currently set by default for TcpListener.

This RFC leaves it as an open question how best to implement this pre-configuration. The constructors today will likely remain no matter what as convenience constructors, and a new structure would implement consuming methods to transform itself into each of the various TcpStream, TcpListener, and UdpSocket types.

This RFC does, however, recommend not adding multiple constructors to the various types to set various configuration options. This pattern is best expressed via a flexible socket type to be added at a future date.

Addresses

For the current addrinfo module:

  • The get_host_addresses function should be renamed to lookup_host.
  • All other contents should be removed.

For the current ip module:

  • The ToSocketAddr trait should become ToSocketAddrs.
  • The default to_socket_addr_all method should be removed.

The following implementations of ToSocketAddrs will be available:

impl ToSocketAddrs for SocketAddr { ... }
impl ToSocketAddrs for SocketAddrV4 { ... }
impl ToSocketAddrs for SocketAddrV6 { ... }
impl ToSocketAddrs for (Ipv4Addr, u16) { ... }
impl ToSocketAddrs for (Ipv6Addr, u16) { ... }
impl ToSocketAddrs for (&str, u16) { ... }
impl ToSocketAddrs for str { ... }
impl<T: ToSocketAddrs> ToSocketAddrs for &T { ... }
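
In practice this means a single generic constructor accepts many spellings of an address (a sketch; note that std ultimately takes the pattern by value rather than by reference):

use std::net::{Ipv4Addr, SocketAddr, TcpStream};

fn demo() {
    // All of these go through the same ToSocketAddrs bound:
    let _ = TcpStream::connect("localhost:80");
    let _ = TcpStream::connect(("127.0.0.1", 80));
    let _ = TcpStream::connect((Ipv4Addr::new(127, 0, 0, 1), 80));
    let addr: SocketAddr = "127.0.0.1:80".parse().unwrap();
    let _ = TcpStream::connect(addr);
}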

std::process

Currently std::io::process is used only for spawning new processes. The re-envisioned std::process will ultimately support inspecting currently-running processes, although this RFC does not propose any immediate support for doing so – it merely future-proofs the module.

Command

The Command type is a builder API for processes, and is largely in good shape, modulo a few tweaks:

  • Replace ToCStr bounds with AsOsStr.
  • Replace env_set_all with env_clear.
  • Rename cwd to current_dir, take AsPath.
  • Rename spawn to run.
  • Move uid and gid to an extension trait in os::unix.
  • Make detached take a bool (rather than always setting the command to detached mode).

The stdin, stdout, stderr methods will undergo a more significant change. By default, the corresponding options will be considered “unset”, the interpretation of which depends on how the process is launched:

  • For run or status, these will inherit from the current process by default.
  • For output, these will capture to new readers/writers by default.

The StdioContainer type will be renamed to Stdio, and will not be exposed directly as an enum (to enable growth and change over time). It will provide a Capture constructor for capturing input or output, an Inherit constructor (which just means to use the current IO object – it does not take an argument), and a Null constructor. The equivalent of today’s InheritFd will be added at a later point.
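
For a feel of the builder style, a sketch using the names that ended up in today's std (which kept spawn/output rather than the run rename proposed above):

use std::process::{Command, Stdio};

fn demo() -> std::io::Result<()> {
    // `output` captures stdout/stderr by default; unset options
    // would otherwise inherit from the current process.
    let out = Command::new("ls")
        .arg("-l")
        .current_dir("/tmp")
        .stdin(Stdio::null())
        .output()?;
    println!("exited with {:?}", out.status);
    Ok(())
}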

Child

We propose renaming Process to Child so that we can add a more general notion of non-child Process later on (every Child will be able to give you a Process).

  • stdin, stdout and stderr will be retained as public fields, but their types will change to newtyped readers and writers to hide the internal pipe infrastructure.
  • The kill method is dropped, and id and signal will move to os::platform extension traits.
  • signal_exit, signal_kill, wait, and forget will all stay as they are.
  • set_timeout will be changed to use the with_deadline infrastructure.

There are also a few other related changes to the module:

  • Rename ProcessOutput to Output.
  • Rename ProcessExit to ExitStatus, and hide its representation. Remove matches_exit_status, and add a status method yielding an Option<i32>.
  • Remove MustDieSignal, PleaseExitSignal.
  • Remove EnvMap (which should never have been exposed).

std::os

Initially, this module will be empty except for the platform-specific unix and windows modules. It is expected to grow additional, more specific platform submodules (like linux, macos) over time.

Odds and ends

To be expanded in a follow-up PR.

The io prelude

The prelude submodule will contain most of the traits, types, and modules discussed in this RFC; it is meant to provide maximal convenience when working with IO of any kind. The exact contents of the module are left as an open question.

Drawbacks

This RFC is largely about cleanup, normalization, and stabilization of our IO libraries – work that needs to be done, but that also represents nontrivial churn.

However, the actual implementation work involved is estimated to be reasonably contained, since all of the functionality is already in place in some form (including os_str, due to @SimonSapin’s WTF-8 implementation).

Alternatives

The main alternative design would be to continue staying with the Posix tradition in terms of naming and functionality (for which there is precedent in some other languages). However, Rust is already well-known for its strong cross-platform compatibility in std, and making the library more Windows-friendly will only increase its appeal.

More radically different designs (in terms of different design principles or visions) are outside the scope of this RFC.

Unresolved questions

To be expanded in follow-up PRs.

Wide string representation

(Text from @SimonSapin)

Rather than WTF-8, OsStr and OsString on Windows could use potentially-ill-formed UTF-16 (a.k.a. “wide” strings), with a different cost trade off.

Upside:

  • No conversion between OsStr / OsString and OS calls.

Downsides:

  • More expensive conversions between OsStr / OsString and str / String.
  • These conversions have inconsistent performance characteristics between platforms. (Need to allocate on Windows, but not on Unix.)
  • Some of them return Cow, which has some ergonomic hit.

The API (only parts that differ) could look like:

pub mod os_str {
    #[cfg(windows)]
    mod imp {
        type Buf = Vec<u16>;
        type Slice = [u16];
        ...
    }

    impl OsStr {
        pub fn from_str(&str) -> Cow<OsString, OsStr>;
        pub fn to_string(&self) -> Option<CowString>;
        pub fn to_string_lossy(&self) -> CowString;
    }

    #[cfg(windows)]
    pub mod windows {
        trait OsStringExt {
            fn from_wide_slice(&[u16]) -> Self;
            fn from_wide_vec(Vec<u16>) -> Self;
            fn into_wide_vec(self) -> Vec<u16>;
        }

        trait OsStrExt {
            fn from_wide_slice(&[u16]) -> Self;
            fn as_wide_slice(&self) -> &[u16];
        }
    }
}
  • Start Date: 2014-12-13
  • RFC PR: 520
  • Rust Issue: 19999

Summary

Under this RFC, the syntax to specify the type of a fixed-length array containing N elements of type T would be changed to [T; N]. Similarly, the syntax to construct an array containing N duplicated elements of value x would be changed to [x; N].

Motivation

RFC 439 (cmp/ops reform) has resulted in an ambiguity that must be resolved. Previously, an expression with the form [x, ..N] would unambiguously refer to an array containing N identical elements, since there would be no other meaning that could be assigned to ..N. However, under RFC 439, ..N should now desugar to an object of type RangeTo<T>, with T being the type of N.

In order to resolve this ambiguity, there must be a change to either the syntax for creating an array of repeated values, or the new range syntax. This RFC proposes the former, in order to preserve existing functionality while avoiding modifications that would make the range syntax less intuitive.

Detailed design

The syntax [T, ..N] for specifying array types will be replaced by the new syntax [T; N].

In the expression [x, ..N], the ..N will refer to an expression of type RangeTo<T> (where T is the type of N). As with any other array of two elements, x will have to be of the same type, and the array expression will be of type [RangeTo<T>; 2].

The expression [x; N] will be equivalent to the old meaning of the syntax [x, ..N]. Specifically, it will create an array of length N, each element of which has the value x.

The effect will be to convert uses of arrays such as this:

let a: [uint, ..2] = [0u, ..2];

to this:

let a: [uint; 2] = [0u; 2];

Match patterns

In match patterns, .. is always interpreted as a wildcard for constructor arguments (or for slice patterns under the advanced_slice_patterns feature gate). This RFC does not change that. In a match pattern, .. will always be interpreted as a wildcard, and never as sugar for a range constructor.

Suggested implementation

While not required by this RFC, one suggested transition plan is as follows:

  • Implement the new syntax for [T; N]/[x; N] proposed above.

  • Issue deprecation warnings for code that uses [T, ..N]/[x, ..N], allowing easier identification of code that needs to be transitioned.

  • When RFC 439 range literals are implemented, remove the deprecated syntax and thus complete the implementation of this RFC.

Drawbacks

Backwards incompatibility

  • Changing the method for specifying an array size will impact a large amount of existing code. Code conversion can probably be readily automated, but will still require some labor.

Implementation time

This proposal is submitted very close to the anticipated release of Rust 1.0. Changing the array repeat syntax is likely to require more work than changing the range syntax specified in RFC 439, because the latter has not yet been implemented.

However, this decision cannot be reasonably postponed. Many users have expressed a preference for implementing the RFC 439 slicing syntax as currently specified rather than preserving the existing array repeat syntax. This cannot be resolved in a backwards-compatible manner if the array repeat syntax is kept.

Alternatives

Inaction is not an alternative due to the ambiguity introduced by RFC 439. Some resolution must be chosen in order for the affected modules in std to be stabilized.

Retain the type syntax only

In theory, it seems that the type syntax [T, ..N] could be retained, while getting rid of the expression syntax [x, ..N]. The problem with this is that, if the expression syntax were removed, there would currently be no way to define a macro to replace it.

Retaining the current type syntax, but changing the expression syntax, would make the language somewhat more complex and inconsistent overall. There seem to be no advocates of this alternative so far.

Different array repeat syntax

The comments in pull request #498 mentioned many candidates for new syntax other than the [x; N] form in this RFC. The comments on the pull request of this RFC mentioned many more.

  • Instead of using [x; N], use [x for N].

    • This use of for would not be exactly analogous to existing for loops, because those accept an iterator rather than an integer. To a new user, the expression [x for N] would resemble a list comprehension (e.g. Python’s syntax is [expr for i in iter]), but in fact it does something much simpler.
    • It may be better to avoid uses of for that could complicate future language features, e.g. returning a value other than () from loops, or some other syntactic sugar related to iterators. However, the risk of actual ambiguity is not that high.
  • Introduce a different symbol to specify array sizes, e.g. [T # N], [T @ N], and so forth.

  • Introduce a keyword rather than a symbol. There are many other options, e.g. [x by N]. The original version of this proposal was for [N of x], but this was deemed to complicate parsing too much, since the parser would not know whether to expect a type or an expression after the opening bracket.

  • Any of several more radical changes.

Change the range syntax

The main problem here is that there are no proposed candidates that seem as clear and ergonomic as i..j. The most common alternative for slicing in other languages is i:j, but in Rust this simply causes an ambiguity with a different feature, namely type ascription.

Limit range syntax to the interior of an index (use i..j for slicing only)

This resolves the issue since indices can be distinguished from arrays. However, it removes some of the benefits of RFC 439. For instance, it removes the possibility of using for i in 1..10 to loop.

Remove RangeTo from RFC 439

The proposal in pull request #498 is to remove the sugar for RangeTo (i.e., ..j) while retaining other features of RFC 439. This is the simplest resolution, but removes some convenience from the language. It is also counterintuitive, because RangeFrom (i.e. i..) is retained, and because .. still has several different meanings in the language (ranges, repetition, and pattern wildcards).

Unresolved questions

Match patterns

There will still be two semantically distinct uses of .., for the RFC 439 range syntax and for wildcards in patterns. This could be considered harmful enough to introduce further changes to separate the two. Or this could be considered innocuous enough to introduce some additional range-related meaning for .. in certain patterns.

It is possible that the new syntax [x; N] could itself be used within patterns.

This RFC does not attempt to address any of these issues, because the current pattern syntax does not allow use of the repeated array syntax, and does not contain an ambiguity.

Behavior of for in array expressions

It may be useful to allow for to take on a new meaning in array expressions. This RFC keeps this possibility open, but does not otherwise propose any concrete changes to move towards or away from this feature.

  • Start Date: 2014-12-13
  • RFC PR: 522
  • Rust Issue: 20000

Summary

Allow the Self type to be used in impls.

Motivation

Allows macros which operate on methods to do more, more easily, without having to rebuild the concrete self type. Macros could use the literal self type like programmers do, but that requires extra machinery in the macro expansion code and extra work by the macro author.

Allows easier copy and pasting of method signatures from trait declarations to implementations.

Is more succinct where the self type is complex.

Motivation for doing this now

I’m hitting the macro problem in a side project. I wrote and hope to land the compiler code to make it work, but it is ugly and this is a much nicer solution. It is also really easy to implement, and since it is just a desugaring, it should not add any additional complexity to the compiler. Obviously, this should not block 1.0.

Detailed design

When used inside an impl, Self is desugared during syntactic expansion to the concrete type being implemented. Self can be used anywhere the desugared type could be used.
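
For example (a sketch; Matrix4x4 is an arbitrary stand-in type):

struct Matrix4x4 {
    cells: [[f64; 4]; 4],
}

impl Matrix4x4 {
    // `Self` desugars to the concrete type `Matrix4x4`, which is
    // convenient when the type name is long or macro-generated.
    fn identity() -> Self {
        let mut m = Matrix4x4 { cells: [[0.0; 4]; 4] };
        for i in 0..4 {
            m.cells[i][i] = 1.0;
        }
        m
    }
}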

Drawbacks

There are some advantages to being explicit about the self type where it is possible - clarity and fewer type aliases.

Alternatives

We could just force authors to use the concrete type as we do currently. This would require macro expansion code to make available the concrete type (or the whole impl AST) to macros working on methods. The macro author would then extract/construct the self type and use it instead of Self.

Unresolved questions

None.

Summary

Statically enforce that the std::fmt module can only create valid UTF-8 data by removing the arbitrary write method in favor of a write_str method.

Motivation

Today it is conventionally true that the output from macros like format!, as well as from implementations of Show, only creates valid UTF-8 data. This is not statically enforced, however. As a consequence the .to_string() method must perform a str::is_utf8 check before returning a String.

This str::is_utf8 check is currently one of the most costly parts of the formatting subsystem while normally just being a redundant check.

Additionally, it is possible to statically enforce the convention that Show only deals with valid unicode, and as such the possibility of doing so should be explored.

Detailed design

The std::fmt::FormatWriter trait will be redefined as:

pub trait Writer {
    fn write_str(&mut self, data: &str) -> Result;
    fn write_char(&mut self, ch: char) -> Result {
        // default method calling write_str
    }
    fn write_fmt(&mut self, f: &Arguments) -> Result {
        // default method calling fmt::write
    }
}

There are a few major differences with today’s trait:

  • The name has changed to Writer in accordance with RFC 356.
  • The write method has moved from taking &[u8] to taking &str instead.
  • A write_char method has been added.

The corresponding methods on the Formatter structure will also be altered to respect these signatures.

The key idea behind this API is that the Writer trait only operates on unicode data. The write_str method is a static enforcement of UTF-8-ness, and using write_char follows suit as a char can only be a valid unicode codepoint.

With this trait definition, the implementation of Writer for Vec<u8> will be removed (note this is not the io::Writer implementation) in favor of an implementation directly on String. The .to_string() method will change accordingly (as well as format!) to write directly into a String, bypassing all UTF-8 validity checks afterwards.
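
As a sketch of what implementing the trait looks like (using the write_str name proposed above, which is what eventually shipped as std::fmt::Write):

use std::fmt::{self, Write};

// Counts characters instead of storing them; since only &str can be
// written, UTF-8 validity is guaranteed statically.
struct Counter {
    chars: usize,
}

impl Write for Counter {
    fn write_str(&mut self, data: &str) -> fmt::Result {
        self.chars += data.chars().count();
        Ok(())
    }
}

fn demo() {
    let mut c = Counter { chars: 0 };
    write!(c, "{} + {} = {}", 1, 2, 3).unwrap();
    assert_eq!(c.chars, 9);
}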

This change has been implemented in a branch of mine, and as expected the benchmark numbers have improved for larger texts.

Note that a key point of the changes implemented is that a call to write! into an arbitrary io::Writer is still valid as it’s still just a sink for bytes. The changes outlined in this RFC will only affect Show and other formatting trait implementations. As can be seen from the sample implementation, the fallout is quite minimal with respect to the rest of the standard library.

Drawbacks

A version of this RFC has been previously postponed, but this variant is much less ambitious in terms of generic TextWriter support. At this time the design of fmt::Writer is purposely conservative.

There are currently some use cases today where a &mut Formatter is interpreted as a &mut Writer, e.g. for the Show impl of Json. This is undoubtedly used outside this repository, and it would break all of these users relying on the binary functionality of the old FormatWriter.

Alternatives

Another possible solution, specifically to the performance problem, is to have an unsafe flag on a Formatter indicating that only valid UTF-8 data was written; if all sub-parts of formatting set this flag then the data can be assumed to be UTF-8. In general, relying on unsafe APIs is less “pure” than relying on the type system instead.

The fmt::Writer trait could also be located at io::TextWriter instead, to emphasize its possible future connection with I/O, although there are no concrete plans today to develop these connections.

Unresolved questions

  • It is unclear to what degree a fmt::Writer needs to interact with io::Writer and the various adaptors/buffers. For example, one would have to implement their own BufferedWriter for a fmt::Writer.

Summary

Stabilize all string functions working with search patterns around a new generic API that provides a unified way to define and use those patterns.

Motivation

Right now, string slices define a couple of methods for string manipulation that work with user-provided values acting as search patterns. For example, split() takes a type implementing CharEq to split the slice at all codepoints that match that predicate.

Among these methods, the notion of what exactly is being used as a search pattern varies inconsistently: many work with the generic CharEq, which only looks at a single codepoint at a time, while some work with char or &str directly, sometimes duplicating a method to provide operations for both.

This presents a couple of issues:

  • The API is inconsistent.
  • The API duplicates similar operations on different types. (contains vs contains_char)
  • The API does not provide all operations for all types. (For example, no rsplit for &str patterns)
  • The API is not extensible, eg to allow splitting at regex matches.
  • The API offers no way to explicitly decide between different search algorithms for the same pattern, for example to use Boyer-Moore string searching.

At the moment, the full set of relevant string methods roughly looks like this:

pub trait StrExt for ?Sized {
    fn contains(&self, needle: &str) -> bool;
    fn contains_char(&self, needle: char) -> bool;

    fn split<Sep: CharEq>(&self, sep: Sep) -> CharSplits<Sep>;
    fn splitn<Sep: CharEq>(&self, sep: Sep, count: uint) -> CharSplitsN<Sep>;
    fn rsplitn<Sep: CharEq>(&self, sep: Sep, count: uint) -> CharSplitsN<Sep>;
    fn split_terminator<Sep: CharEq>(&self, sep: Sep) -> CharSplits<Sep>;
    fn split_str<'a>(&'a self, &'a str) -> StrSplits<'a>;

    fn match_indices<'a>(&'a self, sep: &'a str) -> MatchIndices<'a>;

    fn starts_with(&self, needle: &str) -> bool;
    fn ends_with(&self, needle: &str) -> bool;

    fn trim_chars<'a, C: CharEq>(&'a self, to_trim: C) -> &'a str;
    fn trim_left_chars<'a, C: CharEq>(&'a self, to_trim: C) -> &'a str;
    fn trim_right_chars<'a, C: CharEq>(&'a self, to_trim: C) -> &'a str;

    fn find<C: CharEq>(&self, search: C) -> Option<uint>;
    fn rfind<C: CharEq>(&self, search: C) -> Option<uint>;
    fn find_str(&self, &str) -> Option<uint>;

    // ...
}

This RFC proposes to fix those issues by providing a unified Pattern trait that all “string pattern” types would implement, and that would be used by the string API exclusively.

This fixes the duplication, consistency, and extensibility problems, and also allows defining newtype wrappers for the same pattern types that use different or specialized search implementations.

As an additional design goal, the new abstractions should also not pose a problem for optimization - like for iterators, a concrete instance should produce similar machine code to a hardcoded optimized loop written in C.

Detailed design

New traits

First, new traits will be added to the str module in the std library:

trait Pattern<'a> {
    type Searcher: Searcher<'a>;
    fn into_searcher(self, haystack: &'a str) -> Self::Searcher;

    fn is_contained_in(self, haystack: &'a str) -> bool { /* default*/ }
    fn match_starts_at(self, haystack: &'a str, idx: usize) -> bool { /* default*/ }
    fn match_ends_at(self, haystack: &'a str, idx: usize) -> bool
        where Self::Searcher: ReverseSearcher<'a> { /* default*/ }
}

A Pattern represents a builder for an associated type implementing a family of Searcher traits (see below), and will be implemented by all types that represent string patterns, which includes:

  • &str
  • char, and everything else implementing CharEq
  • Third party types like &Regex or Ascii
  • Alternative algorithm wrappers like struct BoyerMoore(&str)
impl<'a>     Pattern<'a> for char       { /* ... */ }
impl<'a, 'b> Pattern<'a> for &'b str    { /* ... */ }

impl<'a, 'b> Pattern<'a> for &'b [char] { /* ... */ }
impl<'a, F>  Pattern<'a> for F where F: FnMut(char) -> bool { /* ... */ }

impl<'a, 'b> Pattern<'a> for &'b Regex  { /* ... */ }

The lifetime parameter on Pattern exists in order to allow threading the lifetime of the haystack (the string to be searched through) through the API, and is a workaround for not having associated higher kinded types yet.

Consumers of this API can then call into_searcher() on the pattern to convert it into a type implementing a family of Searcher traits:

pub enum SearchStep {
    Match(usize, usize),
    Reject(usize, usize),
    Done
}
pub unsafe trait Searcher<'a> {
    fn haystack(&self) -> &'a str;
    fn next(&mut self) -> SearchStep;

    fn next_match(&mut self) -> Option<(usize, usize)> { /* default*/ }
    fn next_reject(&mut self) -> Option<(usize, usize)> { /* default*/ }
}
pub unsafe trait ReverseSearcher<'a>: Searcher<'a> {
    fn next_back(&mut self) -> SearchStep;

    fn next_match_back(&mut self) -> Option<(usize, usize)> { /* default*/ }
    fn next_reject_back(&mut self) -> Option<(usize, usize)> { /* default*/ }
}
pub trait DoubleEndedSearcher<'a>: ReverseSearcher<'a> {}

The basic idea of a Searcher is to expose an interface for iterating through all connected string fragments of the haystack while classifying them as either a match or a reject.

This happens in the form of the returned enum value. A Match needs to contain the start and end indices of a complete non-overlapping match, while a Reject may be emitted for arbitrary non-overlapping rejected parts of the string, as long as the start and end indices lie on valid utf8 boundaries.

Similar to iterators, depending on the concrete implementation a searcher can have additional capabilities that build on each other, which is why they will be defined in terms of a three-tier hierarchy:

  • Searcher<'a> is the basic trait that all searchers need to implement. It contains a next() method that returns the start and end indices of the next match or reject in the haystack, with the search beginning at the front (left) of the string. It also contains a haystack() getter for returning the actual haystack, which is the source of the 'a lifetime on the hierarchy. The reason for this getter being made part of the trait is twofold:
    • Every searcher needs to store some reference to the haystack anyway.
    • Users of this trait will need access to the haystack in order for the individual match results to be useful.
  • ReverseSearcher<'a> adds a next_back() method, allowing efficient searching in reverse as well (starting from the right). However, the results are not required to be equal to the results of next() in reverse (as would be the case for the DoubleEndedIterator trait), because that can not be efficiently guaranteed for all searchers. (For an example, see further below.)
  • Instead, DoubleEndedSearcher<'a> is provided as a marker trait for expressing that guarantee - if a searcher implements this trait, all results found from the left need to be equal to all results found from the right, in reverse order.

As an important last detail, both Searcher and ReverseSearcher are marked as unsafe traits, even though the actual methods aren’t. This is because every implementation of these traits needs to ensure that all indices returned by next() and next_back() lie on valid utf8 boundaries in the haystack.

Without that guarantee, every single match returned by a matcher would need to be double-checked for validity, which would be unnecessary and most likely unoptimizable work.

This is in contrast to the current hardcoded implementations, which can make use of such guarantees because the concrete types are known and all unsafe code needed for such optimizations is contained inside a single safe impl.

Given that most implementations of these traits will likely live in the std library anyway, and are thoroughly tested, marking these traits unsafe doesn’t seem like a huge burden to bear for good, optimizable performance.
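
To make the control flow concrete, here is a sketch of how a consumer might drive a searcher, assuming the Pattern, Searcher, and SearchStep items defined above are in scope:

// Returns the byte range of the first match, if any. The unsafe
// contract guarantees the returned indices lie on utf8 boundaries.
fn first_match<'a, P: Pattern<'a>>(haystack: &'a str, pat: P) -> Option<(usize, usize)> {
    let mut searcher = pat.into_searcher(haystack);
    loop {
        match searcher.next() {
            SearchStep::Match(start, end) => return Some((start, end)),
            SearchStep::Reject(..) => {}
            SearchStep::Done => return None,
        }
    }
}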

The role of the additional default methods

Pattern, Searcher and ReverseSearcher each offer a few additional default methods that give better optimization opportunities.

Most consumers of the pattern API will use them to more narrowly constrain how they are looking for a pattern, which, given an optimized implementation, should lead to mostly optimal code being generated.

Example for the issue with double-ended searching

Let the haystack be the string "fooaaaaabar", and let the pattern be the string "aa".

Then an efficient, lazy implementation of the matcher searching from the left would find these matches:

"foo[aa][aa]abar"

However, the same algorithm searching from the right would find these matches:

"fooa[aa][aa]bar"

This discrepancy can not be avoided without additional overhead or even allocations for caching in the reverse matcher, and thus “matching from the front” needs to be considered a different operation than “matching from the back”.

Why (uint, uint) instead of &str

Note: This section is a bit outdated now

It would be possible to define next and next_back to return &strs instead of (uint, uint) tuples.

A concrete searcher impl could then make use of unsafe code to construct such a slice cheaply, and by its very nature it is guaranteed to lie on utf8 boundaries, which would also allow not marking the traits as unsafe.

However, this approach has a couple of issues. For one, not every consumer of this API cares about only the matched slice itself:

  • The split() family of operations cares about the slices between matches.
  • Operations like match_indices() and find() need to actually return the offset to the start of the string as part of their definition.
  • The trim() and Xs_with() family of operations need to compare individual match offsets with each other and the start and end of the string.

In order for these use cases to work with a &str match, the concrete adapters would need to unsafely calculate the offset of a match &str to the start of the haystack &str.

But that in turn would require matcher implementors to only return actual sub slices into the haystack, and not random static string slices, as the API defined with &str would allow.

In order to resolve that issue, you’d have to do one of:

  • Add the uncheckable API constraint of only requiring true subslices, which would make the traits unsafe again, negating much of the benefit.
  • Return a more complex custom slice type that still contains the haystack offset. (This is listed as an alternative at the end of this RFC.)

In both cases, the API does not really improve significantly, so uint indices have been chosen as the “simple” default design.

New methods on StrExt

With the Pattern and Searcher traits defined and implemented, the actual str methods will be changed to make use of them:

pub trait StrExt for ?Sized {
    fn contains<'a, P>(&'a self, pat: P) -> bool where P: Pattern<'a>;

    fn split<'a, P>(&'a self, pat: P) -> Splits<P> where P: Pattern<'a>;
    fn rsplit<'a, P>(&'a self, pat: P) -> RSplits<P> where P: Pattern<'a>;
    fn split_terminator<'a, P>(&'a self, pat: P) -> TermSplits<P> where P: Pattern<'a>;
    fn rsplit_terminator<'a, P>(&'a self, pat: P) -> RTermSplits<P> where P: Pattern<'a>;
    fn splitn<'a, P>(&'a self, pat: P, n: uint) -> NSplits<P> where P: Pattern<'a>;
    fn rsplitn<'a, P>(&'a self, pat: P, n: uint) -> RNSplits<P> where P: Pattern<'a>;

    fn matches<'a, P>(&'a self, pat: P) -> Matches<P> where P: Pattern<'a>;
    fn rmatches<'a, P>(&'a self, pat: P) -> RMatches<P> where P: Pattern<'a>;
    fn match_indices<'a, P>(&'a self, pat: P) -> MatchIndices<P> where P: Pattern<'a>;
    fn rmatch_indices<'a, P>(&'a self, pat: P) -> RMatchIndices<P> where P: Pattern<'a>;

    fn starts_with<'a, P>(&'a self, pat: P) -> bool where P: Pattern<'a>;
    fn ends_with<'a, P>(&'a self, pat: P) -> bool where P: Pattern<'a>,
                                                        P::Searcher: ReverseSearcher<'a>;

    fn trim_matches<'a, P>(&'a self, pat: P) -> &'a str where P: Pattern<'a>,
                                                              P::Searcher: DoubleEndedSearcher<'a>;
    fn trim_left_matches<'a, P>(&'a self, pat: P) -> &'a str where P: Pattern<'a>;
    fn trim_right_matches<'a, P>(&'a self, pat: P) -> &'a str where P: Pattern<'a>,
                                                                    P::Searcher: ReverseSearcher<'a>;

    fn find<'a, P>(&'a self, pat: P) -> Option<uint> where P: Pattern<'a>;
    fn rfind<'a, P>(&'a self, pat: P) -> Option<uint> where P: Pattern<'a>,
                                                            P::Searcher: ReverseSearcher<'a>;

    // ...
}

These are mainly the same pattern-using methods as currently existing, only changed to uniformly use the new pattern API. The main differences are:

  • Duplicates like contains(char) and contains_str(&str) got merged into single generic methods.
  • CharEq-centric naming got changed to Pattern-centric naming by changing chars to matches in a few method names.
  • A Matches iterator has been added that just returns the pattern matches as &str slices. It’s uninteresting for patterns that look for a single string fragment, like the char and &str matchers, but useful for advanced patterns like predicates over codepoints, or regular expressions.
  • All operations that can work from both the front and the back consistently exist in two versions: the regular front version, and an r-prefixed reverse version. As explained above, this is because the two represent different operations and thus need to be handled as such. To be more precise, they can not be abstracted over by providing a DoubleEndedIterator implementation, as the differing results would break the requirement for double-ended iterators to behave like a double-ended queue from which elements are simply popped from both sides.

However, all iterators will still implement DoubleEndedIterator if the underlying matcher implements DoubleEndedSearcher, to keep the ability to do things like foo.split('a').rev().
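
A short sketch of the unified methods in use (these calls work in today's std):

fn demo() {
    let s = "abc1def2ghi";
    // The same generic methods accept chars, strings, char slices,
    // and closures as patterns:
    let _ = s.split('1');
    let _ = s.split("def");
    let pieces: Vec<&str> = s.split(|c: char| c.is_numeric()).collect();
    assert_eq!(pieces, ["abc", "def", "ghi"]);
    // The r-prefixed variants search from the back as a distinct
    // operation:
    let rpieces: Vec<&str> = s.rsplit(|c: char| c.is_numeric()).collect();
    assert_eq!(rpieces, ["ghi", "def", "abc"]);
}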

Transition and deprecation plans

Most changes in this RFC can be made in such a way that code using the old hardcoded or CharEq-using methods will still compile or give a deprecation warning.

It would even be possible to generically implement Pattern for all CharEq types, making the transition smoother.

Long-term, post 1.0, it would be possible to define new sets of Pattern and Searcher without a lifetime parameter by making use of higher kinded types in order to simplify the string APIs. Eg, instead of fn starts_with<'a, P>(&'a self, pat: P) -> bool where P: Pattern<'a>; you’d have fn starts_with<P>(&self, pat: P) -> bool where P: Pattern;.

In order to not break backwards-compatibility, these can use the same generic-impl trick to forward to the old traits, which would roughly look like this:

unsafe trait NewPattern {
    type Searcher<'a> where Searcher: NewSearcher;

    fn into_searcher<'a>(self, s: &'a str) -> Self::Searcher<'a>;
}

unsafe impl<'a, P> Pattern<'a> for P where P: NewPattern {
    type Searcher = <Self as NewPattern>::Searcher<'a>;

    fn into_searcher(self, haystack: &'a str) -> Self::Searcher {
        <Self as NewPattern>::into_searcher(self, haystack)
    }
}

unsafe trait NewSearcher for Self<'_> {
    fn haystack<'a>(self: &Self<'a>) -> &'a str;
    fn next_match<'a>(self: &mut Self<'a>) -> Option<(uint, uint)>;
}

unsafe impl<'a, M> Searcher<'a> for M<'a> where M: NewSearcher {
    fn haystack(&self) -> &'a str {
        <M as NewSearcher>::haystack(self)
    }
    fn next_match(&mut self) -> Option<(uint, uint)> {
        <M as NewSearcher>::next_match(self)
    }
}

Based on coherency experiments and assumptions about how future HKT will work, the author is assuming that the above implementation will work, but can not experimentally prove it.

Note: There might be still an issue with this upgrade path on the concrete iterator types. That is, Split<P> might turn into Split<'a, P>… Maybe require the 'a from the beginning?

In order for these new traits to fully replace the old ones without getting in their way, the old ones need to not be defined in a way that makes them “final”. That is, they should be defined in their own submodule, like str::pattern that can grow a sister module like str::newpattern, and not be exported in a global place like str or even the prelude (which would be unneeded anyway).

Drawbacks

  • It complicates the whole machinery and API behind the implementation of matching on string patterns.
  • The no-HKT-lifetime-workaround wart might be too confusing for something as commonplace as the string API.
  • This adds a few layers of generics, so compilation times and micro-optimizations might suffer.

Alternatives

Note: This section is not updated to the new naming scheme

In general:

  • Keep status quo, with all issues listed at the beginning.
  • Stabilize on hardcoded variants, eg providing both contains and contains_str. Similar to status quo, but no CharEq and thus no generics.

Under the assumption that the lifetime parameter on the traits in this proposal is too big a wart to have in the release string API, there is a primary alternative that would avoid it:

  • Stabilize on a variant around CharEq - This would mean hardcoded _str methods, generic CharEq methods, and no extensibility to types like Regex, but with an upgrade path for later upgrading CharEq to a full-fledged, HKT-using Pattern API, by providing back-compat generic impls.

Next, there are alternatives that might make a positive difference in the author’s opinion, but still have some negative trade-offs:

  • With the Matcher traits having the unsafe constraint of returning results unique to the current haystack already, they could just directly return a (*const u8, *const u8) pointing into it. This would allow a few more micro-optimizations, as now the matcher -> match -> final slice pipeline would no longer need to keep adding and subtracting the start address of the haystack for immediate results.
  • Extend Pattern into Pattern and ReversePattern, starting the forward-reverse split at the level of patterns directly. The two would still be in an inherits-from relationship like Matcher and ReverseSearcher, and be interchangeable if the latter also implements DoubleEndedSearcher, but on the str API, where clauses like where P: Pattern<'a>, P::Searcher: ReverseSearcher<'a> would turn into where P: ReversePattern<'a>.

Lastly, there are alternatives that don’t seem very favorable, but are listed for completeness’ sake:

  • Remove unsafe from the API by returning a special SubSlice<'a> type instead of (uint, uint) in each match, that wraps the haystack and the current match as a (*start, *match_start, *match_end, *end) pointer quad. It is unclear whether those two additional words per match end up being an issue after monomorphization, but two of them will be constant for the duration of the iteration, so chances are they won’t matter. The haystack() method could also be removed that way, as each match already returns the haystack. However, this still prevents removal of the lifetime parameters without HKT.
  • Remove the lifetimes on Matcher and Pattern by requiring users of the API to store the haystack slice themselves, duplicating it in the in-memory representation. However, this still runs into HKT issues with the impl of Pattern.
  • Remove the lifetime parameter on Pattern and Matcher by making them fully unsafe APIs, and require implementations to unsafely transmute back the lifetime of the haystack slice.
  • Remove unsafe from the API by not marking the Matcher traits as unsafe, requiring users of the API to explicitly check every match on validity in regard to utf8 boundaries.
  • Allow opting into the unsafe traits by providing parallel safe and unsafe Matcher traits or methods, with one implemented by default in terms of the other.

Unresolved questions

  • Concrete performance is untested compared to the current situation.
  • Should the API split in regard to forward-reverse matching be as symmetrical as possible, or as minimal as possible? In the first case, iterators like Matches and RMatches could both implement DoubleEndedIterator if a DoubleEndedSearcher exists, in the latter only Matches would, with RMatches only providing the minimum to support reverse operation. A ruling in favor of symmetry would also speak for the ReversePattern alternative.

Additional extensions

A similar abstraction system could be implemented for String APIs, so that for example string.push("foo"), string.push('f'), string.push('f'.to_ascii()) all work by using something like a StringSource trait.

This would allow operations like s.replace(&regex!(...), "foo"), which would be a method generic over both the pattern matched and the string fragment it gets replaced with:

fn replace<P, S>(&mut self, pat: P, with: S) where P: Pattern, S: StringSource { /* ... */ }

Summary

This RFC proposes several new generic conversion traits. The motivation is to remove the need for ad hoc conversion traits (like FromStr, AsSlice, ToSocketAddr, FromError) whose sole role is for generics bounds. Aside from cutting down on trait proliferation, centralizing these traits also helps the ecosystem avoid incompatible ad hoc conversion traits defined downstream from the types they convert to or from. It also future-proofs against eventual language features for ergonomic conversion-based overloading.

Motivation

The idea of generic conversion traits has come up from time to time, and now that multidispatch is available they can be made to work reasonably well. They are worth considering due to the problems they solve (given below), and considering now because they would obsolete several ad hoc conversion traits (and several more that are in the pipeline) for std.

Problem 1: overloading over conversions

Rust does not currently support arbitrary, implicit conversions – and for some good reasons. However, it is sometimes important ergonomically to allow a single function to be explicitly overloaded based on conversions.

For example, the recently proposed path APIs introduce an AsPath trait to make various path operations ergonomic:

pub trait AsPath {
    fn as_path(&self) -> &Path;
}

impl Path {
    ...

    pub fn join<P: AsPath>(&self, path: &P) -> PathBuf { ... }
}

The idea in particular is that, given a path, you can join using a string literal directly. That is:

// write this:
let new_path = my_path.join("fixed_subdir_name");

// not this:
let new_path = my_path.join(Path::new("fixed_subdir_name"));

It’s a shame to have to introduce new ad hoc traits every time such an overloading is desired. And because the traits are ad hoc, it’s also not possible to program generically over conversions themselves.

Problem 2: duplicate, incompatible conversion traits

There’s a somewhat more subtle problem compounding the above: if the author of the path API neglects to include traits like AsPath for its core types, but downstream crates want to overload on those conversions, those downstream crates may each introduce their own conversion traits, which will not be compatible with one another.

Having standard, generic conversion traits cuts down on the total number of traits, and also ensures that all Rust libraries have an agreed-upon way to talk about conversions.

Non-goals

When considering the design of generic conversion traits, it’s tempting to try to do away with all ad hoc conversion methods. That is, to replace methods like to_string and to_vec with a single method to::<String> and to::<Vec<u8>>.

Unfortunately, this approach carries several ergonomic downsides:

  • The required ::<_> syntax is pretty unfriendly. Something like to<String> would be much better, but is unlikely to happen given the current grammar.

  • Designing the traits to allow this usage is surprisingly subtle – it effectively requires two traits per type of generic conversion, with blanket impls mapping one to the other. Having such complexity for all conversions in Rust seems like a non-starter.

  • Discoverability suffers somewhat. Looking through a method list and seeing to_string is easier to comprehend (for newcomers especially) than having to crawl through the impls for a trait on the side – especially given the trait complexity mentioned above.

Nevertheless, this is a serious alternative that will be laid out in more detail below, and merits community discussion.

Detailed design

Basic design

The design is fairly simple, although perhaps not as simple as one might expect: we introduce a total of four traits:

trait AsRef<T: ?Sized> {
    fn as_ref(&self) -> &T;
}

trait AsMut<T: ?Sized> {
    fn as_mut(&mut self) -> &mut T;
}

trait Into<T> {
    fn into(self) -> T;
}

trait From<T> {
    fn from(T) -> Self;
}

The first three traits mirror our as/into conventions, but add a bit more structure to them: as-style conversions are from references to references and into-style conversions are between arbitrary types (consuming their argument).

A To trait, following our to conventions and converting from references to arbitrary types, is possible but is deferred for now.

The final trait, From, mimics the from constructors. This trait is expected to outright replace most custom from constructors. See below.
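
As a small sketch of that replacement (the Buffer type is hypothetical), a from_vec-style constructor becomes a From impl:

struct Buffer {
    bytes: Vec<u8>,
}

// Instead of an inherent constructor such as `Buffer::from_vec`:
impl From<Vec<u8>> for Buffer {
    fn from(v: Vec<u8>) -> Buffer {
        Buffer { bytes: v }
    }
}

// Callers then construct through the trait:
// let buf = Buffer::from(vec![1, 2, 3]);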

Why the reference restrictions?

If all of the conversion traits were between arbitrary types, you would have to use generalized where clauses and explicit lifetimes even for simple cases:

// Possible alternative:
trait As<T> {
    fn convert_as(self) -> T;
}

// But then you get this:
fn take_as<'a, T>(t: &'a T) where &'a T: As<&'a MyType>;

// Instead of this:
fn take_as<T>(t: &T) where T: As<MyType>;

If you need a conversion that works over any lifetime, you need to use higher-ranked trait bounds:

... where for<'a> &'a T: As<&'a MyType>

This case is particularly important when you cannot name a lifetime in advance, because it will be created on the stack within the function. It might be possible to add sugar so that where &T: As<&MyType> expands to the above automatically, but such an elision might have other problems, and in any case it would preclude writing direct bounds like fn foo<P: AsPath>.

The proposed trait definition essentially bakes in the needed lifetime connection, capturing the most common mode of use for as/to/into conversions. In the future, an HKT-based version of these traits could likely generalize further.

Why have multiple traits at all?

The biggest reason to have multiple traits is to take advantage of the lifetime linking explained above. In addition, however, it is a basic principle of Rust’s libraries that conversions are distinguished by cost and consumption, and having multiple traits makes it possible to (by convention) restrict attention to e.g. “free” as-style conversions by bounding only by AsRef.

Why have both Into and From? There are a few reasons:

  • Coherence issues: the order of the types is significant, so From allows extensibility in some cases that Into does not (see the sketch after this list).

  • To match with existing conventions around conversions and constructors (in particular, replacing many from constructors).
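
To make the coherence point concrete, here is a sketch with a hypothetical downstream type MyBuffer:

// In a crate downstream from std:
struct MyBuffer(Vec<u8>);

// Accepted: the implementing type (the Self of From) is local.
impl From<String> for MyBuffer {
    fn from(s: String) -> MyBuffer {
        MyBuffer(s.into_bytes())
    }
}

// The same conversion written as `impl Into<MyBuffer> for String` would
// place the foreign type String in the Self position, which coherence
// rejects; this is the extensibility that From provides and Into does not.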

Blanket impls

Given the above trait design, there are a few straightforward blanket impls as one would expect:

// AsMut implies Into
impl<'a, T, U> Into<&'a mut U> for &'a mut T where T: AsMut<U> {
    fn into(self) -> &'a mut U {
        self.as_mut()
    }
}

// Into implies From
impl<T, U> From<T> for U where T: Into<U> {
    fn from(t: T) -> U { t.into() }
}

An example

Using all of the above, here are some example impls and their use:

impl AsRef<str> for String {
    fn as_ref(&self) -> &str {
        self.as_slice()
    }
}
impl AsRef<[u8]> for String {
    fn as_ref(&self) -> &[u8] {
        self.as_bytes()
    }
}

impl Into<Vec<u8>> for String {
    fn into(self) -> Vec<u8> {
        self.into_bytes()
    }
}

fn main() {
    let a = format!("hello");
    let b: &[u8] = a.as_ref();
    let c: &str = a.as_ref();
    let d: Vec<u8> = a.into();
}
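
Thanks to the blanket impl mapping Into to From, the from constructor also comes for free here; a small sketch:

fn main() {
    let a = format!("hello");
    // Goes through `impl<T, U> From<T> for U where T: Into<U>`,
    // backed by the `Into<Vec<u8>>` impl for String above:
    let e: Vec<u8> = Vec::from(a);
}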

This use of generic conversions within a function body is expected to be rare, however; usually the traits are used for generic functions:

impl Path {
    fn join_path_inner(&self, p: &Path) -> PathBuf { ... }

    pub fn join_path<P: AsRef<Path>>(&self, p: &P) -> PathBuf {
        self.join_path_inner(p.as_ref())
    }
}

In this very typical pattern, you introduce an “inner” function that takes the converted value, and the public API is a thin wrapper around that. The main reason to do so is to avoid code bloat: given that the generic bound is used only for a conversion that can be done up front, there is no reason to monomorphize the entire function body for each input type.

An aside: codifying the generics pattern in the language

This pattern is so common that we probably want to consider sugar for it, e.g. something like:

impl Path {
    pub fn join_path(&self, p: ~Path) -> PathBuf {
        ...
    }
}

that would desugar into exactly the above (assuming that the ~ sigil was restricted to AsRef conversions). Such a feature is out of scope for this RFC, but it’s a natural and highly ergonomic extension of the traits being proposed here.

Preliminary conventions

Would all conversion traits be replaced by the proposed ones? Probably not, due to the combination of two factors (using the example of To, despite its being deferred for now):

  • You still want blanket impls like ToString for Show, but:
  • This RFC proposes that specific conversion methods like to_string stay in common use.

On the other hand, you’d expect a blanket impl of To<String> for any T: ToString, and one should prefer bounding over To<String> rather than ToString for consistency. Basically, the role of ToString is just to provide the ad hoc method name to_string in a blanket fashion.

So a rough, preliminary convention would be the following:

  • An ad hoc conversion method is one following the normal convention of as_foo, to_foo, into_foo or from_foo. A “generic” conversion method is one going through the generic traits proposed in this RFC. An ad hoc conversion trait is a trait providing an ad hoc conversion method.

  • Use ad hoc conversion methods for “natural”, outgoing conversions that should have easy method names and good discoverability. A conversion is “natural” if you’d call it directly on the type in normal code; “unnatural” conversions usually come from generic programming.

    For example, to_string is a natural conversion for str, while into_string is not; but the latter is sometimes useful in a generic context – and that’s what the generic conversion traits can help with.

  • On the other hand, favor From for all conversion constructors.

  • Introduce ad hoc conversion traits if you need to provide a blanket impl of an ad hoc conversion method, or need special functionality. For example, to_string needs a trait so that every Show type automatically provides it.

  • For any ad hoc conversion method, also provide an impl of the corresponding generic version; for traits, this should be done via a blanket impl (see the sketch after this list).

  • When using generics bounded over a conversion, always prefer to use the generic conversion traits. For example, bound S: To<String> not S: ToString. This encourages consistency, and also allows clients to take advantage of the various blanket generic conversion impls.

  • Use the “inner function” pattern mentioned above to avoid code bloat.
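
A small sketch of these conventions in action, using a hypothetical Wrapper type:

struct Wrapper {
    inner: String,
}

impl Wrapper {
    // Ad hoc conversion method: a "natural", discoverable name.
    pub fn as_str(&self) -> &str {
        &self.inner
    }
}

// ...paired with an impl of the corresponding generic trait, so that
// generic code bounded by AsRef<str> also accepts Wrapper:
impl AsRef<str> for Wrapper {
    fn as_ref(&self) -> &str {
        self.as_str()
    }
}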

Prelude changes

All of the conversion traits are added to the prelude. There are two reasons for doing so:

  • For AsRef/AsMut/Into, the reasoning is similar to the inclusion of PartialEq and friends: they are expected to appear ubiquitously as bounds.

  • For From, bounds are somewhat less common but the use of the from constructor is expected to be rather widespread.

Drawbacks

There are a few drawbacks to the design as proposed:

  • Since it does not replace all conversion traits, there’s the unfortunate case of having both a ToString trait and a To<String> trait bound. The proposed conventions go some distance toward at least keeping APIs consistent, but the redundancy is unfortunate. See Alternatives for a more radical proposal.

  • It may encourage more overloading over coercions, and also more generics code bloat (assuming that the “inner function” pattern isn’t followed). Coercion overloading is not necessarily a bad thing, however, since it is still explicit in the signature rather than wholly implicit. If we do go in this direction, we can consider language extensions that make it ergonomic and avoid code bloat.

Alternatives

The original form of this RFC used the names As.convert_as, AsMut.convert_as_mut, To.convert_to and Into.convert_into (though still From.from). After discussion, As was changed to AsRef, removing the collision between the keyword as and a method of that name, and the convert_ prefixes were removed.


The main alternative is one that attempts to provide methods that completely replace ad hoc conversion methods. To make this work, a form of double dispatch is used, so that the methods are added to every type but bounded by a separate set of conversion traits.

In this strawman proposal, the name “view shift” is used for as conversions, “conversion” for to conversions, and “transformation” for into conversions. These names are not too important, but are needed to distinguish the various generic methods.

The punchline is that, in the end, we can write

let s = format!("hello");
let b = s.shift_view::<[u8]>();

or, put differently, replace as_bytes with shift_view::<[u8]> – for better or worse.

In addition to the rather large jump in complexity, this alternative design also suffers from poor error messages. For example, if you accidentally typed shift_view::<u8> instead, you receive:

error: the trait `ShiftViewFrom<collections::string::String>` is not implemented for the type `u8`

which takes a bit of thought and familiarity with the traits to fully digest. Taken together, the complexity, error messages, and poor ergonomics of things like convert::<u8> rather than as_bytes led the author to discard this alternative design.

// VIEW SHIFTS

// "Views" here are always lightweight, non-lossy, always
// successful view shifts between reference types

// Immutable views

trait ShiftViewFrom<T: ?Sized> {
    fn shift_view_from(&T) -> &Self;
}

trait ShiftView {
    fn shift_view<T: ?Sized>(&self) -> &T where T: ShiftViewFrom<Self>;
}

impl<T: ?Sized> ShiftView for T {
    fn shift_view<U: ?Sized + ShiftViewFrom<T>>(&self) -> &U {
        ShiftViewFrom::shift_view_from(self)
    }
}

// Mutable coercions

trait ShiftViewFromMut<T: ?Sized> {
    fn shift_view_from_mut(&mut T) -> &mut Self;
}

trait ShiftViewMut {
    fn shift_view_mut<T: ?Sized>(&mut self) -> &mut T where T: ShiftViewFromMut<Self>;
}

impl<T: ?Sized> ShiftViewMut for T {
    fn shift_view_mut<U: ?Sized + ShiftViewFromMut<T>>(&mut self) -> &mut U {
        ShiftViewFromMut::shift_view_from_mut(self)
    }
}

// CONVERSIONS

trait ConvertFrom<T: ?Sized> {
    fn convert_from(&T) -> Self;
}

trait Convert {
    fn convert<T>(&self) -> T where T: ConvertFrom<Self>;
}

impl<T: ?Sized> Convert for T {
    fn convert<U>(&self) -> U where U: ConvertFrom<T> {
        ConvertFrom::convert_from(self)
    }
}

impl ConvertFrom<str> for Vec<u8> {
    fn convert_from(s: &str) -> Vec<u8> {
        s.to_string().into_bytes()
    }
}

// TRANSFORMATION

trait TransformFrom<T> {
    fn transform_from(T) -> Self;
}

trait Transform {
    fn transform<T>(self) -> T where T: TransformFrom<Self>;
}

impl<T> Transform for T {
    fn transform<U>(self) -> U where U: TransformFrom<T> {
        TransformFrom::transform_from(self)
    }
}

impl TransformFrom<String> for Vec<u8> {
    fn transform_from(s: String) -> Vec<u8> {
        s.into_bytes()
    }
}

impl<'a, T, U> TransformFrom<&'a T> for U where U: ConvertFrom<T> {
    fn transform_from(x: &'a T) -> U {
        x.convert()
    }
}

impl<'a, T, U> TransformFrom<&'a mut T> for &'a mut U where U: ShiftViewFromMut<T> {
    fn transform_from(x: &'a mut T) -> &'a mut U {
        ShiftViewFromMut::shift_view_from_mut(x)
    }
}

// Example

impl ShiftViewFrom<String> for str {
    fn shift_view_from(s: &String) -> &str {
        s.as_slice()
    }
}
impl ShiftViewFrom<String> for [u8] {
    fn shift_view_from(s: &String) -> &[u8] {
        s.as_bytes()
    }
}

fn main() {
    let s = format!("hello");
    let b = s.shift_view::<[u8]>();
}

Possible further work

We could add a To trait.

trait To<T> {
    fn to(&self) -> T;
}

As far as blanket impls are concerned, there are a few simple ones:

// AsRef implies To
impl<'a, T: ?Sized, U: ?Sized> To<&'a U> for &'a T where T: AsRef<U> {
    fn to(&self) -> &'a U {
        self.as_ref()
    }
}

// To implies Into
impl<'a, T, U> Into<U> for &'a T where T: To<U> {
    fn into(self) -> U {
        self.to()
    }
}
  • Start Date: 2014-12-18
  • RFC PR: 531
  • Rust Issue: n/a

Summary

According to the current documents, the RFC process is required for making “substantial” changes to the Rust distribution. The process itself is lightweight, but it lacks a definition of “the Rust distribution.” This RFC aims to amend the process with a definition of “Rust distribution” that is both broad and clear, while keeping the process itself intact.

Motivation

The motivation for this change comes from the recent decision for Crates.io to affirm its first-come, first-served policy. While there was discussion of the matter on a GitHub issue, this discussion was rather low visibility. Regardless of the outcome of this particular decision, it highlights the fact that there is not a clear place for thorough discussion of policy decisions related to the outermost parts of Rust.

Detailed design

To remedy this issue, there must be a defined scope for the RFC process. This definition would be incorporated into the section titled “When you need to follow this process.” The goal here is to be as explicit as possible. This RFC proposes that the scope of the RFC process be defined as follows:

  • Rust
  • Cargo
  • Crates.io
  • The RFC process itself

This definition explicitly does not include:

  • Other crates maintained under the rust-lang organization, such as time.

Drawbacks

The main drawback is that this definition may be too narrow, and therefore restrictive. Fortunately, the definition includes the ability to amend the RFC process itself, so the scope can be expanded if the need arises.

Alternatives

The alternative is leaving the process as is. However, adding clarity at little to no cost should be preferred as it lowers the barrier to entry for contributions, and increases the visibility of potential changes that may have previously been discussed outside of an RFC.

Unresolved questions

Are there other things that should be explicitly included as part of the scope of the RFC process right now?

  • Start Date: 2014-12-19
  • RFC PR: 532
  • Rust Issue: 20361

Summary

This RFC proposes changing the mod keyword, which is used in use items to refer to the immediate parent namespace (use a::b::{mod, c}), to self.

Motivation

While this looks fine:

use a::b::{mod, c};

pub mod a {
    pub mod b {
        pub type c = ();
    }
}

This looks strange, since we are not really importing a module:

use Foo::{mod, Bar, Baz};

enum Foo { Bar, Baz }

RFC #168 was written before enums were namespaced, so the choice of keyword was suboptimal.

Detailed design

This RFC simply proposes to use self in place of mod. This should amount to a one-line change in the parser, possibly with a renaming of the relevant AST node (PathListMod).
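
Applied to the examples from the Motivation section, the imports become:

use a::b::{self, c};

use Foo::{self, Bar, Baz};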

Drawbacks

self is already used to denote a relative path in use items. While the two uses can be clearly distinguished (any use of self proposed in this RFC will appear inside braces), this may cause some confusion for beginners.

Alternatives

Don’t do this. Simply accept that mod also acts as a general term for namespaces.

Allow enum to be used in place of mod when the parent item is enum. This clearly expresses the intent and it doesn’t reuse self. However, this is not very future-proof for several reasons.

  • Any item acting as a namespace would need a corresponding keyword. This is backward compatible but cumbersome.
  • If such a namespace is not defined with an item but only implicitly, we may not have a suitable keyword to use.
  • We currently import all items sharing the same name (e.g. struct P(Q); defines both a type and a constructor named P), with no way of selectively importing one of them by item type. An explicit item type in use would imply that we can import selectively, when we actually can’t.

Unresolved questions

None.

Summary

In order to prepare for an expected future implementation of non-zeroing dynamic drop, remove support for:

  • moving individual elements into an uninitialized fixed-sized array, and

  • moving individual elements out of fixed-sized arrays [T; n] (copying and borrowing such elements is still permitted).

Motivation

If we want to continue supporting dynamic drop while also removing automatic memory zeroing and drop-flags, then we need to either (1) adopt potentially complex code generation strategies to support arrays with only some elements initialized (as discussed in the unresolved questions for RFC PR 320), or (2) remove support for constructing such arrays in safe code.

This RFC is proposing the second tack.

The expectation is that relatively few libraries need to support moving out of fixed-sized arrays (and even fewer take advantage of being able to initialize individual elements of an uninitialized array, as supporting this was almost certainly not intentional in the language design). Therefore removing the feature from the language will present relatively little burden.

Detailed design

If an expression e has type [T; n] and T does not implement Copy, then it will be illegal to use e[i] in an r-value position.

If an expression e has type [T; n], then the expression e[i] = <expr> will be made illegal at points in the control flow where e has not yet been initialized.

Note that it remains legal to overwrite an element in an initialized array: e[i] = <expr>, as today. This will continue to drop the overwritten element before moving the result of <expr> into place.

Note also that the proposed change has no effect on the semantics of destructuring bind; i.e. fn([a, b, c]: [Elem; 3]) { ... } will continue to work as it does today.
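
A sketch of the resulting rules, using a hypothetical non-Copy element type Elem:

struct Elem;

fn demo(mut arr: [Elem; 3], replacement: Elem) {
    arr[1] = replacement;     // still legal: the old element is dropped,
                              // then the new value is moved into place
    let _borrow = &arr[0];    // still legal: borrowing an element
    // let moved = arr[0];    // now illegal: would move out of a non-Copy array
}

fn destructure([a, _b, _c]: [Elem; 3]) -> Elem {
    a                         // destructuring bind continues to work
}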

A prototype implementation has been posted at Rust PR 21930.

Drawbacks

  • Adopting this RFC introduces a limitation on the language based on a hypothetical optimization that has not yet been implemented (though much of the groundwork for its supporting analyses is done).

Also, as noted in the comment thread from RFC PR 320:

  • We support moving a single element out of an n-tuple, and “by analogy” we should support moving out of [T; n]. (Note that one can still move out of [T; n] in some cases via destructuring bind.)

  • It is “nice” to be able to write

    fn grab_random_from(actions: [Action; 5]) -> Action { actions[rand_index()] }

    To express this now, one would be forced to use clone() instead (or pass in a Vec and do some element swapping).

Alternatives

We can just leave things as they are; there are hypothetical code-generation strategies for supporting non-zeroing drop even with this feature, as discussed in the comment thread from RFC PR 320.

Unresolved questions

None

  • Start Date: 2014-12-19
  • RFC PR: 534
  • Rust Issue: 20362

Summary

Rename the #[deriving(Foo)] syntax extension to #[derive(Foo)].

Motivation

Unlike our other verb-based attribute names, “deriving” stands alone as a present participle. By convention our attributes prefer “warn” rather than “warning”, “inline” rather than “inlining”, “test” rather than “testing”, and so on. We also have a trend against present participles in general, such as with Encoding being changed to Encode.

It’s also shorter to type, which is very important in a world without implicit Copy implementations.

Finally, if I may be subjective, derive(Thing1, Thing2) simply reads better than deriving(Thing1, Thing2).

Detailed design

Rename the deriving attribute to derive. This should be a very simple find-and-replace.
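
For illustration (the Point type is arbitrary):

// Formerly written as #[deriving(Clone, PartialEq)]:
#[derive(Clone, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}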

Drawbacks

Participles the world over will lament the loss of their only foothold in this promising young language.

Summary

This RFC proposes that we rename the pointer-sized integer types int/uint, so as to avoid misconceptions and misuses. After extensive community discussions and several revisions of this RFC, the finally chosen names are isize/usize.

Motivation

Currently, Rust defines two machine-dependent integer types int/uint that have the same number of bits as the target platform’s pointer type. These two types are used for many purposes: indices, counts, sizes, offsets, etc.

The problem is, int/uint look like default integer types, but pointer-sized integers are not good defaults, and it is desirable to discourage people from overusing them.

And it is a quite popular opinion that the best way to discourage their use is to rename them.

Previously, the latest renaming attempt, RFC PR 464, was rejected. (Some parts of this RFC are based on that RFC.) A tale of two’s complement states the following reasons:

  • Changing the names would affect literally every Rust program ever written.
  • Adjusting the guidelines and tutorial can be equally effective in helping people to select the correct type.
  • All the suggested alternative names have serious drawbacks.

However:

Rust was and is undergoing quite a lot of breaking changes. Even though the int/uint renaming will “break the world”, it is not unheard of, and it is mainly a “search & replace”. Also, a transition period can be provided, during which int/uint can be deprecated, while the new names can take time to replace them. So “to avoid breaking the world” shouldn’t stop the renaming.

int/uint have a long tradition of being the default integer type names, so programmers will be tempted to use them in Rust, even the experienced ones, no matter what the documentation says. The semantics of int/uint in Rust is quite different from that in many other mainstream languages. Worse, the Swift programming language, which is heavily influenced by Rust, has the types Int/UInt with almost the same semantics as Rust’s int/uint, but it actively encourages programmers to use Int as much as possible. From the Swift Programming Language:

Swift provides an additional integer type, Int, which has the same size as the current platform’s native word size: …

Swift also provides an unsigned integer type, UInt, which has the same size as the current platform’s native word size: …

Unless you need to work with a specific size of integer, always use Int for integer values in your code. This aids code consistency and interoperability.

Use UInt only when you specifically need an unsigned integer type with the same size as the platform’s native word size. If this is not the case, Int is preferred, even when the values to be stored are known to be non-negative.

Thus, it is very likely that newcomers will come to Rust, expecting int/uint to be the preferred integer types, even if they know that they are pointer-sized.

Not renaming int/uint violates the principle of least surprise, and is not newcomer friendly.

Before the rejection of RFC PR 464, the community largely settled on two pairs of candidates: imem/umem and iptr/uptr. As stated in previous discussions, the names have some drawbacks that may be unbearable. (Please refer to A tale of two’s complement and related discussions for details.)

This RFC originally proposed a new pair of alternatives intx/uintx.

However, given the discussions about the previous revisions of this RFC, and the discussions in Restarting the int/uint Discussion, this RFC author (@CloudiDust) now believes that intx/uintx are not ideal. Instead, one of the other pairs of alternatives should be chosen. The finally chosen names are isize/usize.

Detailed Design

  • Rename int/uint to isize/usize, with them being their own literal suffixes.
  • Update code and documentation to use pointer-sized integers more narrowly for their intended purposes. Provide a deprecation period to carry out these updates.

usize in action:

fn slice_or_fail<'b>(&'b self, from: &usize, to: &usize) -> &'b [T]

There are different opinions about which literal suffixes to use. The following section discusses the alternatives.

Choosing literal suffixes:

isize/usize:

  • Pros: They are the same as the type names, very consistent with the rest of the integer primitives.
  • Cons: They are too long for some, and may stand out too much as suffixes. However, discouraging people from overusing isize/usize is the point of this RFC. And if they are not overused, then this will not be a problem in practice.

is/us:

  • Pros: They are succinct as suffixes.
  • Cons: They are actual English words, with is being a keyword in many programming languages and us being an abbreviation of “unsigned” (losing information) or “microsecond” (misleading). Also, is/us may be too short (shorter than i64/u64) and too pleasant to use, which can be a problem.

Note: No matter which suffixes get chosen, it can be beneficial to reserve is as a keyword, but this is outside the scope of this RFC.

iz/uz:

  • Pros and cons: Similar to those of is/us, except that iz/uz are not actual words, which is an additional advantage. However it may not be immediately clear that iz/uz are abbreviations of isize/usize.

i/u:

  • Pros: They are very succinct.
  • Cons: They are too succinct and carry the “default integer types” connotation, which is undesirable.

isz/usz:

  • Pros: They are a middle ground between isize/usize and is/us, neither too long nor too short. They are not actual English words and it’s clear that they are short for isize/usize.
  • Cons: Not everyone likes the appearances of isz/usz, but this can be said about all the candidates.

After community discussions, it is deemed that using isize/usize directly as suffixes is a fine choice and there is no need to introduce other suffixes.
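
For illustration, literals written with the chosen suffixes:

fn main() {
    let index = 3usize;    // literal with the usize suffix
    let offset = -1isize;  // literal with the isize suffix
}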

Advantages of isize/usize:

  • The names indicate their common use cases (container sizes/indices/offsets), so people will know where to use them, instead of overusing them everywhere.
  • The names follow the i/u + {suffix} pattern that is used by all the other primitive integer types like i32/u32.
  • The names are newcomer friendly and have familiarity advantage over almost all other alternatives.
  • The names are easy on the eyes.

See Alternatives B to L for the alternatives to isize/usize that have been rejected.

Drawbacks

Drawbacks of the renaming in general:

  • Renaming int/uint requires changing much existing code. On the other hand, this is an ideal opportunity to fix integer portability bugs.

Drawbacks of isize/usize:

  • The names fail to indicate the precise semantics of the types - pointer-sized integers. (And they don’t follow the i32/u32 pattern as faithfully as possible, as 32 indicates the exact size of the types, but size in isize/usize is vague in this aspect.)
  • The names favour some of the types’ use cases over the others.
  • The names remind people of C’s ssize_t/size_t, but isize/usize don’t share the exact same semantics with the C types.

Familiarity is a double edged sword here. isize/usize are chosen not because they are perfect, but because they represent a good compromise between semantic accuracy, familiarity and code readability. Given good documentation, the drawbacks listed here may not matter much in practice, and the combined familiarity and readability advantage outweighs them all.

Alternatives

A. Keep the status quo:

Which may hurt in the long run, especially when there is at least one (would-be?) high-profile language (which is Rust-inspired) taking the opposite stance of Rust.

The following alternatives make different trade-offs, and choosing one would be quite a subjective matter. But they are all better than the status quo.

B. iptr/uptr:

  • Pros: “Pointer-sized integer”, exactly what they are.
  • Cons: C/C++ have intptr_t/uintptr_t, which are typically only used for storing casted pointer values. We don’t want people to confuse the Rust types with the C/C++ ones, as the Rust ones have more typical use cases. Also, people may wonder why all data structures have “pointers” in their method signatures. Besides the “funny-looking” aspect, the names may carry an incorrect “pointer fiddling and unsafe stuff” connotation, as ptr isn’t usually seen in safe Rust code.

In the following snippet:

fn slice_or_fail<'b>(&'b self, from: &uptr, to: &uptr) -> &'b [T]

It feels like working with pointers, not integers.

C. imem/umem:

When originally proposed, imem/umem were interpreted as “memory numbers” (see @1fish2’s comment in RFC PR 464):

imem/umem are “memory numbers.” They’re good for indexes, counts, offsets, sizes, etc. As memory numbers, it makes sense that they’re sized by the address space.

However, this interpretation seems vague and not quite convincing, especially when all other integer types in Rust are named precisely in the “i/u + {size}” pattern, with no “indirection” involved. What is “memory-sized” anyway? Actually, though, they can be interpreted as memory-pointer-sized, making mem a precise size specifier just like ptr.

  • Pros: Types with similar names do not exist in mainstream languages, so people will not make incorrect assumptions.
  • Cons: mem -> memory-pointer-sized is definitely not as obvious as ptr -> pointer-sized. The unfamiliarity may turn newcomers away from Rust.

Also, for some, imem/umem just don’t feel like integers no matter how they are interpreted, especially under certain circumstances. In the following snippet:

fn slice_or_fail<'b>(&'b self, from: &umem, to: &umem) -> &'b [T]

umem still feels like a pointer-like construct here (from “some memory” to “some other memory”), even though it doesn’t have ptr in its name.

D. intp/uintp and intm/uintm:

Variants of Alternatives B and C. Instead of stressing the ptr or mem part, they stress the int or uint part.

They are more integer-like than iptr/uptr or imem/umem if one knows where to split the words.

The problem here is that they don’t strictly follow the i/u + {size} pattern, are of different lengths, and the more frequently used type uintp(uintm) has a longer name. Granted, this problem already exists with int/uint, but those two are names that everyone is familiar with.

So they may not be as pretty as iptr/uptr or imem/umem.

fn slice_or_fail<'b>(&'b self, from: &uintm, to: &uintm) -> &'b [T]
fn slice_or_fail<'b>(&'b self, from: &uintp, to: &uintp) -> &'b [T]

E. intx/uintx:

The original proposed names of this RFC, where x means “unknown/variable/platform-dependent”.

They share the same problems with intp/uintp and intm/uintm, while in addition failing to be specific enough. There are other kinds of platform-dependent integer types after all (like register-sized ones), so which ones are intx/uintx?

F. idiff/usize:

There is a problem with isize: it most likely will remind people of C/C++ ssize_t. But ssize_t is in the POSIX standard, not the C/C++ ones, and is not for index offsets according to POSIX. The correct type for index offsets in C99 is ptrdiff_t, so for a type representing offsets, idiff may be a better name.

However, isize/usize have the advantage of being symmetrical, and ultimately, even with a name like idiff, some semantic mismatch between idiff and ptrdiff_t would still exist. Also, for fitting a casted pointer value, a type named isize is better than one named idiff. (Though both would lose to iptr.)

G. iptr/uptr and idiff/usize:

Rename int/uint to iptr/uptr, with idiff/usize being aliases and used in container method signatures.

This is for addressing the “not enough use cases covered” problem. Best of both worlds at first glance.

iptr/uptr will be used for storing casted pointer values, while idiff/usize will be used for offsets and sizes/indices, respectively.

iptr/uptr and idiff/usize may even be treated as different types to prevent people from accidentally mixing their usage.

This will bring the Rust type names quite in line with the standard C99 type names, which may be a plus from the familiarity point of view.

However, this setup brings two sets of types that share the same underlying representations. C distinguishes between size_t/uintptr_t/intptr_t/ptrdiff_t not only because they are used under different circumstances, but also because the four may have representations that are potentially different from each other on some architectures. Rust assumes a flat memory address space and its int/uint types don’t exactly share semantics with any of the C types if the C standard is strictly followed.

Thus, even introducing four names would not fix the “failing to express the precise semantics of the types” problem. Rust just doesn’t need to, and shouldn’t distinguish between iptr/idiff and uptr/usize, doing so would bring much confusion for very questionable gain.

H. isiz/usiz:

A pair of variants of isize/usize. This author believes that the missing e may be enough to warn people that these are not ssize_t/size_t with “Rustfied” names. But at the same time, isiz/usiz mostly retain the familiarity of isize/usize.

However, isiz/usiz still hide the actual semantics of the types, and omitting but a single letter from a word does feel too hack-ish.

fn slice_or_fail<'b>(&'b self, from: &usiz, to: &usiz) -> &'b [T]

I. iptr_size/uptr_size:

The names are very clear about the semantics, but are also irregular, too long and feel out of place.

fn slice_or_fail<'b>(&'b self, from: &uptr_size, to: &uptr_size) -> &'b [T]

J. iptrsz/uptrsz:

Clear semantics, but still a bit too long (though better than iptr_size/uptr_size), and the ptr parts are still a bit concerning (though to a much lesser extent than with iptr/uptr). On the other hand, being “a bit too long” may not be a disadvantage here.

fn slice_or_fail<'b>(&'b self, from: &uptrsz, to: &uptrsz) -> &'b [T]

K. ipsz/upsz:

Now (and only now, which is the problem) it is clear where this pair of alternatives comes from.

By shortening ptr to p, ipsz/upsz no longer stress the “pointer” part in any way. Instead, the sz or “size” parts are (comparatively) stressed. Interestingly, ipsz/upsz look similar to isiz/usiz.

So this pair of names actually reflects both the precise semantics of “pointer-sized integers” and the fact that they are commonly used for “sizes”. However,

fn slice_or_fail<'b>(&'b self, from: &upsz, to: &upsz) -> &'b [T]

ipsz/upsz have gone too far. They are completely incomprehensible without the documentation. Many rightfully do not like letter soup. The only advantage here is that, no one would be very likely to think he/she is dealing with pointers. iptrsz/uptrsz are better in the comprehensibility aspect.

L. Others:

There are other alternatives not covered in this RFC. Please refer to this RFC’s comments and RFC PR 464 for more.

Unresolved questions

None. Necessary decisions about Rust’s general integer type policies have been made in Restarting the int/uint Discussion.

History

Amended by RFC 573 to change the suffixes from is and us to isize and usize. Tracking issue for this amendment is rust-lang/rust#22496.

Summary

  1. Remove the Sized default for the implicitly declared Self parameter on traits.
  2. Make it “object unsafe” for a trait to inherit from Sized.

Motivation

The primary motivation is to enable a trait object SomeTrait to implement the trait SomeTrait. This was the design goal of enforcing object safety, but there was a detail that was overlooked, which this RFC aims to correct.

Secondary motivations include:

  • More generality for traits, as they are applicable to DST.
  • Eliminate the confusing and irregular impl Trait for ?Sized syntax.
  • Sidestep questions about whether the ?Sized default is inherited like other supertrait bounds that appear in a similar position.

This change has been implemented. Fallout within the standard library was quite minimal, since the default only affects default method implementations.

Detailed design

Currently, all type parameters are Sized by default, including the implicit Self parameter that is part of a trait definition. To avoid the default Sized bound on Self, one declares a trait as follows (this example uses the syntax accepted in RFC 490 but not yet implemented):

trait Foo for ?Sized { ... }

This syntax doesn’t have any other precedent in the language. One might expect to write:

trait Foo : ?Sized { ... }

However, placing ?Sized in the supertrait listing raises awkward questions regarding inheritance. Certainly, when experimenting with this syntax early on, we found it very surprising that the ?Sized bound was “inherited” by subtraits. At the same time, it makes no sense to inherit, since all that the ?Sized notation is saying is “do not add Sized”, and you can’t inherit the absence of a thing. Having traits simply not inherit from Sized by default sidesteps this problem altogether and avoids the need for a special syntax to suppress the (now absent) default.

Removing the default also has the benefit of making traits applicable to more types by default. One particularly useful case is trait objects. We are working towards a goal where the trait object for a trait Foo always implements the trait Foo. Because the type Foo is an unsized type, this is naturally not possible if Foo inherits from Sized (since in that case every type that implements Foo must also be Sized).

The impact of this change is minimal under the current rules. This is because it only affects default method implementations. In any actual impl, the Self type is bound to a specific type, and hence it is known whether or not that type is Sized. This change has been implemented, and hence the fallout can be seen on this branch (specifically, this commit contains the fallout from the standard library). That same branch also implements the changes needed so that every trait object Foo implements the trait Foo.
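
A sketch of that fallout, using a hypothetical trait: with Self no longer Sized by default, a default method that needs a sized Self states the bound explicitly, rather than the whole trait inheriting from Sized; traits without such default methods need no change at all.

trait Duplicate {
    // Returning Self by value requires a sized Self, so the default
    // method carries the bound itself:
    fn duplicate(&self) -> Self where Self: Sized + Clone {
        self.clone()
    }
}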

Drawbacks

The Self parameter is inconsistent with other type parameters if we adopt this RFC. We believe this is acceptable since it is syntactically distinguished in other ways (for example, it is not declared), and the benefits are substantial.

Alternatives

  • Leave Self as it is. The change to object safety must be made in any case, which would mean that for a trait object Foo to implement the trait Foo, it would have to be declared trait Foo for ?Sized. Indeed, that would be necessary even to create a trait object Foo. This seems like an untenable burden, so adopting this design choice seems to imply reversing the decision that all trait objects implement their respective traits (RFC 255).

  • Remove the Sized defaults altogether. This approach is purer, but the annotation burden is substantial. We continue to experiment in the hopes of finding an alternative to the current blanket default, but without success thus far (beyond the idea of doing global inference).

Unresolved questions

  • None.

Summary

Future-proof the allowed forms that the input to an MBE can take by requiring certain delimiters following NTs in a matcher. In the future, it will be possible to lift these restrictions backwards-compatibly if desired.

Key Terminology

  • macro: anything invocable as foo!(...) in source code.
  • MBE: macro-by-example, a macro defined by macro_rules.
  • matcher: the left-hand-side of a rule in a macro_rules invocation, or a subportion thereof.
  • macro parser: the bit of code in the Rust parser that will parse the input using a grammar derived from all of the matchers.
  • fragment: The class of Rust syntax that a given matcher will accept (or “match”).
  • repetition: a fragment that follows a regular repeating pattern
  • NT: non-terminal, the various “meta-variables” or repetition matchers that can appear in a matcher, specified in MBE syntax with a leading $ character.
  • simple NT: a “meta-variable” non-terminal (further discussion below).
  • complex NT: a repetition matching non-terminal, specified via Kleene closure operators (*, +).
  • token: an atomic element of a matcher; i.e. identifiers, operators, open/close delimiters, and simple NT’s.
  • token tree: a tree structure formed from tokens (the leaves), complex NT’s, and finite sequences of token trees.
  • delimiter token: a token that is meant to separate the end of one fragment from the start of the next fragment.
  • separator token: an optional delimiter token in a complex NT that separates each pair of elements in the matched repetition.
  • separated complex NT: a complex NT that has its own separator token.
  • delimited sequence: a sequence of token trees with appropriate open- and close-delimiters at the start and end of the sequence.
  • empty fragment: The class of invisible Rust syntax that separates tokens, i.e. whitespace, or (in some lexical contexts), the empty token sequence.
  • fragment specifier: The identifier in a simple NT that specifies which fragment the NT accepts.
  • language: a context-free language.

Example:

macro_rules! i_am_an_mbe {
    (start $foo:expr $($i:ident),* end) => ($foo)
}

(start $foo:expr $($i:ident),* end) is a matcher. The whole matcher is a delimited sequence (with open- and close-delimiters ( and )), and $foo and $i are simple NT’s with expr and ident as their respective fragment specifiers.

$($i:ident),* is also an NT; it is a complex NT that matches a comma-separated repetition of identifiers. The , is the separator token for the complex NT; it occurs in between each pair of elements (if any) of the matched fragment.

Another example of a complex NT is $(hi $e:expr ;)+, which matches any fragment of the form hi <expr>; hi <expr>; ... where hi <expr>; occurs at least once. Note that this complex NT does not have a dedicated separator token.

(Note that Rust’s parser ensures that delimited sequences always occur with proper nesting of token tree structure and correct matching of open- and close-delimiters.)

Motivation

In current Rust (version 0.12; i.e. pre 1.0), the macro_rules parser is very liberal in what it accepts in a matcher. This can cause problems, because it is possible to write an MBE which corresponds to an ambiguous grammar. When an MBE is invoked, if the macro parser encounters an ambiguity while parsing, it will bail out with a “local ambiguity” error. As an example for this, take the following MBE:

macro_rules! foo {
    ($($foo:expr)* $bar:block) => (/*...*/)
};

Attempts to invoke this MBE will never succeed, because the macro parser will always emit an ambiguity error rather than make a choice when presented an ambiguity. In particular, it needs to decide when to stop accepting expressions for foo and look for a block for bar (noting that blocks are valid expressions). Situations like this are inherent to the macro system. On the other hand, it’s possible to write an unambiguous matcher that becomes ambiguous due to changes in the syntax for the various fragments. As a concrete example:

macro_rules! bar {
    ($in:ty ( $($arg:ident)*, ) -> $out:ty;) => (/*...*/)
};

When the type syntax was extended to include the unboxed closure traits, an input such as FnMut(i8, u8) -> i8; became ambiguous. The goal of this proposal is to prevent such scenarios in the future by requiring certain “delimiter tokens” after an NT. When extending Rust’s syntax in the future, ambiguity need only be considered when combined with these sets of delimiters, rather than any possible arbitrary matcher.


Another example of a potential extension to the language that motivates a restricted set of “delimiter tokens” is (postponed) RFC 352, “Allow loops to return values other than ()”, where the break expression would now accept an optional input expression: break <expr>.

  • This proposed extension to the language, combined with the facts that break and { <stmt> ... <expr>? } are Rust expressions, implies that { should not be in the follow set for the expr fragment specifier.

  • Thus in a slightly more ideal world the following program would not be accepted, because the interpretation of the macro could change if we were to accept RFC 352:

    macro_rules! foo {
        ($e:expr { stuff }) => { println!("{:?}", $e) }
    }
    
    fn main() {
        loop { foo!(break { stuff }); }
    }

    (in our non-ideal world, the program is legal in Rust versions 1.0 through at least 1.4)

Detailed design

We will tend to use the variable “M” to stand for a matcher, variables “t” and “u” for arbitrary individual tokens, and the variables “tt” and “uu” for arbitrary token trees. (The use of “tt” does present potential ambiguity with its additional role as a fragment specifier; but it will be clear from context which interpretation is meant.)

“SEP” will range over separator tokens, “OP” over the Kleene operators * and +, and “OPEN”/“CLOSE” over matching token pairs surrounding a delimited sequence (e.g. [ and ]).

We also use Greek letters “α” “β” “γ” “δ” to stand for potentially empty token-tree sequences. (However, the Greek letter “ε” (epsilon) has a special role in the presentation and does not stand for a token-tree sequence.)

  • This Greek letter convention is usually just employed when the presence of a sequence is a technical detail; in particular, when I wish to emphasize that we are operating on a sequence of token-trees, I will use the notation “tt …” for the sequence, not a Greek letter

Note that a matcher is merely a token tree. A “simple NT”, as mentioned above, is a meta-variable NT; thus it is a non-repetition. For example, $foo:ty is a simple NT but $($foo:ty)+ is a complex NT.

Note also that in the context of this RFC, the term “token” generally includes simple NTs.

Finally, it is useful for the reader to keep in mind that according to the definitions of this RFC, no simple NT matches the empty fragment, and likewise no token matches the empty fragment of Rust syntax. (Thus, the only NT that can match the empty fragment is a complex NT.)
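
For example, a complex NT with the * operator can match the empty fragment (the macro name here is hypothetical):

macro_rules! idents {
    ($($i:ident)*) => { /* expansion elided */ };
}

idents!();        // legal: the complex NT matches the empty fragment
idents!(a b c);   // also legal: a repetition of three identifiers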

The Matcher Invariant

This RFC establishes the following two-part invariant for valid matchers:

  1. For any two successive token tree sequences in a matcher M (i.e. M = ... tt uu ...), we must have FOLLOW(... tt) ⊇ FIRST(uu ...)

  2. For any separated complex NT in a matcher, M = ... $(tt ...) SEP OP ..., we must have SEP ∈ FOLLOW(tt ...).

The first part says that whatever actual token comes after a matcher must be somewhere in the predetermined follow set. This ensures that a legal macro definition will continue to assign the same determination as to where ... tt ends and uu ... begins, even as new syntactic forms are added to the language.

The second part says that a separated complex NT must use a separator token that is part of the predetermined follow set for the internal contents of the NT. This ensures that a legal macro definition will continue to parse an input fragment into the same delimited sequence of tt ...’s, even as new syntactic forms are added to the language.

(This is assuming that all such changes are appropriately restricted, by the definition of FOLLOW below, of course.)

The above invariant is only formally meaningful if one knows what FIRST and FOLLOW denote. We address this in the following sections.

FIRST and FOLLOW, informally

FIRST and FOLLOW are defined as follows.

A given matcher M maps to three sets: FIRST(M), LAST(M) and FOLLOW(M).

Each of the three sets is made up of tokens. FIRST(M) and LAST(M) may also contain a distinguished non-token element ε (“epsilon”), which indicates that M can match the empty fragment. (But FOLLOW(M) is always just a set of tokens.)

Informally:

  • FIRST(M): collects the tokens potentially used first when matching a fragment to M.

  • LAST(M): collects the tokens potentially used last when matching a fragment to M.

  • FOLLOW(M): the set of tokens allowed to follow immediately after some fragment matched by M.

    In other words: t ∈ FOLLOW(M) if and only if there exist (potentially empty) token sequences α, β, γ, δ where:

    • M matches β,
    • t matches γ, and
    • The concatenation α β γ δ is a parseable Rust program.

We use the shorthand ANYTOKEN to denote the set of all tokens (including simple NTs).

  • (For example, if any token is legal after a matcher M, then FOLLOW(M) = ANYTOKEN.)

(To review one’s understanding of the above informal descriptions, the reader at this point may want to jump ahead to the examples of FIRST/LAST before reading their formal definitions.)

FIRST, LAST

Below are formal inductive definitions for FIRST and LAST.

“A ∪ B” denotes set union, “A ∩ B” denotes set intersection, and “A \ B” denotes set difference (i.e. all elements of A that are not present in B).

FIRST(M), defined by case analysis on the sequence M and the structure of its first token-tree (if any):

  • if M is the empty sequence, then FIRST(M) = { ε },

  • if M starts with a token t, then FIRST(M) = { t },

    (Note: this covers the case where M starts with a delimited token-tree sequence, M = OPEN tt ... CLOSE ..., in which case t = OPEN and thus FIRST(M) = { OPEN }.)

    (Note: this critically relies on the property that no simple NT matches the empty fragment.)

  • Otherwise, M is a token-tree sequence starting with a complex NT: M = $( tt ... ) OP α, or M = $( tt ... ) SEP OP α, (where α is the (potentially empty) sequence of token trees for the rest of the matcher).

    • Let sep_set = { SEP } if SEP present; otherwise sep_set = {}.

    • If ε ∈ FIRST(tt ...), then FIRST(M) = (FIRST(tt ...) \ { ε }) ∪ sep_set ∪ FIRST(α)

    • Else if OP = *, then FIRST(M) = FIRST(tt ...) ∪ FIRST(α)

    • Otherwise (OP = +), FIRST(M) = FIRST(tt ...)

Note: The ε-case above,

FIRST(M) = (FIRST(tt ...) \ { ε }) ∪ sep_set ∪ FIRST(α)

may seem complicated, so let’s take a moment to break it down. In the ε case, the sequence tt ... may be empty. Therefore our first token may be SEP itself (if it is present), or it may be the first token of α; that’s why the result includes “sep_set ∪ FIRST(α)”. Note also that if α itself may match the empty fragment, then FIRST(α) will ensure that ε is included in our result; conversely, if α cannot match the empty fragment, then we must ensure that ε is not included in our result. These two facts together are why we can and should unconditionally remove ε from FIRST(tt ...).


LAST(M), defined by case analysis on M itself (a sequence of token-trees):

  • if M is the empty sequence, then LAST(M) = { ε }

  • if M is a singleton token t, then LAST(M) = { t }

  • if M is the singleton complex NT repeating zero or more times, M = $( tt ... ) *, or M = $( tt ... ) SEP *

    • Let sep_set = { SEP } if SEP present; otherwise sep_set = {}.

    • if ε ∈ LAST(tt ...) then LAST(M) = LAST(tt ...) ∪ sep_set

    • otherwise, the sequence tt ... must be non-empty; LAST(M) = LAST(tt ...) ∪ { ε }

  • if M is the singleton complex NT repeating one or more times, M = $( tt ... ) +, or M = $( tt ... ) SEP +

    • Let sep_set = { SEP } if SEP present; otherwise sep_set = {}.

    • if ε ∈ LAST(tt ...) then LAST(M) = LAST(tt ...) ∪ sep_set

    • otherwise, the sequence tt ... must be non-empty; LAST(M) = LAST(tt ...)

  • if M is a delimited token-tree sequence OPEN tt ... CLOSE, then LAST(M) = { CLOSE }

  • if M is a non-empty sequence of token-trees tt uu ...,

    • If ε ∈ LAST(uu ...), then LAST(M) = LAST(tt) ∪ (LAST(uu ...) \ { ε }).

    • Otherwise, the sequence uu ... must be non-empty; then LAST(M) = LAST(uu ...)

NOTE: The presence or absence of SEP is relevant to the above definitions, but solely in the case where the interior of the complex NT could be empty (i.e. ε ∈ FIRST(interior)). (I overlooked this fact in my first round of prototyping.)

NOTE: The above definition for LAST assumes that we keep our pre-existing rule that the separator token in a complex NT is solely for separating elements; i.e. that such NT’s do not match fragments that end with the separator token. If we choose to lift this restriction in the future, the above definition will need to be revised accordingly.

Examples of FIRST and LAST

Below are some examples of FIRST and LAST. (Note in particular how the special ε element is introduced and eliminated based on the interaction between the pieces of the input.)

Our first example is presented in a tree structure to elaborate on how the analysis of the matcher composes. (Some of the simpler subtrees have been elided.)

INPUT:  $(  $d:ident   $e:expr   );*    $( $( h )* );*    $( f ; )+   g
            ~~~~~~~~   ~~~~~~~                ~
                |         |                   |
FIRST:   { $d:ident }  { $e:expr }          { h }


INPUT:  $(  $d:ident   $e:expr   );*    $( $( h )* );*    $( f ; )+
            ~~~~~~~~~~~~~~~~~~             ~~~~~~~           ~~~
                       |                      |               |
FIRST:          { $d:ident }               { h, ε }         { f }

INPUT:  $(  $d:ident   $e:expr   );*    $( $( h )* );*    $( f ; )+   g
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~    ~~~~~~~~~~~~~~    ~~~~~~~~~   ~
                       |                       |              |       |
FIRST:        { $d:ident, ε }            {  h, ε, ;  }      { f }   { g }


INPUT:  $(  $d:ident   $e:expr   );*    $( $( h )* );*    $( f ; )+   g
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                        |
FIRST:                       { $d:ident, h, ;,  f }

Thus:

  • FIRST($($d:ident $e:expr );* $( $(h)* );* $( f ;)+ g) = { $d:ident, h, ;, f }

Note however that:

  • FIRST($($d:ident $e:expr );* $( $(h)* );* $($( f ;)+ g)*) = { $d:ident, h, ;, f, ε }

Here are similar examples but now for LAST.

  • LAST($d:ident $e:expr) = { $e:expr }
  • LAST($( $d:ident $e:expr );*) = { $e:expr, ε }
  • LAST($( $d:ident $e:expr );* $(h)*) = { $e:expr, ε, h }
  • LAST($( $d:ident $e:expr );* $(h)* $( f ;)+) = { ; }
  • LAST($( $d:ident $e:expr );* $(h)* $( f ;)+ g) = { g }

and again, changing the end part of matcher changes its last set considerably:

  • LAST($( $d:ident $e:expr );* $(h)* $($( f ;)+ g)*) = { $e:expr, ε, h, g }

FOLLOW(M)

Finally, the definition for FOLLOW(M) is built up incrementally atop more primitive functions.

We first assume a primitive mapping, FOLLOW(NT) (defined below) from a simple NT to the set of allowed tokens for the fragment specifier for that NT.

Second, we generalize FOLLOW to tokens: FOLLOW(t) = FOLLOW(NT) if t is (a simple) NT. Otherwise, t must be some other (non NT) token; in this case FOLLOW(t) = ANYTOKEN.

Finally, we generalize FOLLOW to arbitrary matchers by composing the primitive functions above:

FOLLOW(M) = FOLLOW(t1) ∩ FOLLOW(t2) ∩ ... ∩ FOLLOW(tN)
            where { t1, t2, ..., tN } = (LAST(M) \ { ε })

Examples of FOLLOW (expressed as equality relations between sets, to avoid incorporating details of FOLLOW(NT) in these examples):

  • FOLLOW($( $d:ident $e:expr )*) = FOLLOW($e:expr)
  • FOLLOW($( $d:ident $e:expr )* $(;)*) = FOLLOW($e:expr) ∩ ANYTOKEN = FOLLOW($e:expr)
  • FOLLOW($( $d:ident $e:expr )* $(;)* $( f |)+) = ANYTOKEN
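
A minimal sketch (not the actual rustc implementation) of how FOLLOW(M) composes from these pieces; the Tok and TokenSet types are hypothetical, and only the expr entry of FOLLOW(NT) (defined in the next section) is filled in:

use std::collections::HashSet;

#[derive(Clone, PartialEq, Eq, Hash)]
enum Tok {
    Epsilon,        // the distinguished ε element
    Plain(String),  // ordinary tokens
    Nt(String),     // simple NTs, tagged by their fragment specifier
}

// `None` stands for ANYTOKEN, the set of all tokens.
type TokenSet = Option<HashSet<Tok>>;

// Set intersection, treating `None` as the full set.
fn intersect(a: TokenSet, b: TokenSet) -> TokenSet {
    match (a, b) {
        (None, s) | (s, None) => s,
        (Some(x), Some(y)) => Some(x.intersection(&y).cloned().collect()),
    }
}

// FOLLOW(t): ANYTOKEN for non-NT tokens; a restricted set for simple NTs.
fn follow_token(t: &Tok) -> TokenSet {
    match t {
        Tok::Nt(spec) if spec == "expr" => Some(
            ["=>", ",", ";"].iter().map(|s| Tok::Plain(s.to_string())).collect(),
        ),
        // ...entries for the other fragment specifiers elided...
        _ => None,
    }
}

// FOLLOW(M) = FOLLOW(t1) ∩ ... ∩ FOLLOW(tN), where {t1, ..., tN} = LAST(M) \ {ε}.
fn follow_matcher(last_of_m: &HashSet<Tok>) -> TokenSet {
    last_of_m
        .iter()
        .filter(|t| **t != Tok::Epsilon)
        .map(follow_token)
        .fold(None, intersect)
}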

FOLLOW(NT)

Here is the definition for FOLLOW(NT), which maps every simple NT to the set of tokens that are allowed to follow it, based on the fragment specifier for the NT.

The current legal fragment specifiers are: item, block, stmt, pat, expr, ty, ident, path, meta, and tt.

  • FOLLOW(pat) = {FatArrow, Comma, Eq, Or, Ident(if), Ident(in)}
  • FOLLOW(expr) = {FatArrow, Comma, Semicolon}
  • FOLLOW(ty) = {OpenDelim(Brace), Comma, FatArrow, Colon, Eq, Gt, Semi, Or, Ident(as), Ident(where), OpenDelim(Bracket), Nonterminal(Block)}
  • FOLLOW(stmt) = FOLLOW(expr)
  • FOLLOW(path) = FOLLOW(ty)
  • FOLLOW(block) = any token
  • FOLLOW(ident) = any token
  • FOLLOW(tt) = any token
  • FOLLOW(item) = any token
  • FOLLOW(meta) = any token

(Note that close delimiters are valid following any NT.)

Examples of valid and invalid matchers

With the above specification in hand, we can present arguments for why particular matchers are legal and others are not.

  • ($ty:ty < foo ,) : illegal, because FIRST(< foo ,) = { < } ⊈ FOLLOW(ty)

  • ($ty:ty , foo <) : legal, because FIRST(, foo <) = { , } is ⊆ FOLLOW(ty).

  • ($pa:pat $pb:pat $ty:ty ,) : illegal, because FIRST($pb:pat $ty:ty ,) = { $pb:pat } ⊈ FOLLOW(pat), and also FIRST($ty:ty ,) = { $ty:ty } ⊈ FOLLOW(pat).

  • ( $($a:tt $b:tt)* ; ) : legal, because FIRST($b:tt) = { $b:tt } is ⊆ FOLLOW(tt) = ANYTOKEN, as is FIRST(;) = { ; }.

  • ( $($t:tt),* , $(t:tt),* ) : legal (though any attempt to actually use this macro will signal a local ambiguity error during expansion).

  • ($ty:ty $(; not sep)* -) : illegal, because FIRST($(; not sep)* -) = { ;, - } ⊈ FOLLOW(ty).

  • ($($ty:ty)-+) : illegal, because separator - is not in FOLLOW(ty).

Drawbacks

It does restrict the input to an MBE, but the choice of delimiters provides reasonable freedom and can be extended in the future.

Alternatives

  1. Fix the syntax that a fragment can parse. This would create a situation where a future MBE might not be able to accept certain inputs because the input uses newer features than the fragment that was fixed at 1.0. For example, in the bar MBE above, if the ty fragment was fixed before the unboxed closure sugar was introduced, the MBE would not be able to accept such a type. While this approach is feasible, it would cause unnecessary confusion for future users of MBEs when they can’t put certain perfectly valid Rust code in the input to an MBE. Versioned fragments could avoid this problem but only for new code.
  2. Keep macro_rules unstable. Given the great syntactical abstraction that macro_rules provides, it would be a shame for it to be unusable in a release version of Rust. If ever macro_rules were to be stabilized, this same issue would come up.
  3. Do nothing. This is very dangerous, and has the potential to essentially freeze Rust’s syntax for fear of accidentally breaking a macro.

Edit History

  • Updated by https://github.com/rust-lang/rfcs/pull/1209, which added semicolons into the follow set for types.

  • Updated by https://github.com/rust-lang/rfcs/pull/1384:

    • replaced detailed design with a specification-oriented presentation rather than an implementation-oriented algorithm.
    • fixed some oversights in the specification that led to matchers like $e:expr { stuff } being accepted (which match fragments like break { stuff }, significantly limiting future language extensions),
    • expanded the follows sets for ty to include OpenDelim(Brace), Ident(where), Or (since Rust’s grammar already requires all of |foo:TY| {}, fn foo() -> TY {} and fn foo() -> TY where {} to work).
    • expanded the follow set for pat to include Or (since Rust’s grammar already requires match (true,false) { PAT | PAT => {} } and |PAT| {} to work); see also RFC issue 1336. Also added If and In to follow set for pat (to make the specification match the old implementation).
  • Updated by https://github.com/rust-lang/rfcs/pull/1462, which added open square bracket into the follow set for types.

  • Updated by https://github.com/rust-lang/rfcs/pull/1494, which adjusted the follow set for types to include block nonterminals.

Appendices

Appendix A: Algorithm for recognizing valid matchers.

The detailed design above only sought to provide a specification for what a correct matcher is (by defining FIRST, LAST, and FOLLOW, and specifying the invariant relating FIRST and FOLLOW for all valid matchers).

The above specification can be implemented efficiently; we here give one example algorithm for recognizing valid matchers.

  • This is not the only possible algorithm; for example, one could precompute a table mapping every suffix of every token-tree sequence to its FIRST set, by augmenting FirstSet below accordingly.

    Or one could store a subset of such information during the precomputation, such as just the FIRST sets for complex NT’s, and then use that table to inform a forward scan of the input.

    The latter is in fact what my prototype implementation does; I must emphasize the point that the algorithm here is not prescriptive.

  • The intent of this RFC is that the specifications of FIRST and FOLLOW above will take precedence over this algorithm if the two are found to be producing inconsistent results.

The algorithm for recognizing valid matchers M is named ValidMatcher.

To define it, we will need a mapping from submatchers of M to the FIRST set for that submatcher; that is handled by FirstSet.

Procedure FirstSet(M)

input: a token tree M representing a matcher

output: FIRST(M)

Let M = tts[1] tts[2] ... tts[n].
Let curr_first = { ε }.

For i in n down to 1 (inclusive):
  Let tt = tts[i].

  1. If tt is a token, curr_first := { tt }

  2. Else if tt is a delimited sequence `OPEN uu ... CLOSE`,
     curr_first := { OPEN }

  3. Else tt is a complex NT `$(uu ...) SEP OP`

     Let inner_first = FirstSet(`uu ...`) i.e. recursive call

     if OP == `*` or ε ∈ inner_first then
         curr_first := curr_first ∪ inner_first
     else
         curr_first := inner_first

return curr_first

(Note: If we were precomputing a full table in this procedure, we would need a recursive invocation on (uu …) in step 2 of the for-loop.)
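To make the procedure concrete, here is a hedged Rust sketch of FirstSet over a simplified token-tree representation. The types Tok and TokenTree and the function first_set are invented for illustration; the compiler's real data structures differ.

use std::collections::HashSet;

#[derive(Clone, PartialEq, Eq, Hash)]
enum Tok {
    Token(String), // a concrete token or a simple NT such as `$e:expr`
    Epsilon,       // ε: the matcher can match the empty sequence
}

enum TokenTree {
    Token(String),                                 // `;`, `f`, `$e:expr`, ...
    Delimited(String, Vec<TokenTree>, String),     // OPEN uu ... CLOSE
    Complex(Vec<TokenTree>, Option<String>, char), // $( uu ... ) SEP OP
}

fn first_set(tts: &[TokenTree]) -> HashSet<Tok> {
    // curr_first starts as { ε } and is refined by a right-to-left scan.
    let mut curr_first = HashSet::from([Tok::Epsilon]);
    for tt in tts.iter().rev() {
        match tt {
            // 1. A lone token: FIRST is just that token.
            TokenTree::Token(t) => {
                curr_first = HashSet::from([Tok::Token(t.clone())]);
            }
            // 2. A delimited sequence: FIRST is the open delimiter.
            TokenTree::Delimited(open, _, _) => {
                curr_first = HashSet::from([Tok::Token(open.clone())]);
            }
            // 3. A complex NT: recurse, keeping the suffix's FIRST set
            //    whenever the repetition may match nothing.
            TokenTree::Complex(inner, _sep, op) => {
                let inner_first = first_set(inner); // recursive call
                if *op == '*' || inner_first.contains(&Tok::Epsilon) {
                    curr_first.extend(inner_first);
                } else {
                    curr_first = inner_first;
                }
            }
        }
    }
    curr_first
}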

Predicate ValidMatcher(M)

To simplify the specification, we assume in this presentation that all simple NT’s have a valid fragment specifier (i.e., one that has an entry in the FOLLOW(NT) table above).

This algorithm works by scanning forward across the matcher M = α β, where α is the prefix we have scanned so far and β is the suffix that remains to be scanned. We maintain LAST(α) as we scan, and use it to compute FOLLOW(α) and compare that to FIRST(β).

input: a token tree, M, and a set of tokens that could follow it, F.

output: LAST(M) (and also signals failure whenever M is invalid)

Let last_of_prefix = { ε }

Let M = tts[1] tts[2] ... tts[n].

For i in 1 up to n (inclusive):
  // For reference:
  // α = tts[1] .. tts[i]
  // β = tts[i+1] .. tts[n]
  // γ is some outer token sequence; the input F represents FIRST(γ)

  1. Let tt = tts[i].

  2. Let first_of_suffix; // aka FIRST(β γ)

  3. let S = FirstSet(tts[i+1] .. tts[n]);

  4. if ε ∈ S then
     // (include the follow information if necessary)

     first_of_suffix := S ∪ F

  5. else

     first_of_suffix := S

  6. Update last_of_prefix via case analysis on tt:

     a. If tt is a token:
        last_of_prefix := { tt }

     b. Else if tt is a delimited sequence `OPEN uu ... CLOSE`:

        i.  run ValidMatcher( M = `uu ...`, F = { `CLOSE` })

       ii. last_of_prefix := { `CLOSE` }

     c. Else, tt must be a complex NT,
        in other words, `NT = $( uu .. ) SEP OP` or `NT = $( uu .. ) OP`:

        i. If SEP present,
          let sublast = ValidMatcher( M = `uu ...`, F = first_of_suffix ∪ { `SEP` })

       ii. else:
          let sublast = ValidMatcher( M = `uu ...`, F = first_of_suffix)

      iii. If ε in sublast then:
           last_of_prefix := last_of_prefix ∪ (sublast \ ε)

       iv. Else:
           last_of_prefix := sublast

  7. At this point, last_of_prefix == LAST(α) and first_of_suffix == FIRST(β γ).

     For each simple NT token t in last_of_prefix:

     a. If first_of_suffix ⊆ FOLLOW(t), then we are okay so far.

     b. Otherwise, we have found a token t whose follow set is not compatible
        with the FIRST(β γ), and must signal failure.

// After running the above for loop on all of `M`, last_of_prefix == LAST(M)

Return last_of_prefix

This algorithm should be run on every matcher in every macro_rules invocation, with F = { EOF }. If it rejects a matcher, an error should be emitted and compilation should not complete.

Summary

Establish a convention throughout the core libraries for unsafe functions constructing references out of raw pointers. The goal is to improve usability while promoting awareness of possible pitfalls with inferred lifetimes.

Motivation

The current library convention on functions constructing borrowed values from raw pointers has the pointer passed by reference, whose lifetime is carried over to the return value. Unfortunately, the lifetime of a raw pointer is often not indicative of the lifetime of the pointed-to data. So the status quo eschews the flexibility of inferring the lifetime from the usage, while falling short of providing useful safety semantics in exchange.

A typical case where the lifetime needs to be adjusted is in bindings to a foreign library, when returning a reference to an object’s inner value (we know from the library’s API contract that the inner data’s lifetime is bound to the containing object):

impl Outer {
    fn inner_str(&self) -> &[u8] {
        unsafe {
            let p = ffi::outer_get_inner_str(&self.raw);
            let s = std::slice::from_raw_buf(&p, libc::strlen(p));
            std::mem::copy_lifetime(self, s)
        }
    }
}

Raw pointer casts also discard the lifetime of the original pointed-to value.

Detailed design

The signature of from_raw* constructors will be changed back to what it once was, passing a pointer by value:

unsafe fn from_raw_buf<'a, T>(ptr: *const T, len: uint) -> &'a [T]

The lifetime on the return value is inferred from the call context.

The current usage can be mechanically changed, while keeping an eye on possible lifetime leaks which need to be worked around by e.g. providing safe helper functions establishing lifetime guarantees, as described below.

Document the unsafety

In many cases, the lifetime parameter will come annotated or elided from the call context. The example above, adapted to the new convention, is safe despite lack of any explicit annotation:

impl Outer {
    fn inner_str(&self) -> &[u8] {
        unsafe {
            let p = ffi::outer_get_inner_str(&self.raw);
            std::slice::from_raw_buf(p, libc::strlen(p))
        }
    }
}

In other cases, the inferred lifetime will not be correct:

    let foo = unsafe { ffi::new_foo() };
    let s = unsafe { std::slice::from_raw_buf(foo.data, foo.len) };

    // Some lines later
    unsafe { ffi::free_foo(foo) };

    // More lines later
    let guess_what = s[0];
    // The lifetime of s is inferred to extend to the line above.
    // That code told you it's unsafe, didn't it?

Given that the function is unsafe, the code author should exercise due care when using it. However, the pitfall here is not readily apparent at the place where the invalid usage happens, so it can be easily committed by an inexperienced user, or inadvertently slipped in with a later edit.

To mitigate this, the documentation on the reference-from-raw functions should include caveats warning about possible misuse and suggesting ways to avoid it. When an ‘anchor’ object providing the lifetime is available, the best practice is to create a safe helper function or method, taking a reference to the anchor object as input for the lifetime parameter, like in the example above. The lifetime can also be explicitly assigned with std::mem::copy_lifetime or std::mem::copy_lifetime_mut, or annotated when possible.

Fix copy_mut_lifetime

To improve composability in cases when the lifetime does need to be assigned explicitly, the first parameter of std::mem::copy_mut_lifetime should be made an immutable reference. There is no reason for the lifetime anchor to be mutable: the pointer’s mutability is usually the relevant question, and it’s an unsafe function to begin with. This wart may breed tedious, mut-happy, or transmute-happy code, when e.g. a container providing the lifetime for a mutable view into its contents is not itself necessarily mutable.
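A hedged sketch of the adjusted signature (the body and the ?Sized bounds are illustrative; the historical function lived in std::mem and has since been removed):

// The anchor becomes an immutable borrow; only the reference being
// re-lifetimed stays mutable.
pub unsafe fn copy_mut_lifetime<'a, S: ?Sized, T: ?Sized>(
    _anchor: &'a S, // previously `&'a mut S`
    p: &mut T,
) -> &'a mut T {
    unsafe { &mut *(p as *mut T) }
}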

Drawbacks

The implicitly inferred lifetimes are unsafe in sneaky ways, so care is required when using the borrowed values.

Changing the existing functions is an API break.

Alternatives

An earlier revision of this RFC proposed adding a generic input parameter to determine the lifetime of the returned reference:

unsafe fn from_raw_buf<'a, T, U: Sized?>(ptr: *const T, len: uint,
                                         life_anchor: &'a U)
                                        -> &'a [T]

However, an object with a suitable lifetime is not always available in the context of the call. In line with the general trend in Rust libraries to favor composability, std::mem::copy_lifetime and std::mem::copy_lifetime_mut should be the principal methods to explicitly adjust a lifetime.

Unresolved questions

Should the change in function parameter signatures be done before 1.0?

Acknowledgements

Thanks to Alex Crichton for shepherding this proposal in a constructive and timely manner. He has in fact rationalized the convention in its present form.

Summary

Remove chaining of comparison operators (e.g. a == b == c) from the syntax. Instead, require extra parentheses ((a == b) == c).

Motivation

fn f(a: bool, b: bool, c: bool) -> bool {
    a == b == c
}

This code is currently accepted and is evaluated as ((a == b) == c). This may be confusing to programmers coming from languages like Python, where chained comparison operators are evaluated as (a == b && b == c).

In C, the same problem exists (and is exacerbated by implicit conversions). Style guides like MISRA C require the use of parentheses in this case.

By requiring the use of parentheses, we avoid potential confusion now, and open up the possibility for Python-like chained comparisons post-1.0.

Additionally, making the chain f < b > (c) invalid allows us to easily produce a diagnostic message: “Use ::< instead of < if you meant to specify type arguments.”, which would be a vast improvement over the current diagnostics for this mistake.

Detailed design

Emit a syntax error when a comparison operator appears as an operand of another comparison operator (without being surrounded by parentheses). The comparison operators are < > <= >= == and !=.

This is easily implemented directly in the parser.

Note that this restriction on accepted syntax will effectively merge the precedence level 4 (< > <= >=) with level 3 (== !=).
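For illustration, the accepted and rejected forms under this RFC would look as follows (the exact error message is up to the implementation):

fn f(a: i32, b: i32, c: bool) -> bool {
    // a < b == c     // would become a syntax error under this RFC
    (a < b) == c      // explicit parentheses keep the current meaning
}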

Drawbacks

It’s a breaking change.

In particular, code that currently uses the difference between precedence level 3 and 4 breaks and will require the use of parentheses:

if a < 0 == b < 0 { /* both negative or both non-negative */ }

However, I don’t think this kind of code sees much use. The rustc codebase doesn’t seem to have any occurrences of chained comparisons.

Alternatives

As this RFC just makes the chained comparison syntax available for post-1.0 language features, pretty much every alternative (including returning to the status quo) can still be implemented later.

If this RFC is not accepted, it will be impossible to add Python-style chained comparison operators later.

A variation on this RFC would be to keep the separation between precedence level 3 and 4, and only reject programs where a comparison operator appears as an operand of another comparison operator of the same precedence level. This minimizes the breaking changes, but does not allow full Python-style chained comparison operators in the future (although a more limited form of them would still be possible).

Unresolved questions

Is there real code that would get broken by this change? So far, I’ve been unable to find any.

  • Start Date: 2014-06-30
  • RFC PR #: https://github.com/rust-lang/rfcs/pull/560
  • Rust Issue #: https://github.com/rust-lang/rust/issues/22020

Summary

Change the semantics of the built-in fixed-size integer types from being defined as wrapping around on overflow to it being considered a program error (but not undefined behavior in the C sense). Implementations are permitted to check for overflow at any time (statically or dynamically). Implementations are required to at least check dynamically when debug_assert! assertions are enabled. Add a WrappingOps trait to the standard library with operations defined as wrapping on overflow for the limited number of cases where this is the desired semantics, such as hash functions.

Motivation

Numeric overflow presents a difficult situation. On the one hand, overflow (and underflow) is known to be a common source of error in other languages. Rust, at least, does not have to worry about memory safety violations, but it is still possible for overflow to lead to bugs. Moreover, Rust’s safety guarantees do not apply to unsafe code, which carries the same risks as C code when it comes to overflow. Unfortunately, banning overflow outright is not feasible at this time. Detecting overflow statically is not practical, and detecting it dynamically can be costly. Therefore, we have to steer a middle ground.

The RFC has several major goals:

  1. Ensure that code which intentionally uses wrapping semantics is clearly identified.
  2. Help users to identify overflow problems and help those who wish to be careful about overflow to do so.
  3. Ensure that users who wish to detect overflow can safely enable overflow checks and dynamic analysis, both on their code and on libraries they use, with a minimal risk of “false positives” (intentional overflows leading to a panic).
  4. To the extent possible, leave room in the future to move towards universal overflow checking if it becomes feasible. This may require opt-in from end-users.

To that end the RFC proposes two mechanisms:

  1. Optional, dynamic overflow checking. Ordinary arithmetic operations (e.g., a+b) would conditionally check for overflow. If an overflow occurs when checking is enabled, a thread panic will be signaled. Specific intrinsics and library support are provided to permit either explicit overflow checks or explicit wrapping.
  2. Overflow checking would be, by default, tied to debug assertions (debug_assert!). It can be seen as analogous to a debug assertion: an important safety check that is too expensive to perform on all code.

We expect that additional and finer-grained mechanisms for enabling overflow checks will be added in the future. One easy option is a command-line switch to enable overflow checking universally or within specific crates. Another option might be lexically scoped annotations to enable (or perhaps disable) overflow checking in specific blocks. Neither mechanism is detailed in this RFC at this time.

Why tie overflow checking to debug assertions

The reasoning behind connecting overflow checking and debug assertion is that it ensures that pervasive checking for overflow is performed at some point in the development cycle, even if it does not take place in shipping code for performance reasons. The goal of this is to prevent “lock-in” where code has a de-facto reliance on wrapping semantics, and thus incorrectly breaks when stricter checking is enabled.

We would like to allow people to switch “pervasive” overflow checks on by default, for example. However, if the default is not to check for overflow, then it seems likely that a pervasive check like that could not be used, because libraries are sure to come to rely on wrapping semantics, even if accidentally.

By making the default for debugging code be checked overflow, we help ensure that users will encounter overflow errors in practice, and thus become aware that overflow in Rust is not the norm. It will also help debug simple errors, like unsigned underflow leading to an infinite loop.

Detailed design

Arithmetic operations with error conditions

There are various operations which can sometimes produce error conditions (detailed below). Typically these error conditions correspond to under/overflow but not exclusively. It is the programmer’s responsibility to avoid these error conditions: any failure to do so can be considered a bug, and hence can be flagged by static/dynamic analysis tools as an error. This is largely a semantic distinction, though.

The result of an error condition depends upon the state of overflow checking, which can be either enabled or default (this RFC does not describe a way to disable overflow checking completely). If overflow checking is enabled, then an error condition always results in a panic. For efficiency reasons, this panic may be delayed over some number of pure operations, as described below.

If overflow checking is default, that means that erroneous operations will produce a value as specified below. Note though that code which encounters an error condition is still considered buggy. In particular, Rust source code (in particular library code) cannot rely on wrapping semantics, and should always be written with the assumption that overflow checking may be enabled. This is because overflow checking may be enabled by a downstream consumer of the library.

In the future, we could add some way to explicitly disable overflow checking in a scoped fashion. In that case, the result of each error condition would simply be its defined value (the same as when no panic occurs), and this would override requests for overflow checking specified elsewhere. However, no mechanism for disabling overflow checks is provided by this RFC: instead, it is recommended that authors use the wrapped primitives.

The error conditions that can arise, and their defined results, are as follows. The intention is that the defined results are the same as the defined results today. The only change is that now a panic may result.

  • The operations +, -, *, can underflow and overflow. When checking is enabled this will panic. When checking is disabled this will two’s complement wrap.
  • The operations /, % for the arguments INT_MIN and -1 will unconditionally panic. This is unconditional for legacy reasons.
  • Shift operations (<<, >>) on a value of width N can be passed a shift value >= N. It is unclear what behaviour should result from this, so the shift value is unconditionally masked to be modulo N to ensure that the argument is always in range.

Enabling overflow checking

Compilers should present a command-line option to enable overflow checking universally. Additionally, when building in a default “debug” configuration (i.e., whenever debug_assert would be enabled), overflow checking should be enabled by default, unless the user explicitly requests otherwise. The precise control of these settings is not detailed in this RFC.

The goal of this rule is to ensure that, during debugging and normal development, overflow detection is on, so that users can be alerted to potential overflow (and, in particular, for code where overflow is expected and normal, they will be immediately guided to use the wrapping methods introduced below). However, because these checks will be compiled out whenever an optimized build is produced, final code will not pay a performance penalty.

In the future, we may add additional means to control when overflow is checked, such as scoped attributes or a global, independent compile-time switch.

Delayed panics

If an error condition should occur and a thread panic should result, the compiler is not required to signal the panic at the precise point of overflow. It is free to coalesce checks from adjacent pure operations. Panics may never be delayed across an unsafe block nor may they be skipped entirely, however. The precise details of how panics may be deferred – and the definition of a pure operation – can be hammered out over time, but the intention here is that, at minimum, overflow checks for adjacent numeric operations like a+b-c can be coalesced into a single check. Another useful example might be that, when summing a vector, the final overflow check could be deferred until the summation is complete.

Methods for explicit wrapping arithmetic

For those use cases where explicit wraparound on overflow is required, such as hash functions, we must provide operations with such semantics. We accomplish this by providing the following methods, defined in the inherent impls for the various integral types.

impl i32 { // and i8, i16, i64, isize, u8, u32, u64, usize
    fn wrapping_add(self, rhs: Self) -> Self;
    fn wrapping_sub(self, rhs: Self) -> Self;
    fn wrapping_mul(self, rhs: Self) -> Self;
    fn wrapping_div(self, rhs: Self) -> Self;
    fn wrapping_rem(self, rhs: Self) -> Self;

    fn wrapping_lshift(self, amount: u32) -> Self;
    fn wrapping_rshift(self, amount: u32) -> Self;
}

These are implemented to preserve the pre-existing, wrapping semantics unconditionally.
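These methods exist in today’s std (the shift variants were ultimately stabilized under the names wrapping_shl and wrapping_shr); a small usage sketch:

fn main() {
    let x: u8 = 255;
    // With overflow checks enabled (e.g. a debug build), `x + 1` panics;
    // the wrapping method opts into two's complement wraparound explicitly.
    assert_eq!(x.wrapping_add(1), 0);

    // INT_MIN / -1 panics unconditionally; the wrapping variant defines it.
    assert_eq!(i32::MIN.wrapping_div(-1), i32::MIN);
}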

Wrapping<T> type for convenience

For convenience, the std::num module also provides a Wrapping<T> newtype for which the operator overloads are implemented using the WrappingOps trait:

pub struct Wrapping<T>(pub T);

impl<T: WrappingOps> Add<Wrapping<T>, Wrapping<T>> for Wrapping<T> {
    fn add(&self, other: &Wrapping<T>) -> Wrapping<T> {
        self.wrapping_add(*other)
    }
}

// Likewise for `Sub`, `Mul`, `Div`, and `Rem`

Note that this is only for potential convenience. The type-based approach has the drawback that e.g. Vec<int> and Vec<Wrapping<int>> are incompatible types.
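Using the Wrapping newtype as it exists in std::num today:

use std::num::Wrapping;

fn main() {
    let a = Wrapping(250u8);
    let b = Wrapping(10u8);
    // The overloaded operator delegates to wrapping arithmetic.
    assert_eq!((a + b).0, 4);
}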

Lint

In general it seems inadvisable to use operations with error conditions (like a naked + or -) in unsafe code. It would be better to use explicit checked or wrapped operations as appropriate. The same holds for destructors, since unwinding in destructors is inadvisable. Therefore, the RFC recommends a lint be added against such operations, defaulting to warn, though the details (such as the name of this lint) are not spelled out.

Drawbacks

Making choices is hard. Having to think about whether wraparound arithmetic is appropriate may cause an increased cognitive burden. However, wraparound arithmetic is almost never the intended behavior. Therefore, programmers should be able to keep using the built-in integer types and to not think about this. Where wraparound semantics are required, it is generally a specialized use case with the implementor well aware of the requirement.

Loss of additive commutativity and benign overflows. In some cases, overflow behavior can be benign. For example, given an expression like a+b-c, intermediate overflows are not harmful so long as the final result is within the range of the integral type. To take advantage of this property, code would have to be written to use the wrapping constructs, such as a.wrapping_add(b).wrapping_sub(c). However, this drawback is counterbalanced by the large number of arithmetic expressions which do not have the same behavior when overflow occurs. A common example is (max+min)/2, which is a typical ingredient for binary searches and the like and can lead to very surprising behavior. Moreover, the use of wrapping_add and wrapping_sub to highlight the fact that the intermediate result may overflow seems potentially useful to an end-reader.

Danger of triggering additional panics from within unsafe code. This proposal creates more possibility for panics to occur, at least when checks are enabled. As usual, a panic at an inopportune time can lead to bugs if code is not exception safe. This is particularly worrisome in unsafe code, where crucial safety guarantees can be violated. However, this danger already exists, as there are numerous ways to trigger a panic, and hence unsafe code must be written with this in mind. It seems like the best advice is for unsafe code to eschew the plain + and - operators, and instead prefer explicit checked or wrapping operations as appropriate (hence the proposed lint). Furthermore, the danger of an unexpected panic occurring in unsafe code must be weighed against the danger of a (silent) overflow, which can also lead to unsafety.

Divergence of debug and optimized code. The proposal here causes additional divergence of debug and optimized code, since optimized code will not include overflow checking. It would therefore be recommended that robust applications run tests both with and without optimizations (and debug assertions). That said, this state of affairs already exists. First, the use of debug_assert! causes debug/optimized code to diverge, but also, optimizations are known to cause non-trivial changes in behavior. For example, recursive (but pure) functions may be optimized away entirely by LLVM. Therefore, it always makes sense to run tests in both modes. This situation is not unique to Rust; most major projects do something similar. Moreover, in most languages, debug_assert! is in fact the only (or at least predominant) kind of assertion, and hence the need to run tests both with and without assertions enabled is even stronger.

Benchmarking. Someone may conduct a benchmark of Rust with overflow checks turned on, post it to the Internet, and mislead the audience into thinking that Rust is a slow language. The choice of defaults minimizes this risk, however, since doing an optimized build in cargo (which ought to be a prerequisite for any benchmark) also disables debug assertions (or ought to).

Impact of overflow checking on optimization. In addition to the direct overhead of checking for overflow, there is some additional overhead when checks are enabled because compilers may have to forego other optimizations or code motion that might have been legal. This concern seems minimal since, in optimized builds, overflow checking will not be enabled. Certainly if we ever decided to change the default for overflow checking to enabled in optimized builds, we would want to measure carefully and likely include some means of disabling checks in particularly hot paths.

Alternatives and possible future directions

Do nothing for now

Defer any action until later, as advocated by:

Reasons this was not pursued: The proposed changes are relatively well-contained. Doing this after 1.0 would require either breaking existing programs which rely on wraparound semantics, or introducing an entirely new set of integer types and porting all code to use those types, whereas doing it now lets us avoid needlessly proliferating types. Given the paucity of circumstances where wraparound semantics is appropriate, having it be the default is defensible only if better options aren’t available.

Scoped attributes to control runtime checking

The original RFC proposed a system of scoped attributes for enabling/disabling overflow checking. Nothing in the current RFC precludes us from going in this direction in the future. Rather, this RFC is attempting to answer the question (left unanswered in the original RFC) of what the behavior ought to be when no attribute is in scope.

The proposal for scoped attributes in the original RFC was as follows. Introduce an overflow_checks attribute which can be used to turn runtime overflow checks on or off in a given scope. #[overflow_checks(on)] turns them on, #[overflow_checks(off)] turns them off. The attribute can be applied to a whole crate, a module, an fn, or (as per RFC 40) a given block or a single expression. When applied to a block, this is analogous to the checked { } blocks of C#. As with lint attributes, an overflow_checks attribute on an inner scope or item will override the effects of any overflow_checks attributes on outer scopes or items. Overflow checks can, in fact, be thought of as a kind of run-time lint. Where overflow checks are in effect, overflow with the basic arithmetic operations and casts on the built-in fixed-size integer types will invoke task failure. Where they are not, the checks are omitted, and the result of the operations is left unspecified (but will most likely wrap).

Significantly, turning overflow_checks on or off should only produce an observable difference in the behavior of the program, beyond the time it takes to execute, if the program has an overflow bug.

It should also be emphasized that overflow_checks(off) only disables runtime overflow checks. Compile-time analysis can and should still be performed where possible. Perhaps the name could be chosen to make this more obvious, such as runtime_overflow_checks, but that starts to get overly verbose.

Illustration of use:

// checks are on for this crate
#![overflow_checks(on)]

// but they are off for this module
#[overflow_checks(off)]
mod some_stuff {

    // but they are on for this function
    #[overflow_checks(on)]
    fn do_thing() {
        ...

        // but they are off for this block
        #[overflow_checks(off)] {
            ...
            // but they are on for this expression
            let n = #[overflow_checks(on)] (a * b + c);
            ...
        }

        ...
    }

    ...
}

...

Checks off means wrapping on

If we adopted a model of overflow checks, one could use an explicit request to turn overflow checks off as a signal that wrapping is desired. This would allow us to do without the WrappingOps trait and to avoid having unspecified results. See:

Reasons this was not pursued: The official semantics of a type should not change based on the context. It should be possible to make the choice between turning checks on or off solely based on performance considerations. It should be possible to distinguish cases where checking was too expensive from where wraparound was desired. (Wraparound is not usually desired.)

Different operators

Have the usual arithmetic operators check for overflow, and introduce a new set of operators with wraparound semantics, as done by Swift. Alternately, do the reverse: make the normal operators wrap around, and introduce new ones which check.

Reasons this was not pursued: New, strange operators would pose an entrance barrier to the language. The use cases for wraparound semantics are not common enough to warrant having a separate set of symbolic operators.

Different types

Have separate sets of fixed-size integer types which wrap around on overflow and which are checked for overflow (e.g. u8, u8c, i8, i8c, …).

Reasons this was not pursued: Programmers might be confused by having to choose among so many types. Using different types would introduce compatibility hazards to APIs. Vec<u8> and Vec<u8c> are incompatible. Wrapping arithmetic is not common enough to warrant a whole separate set of types.

Just use Checked*

Just use the existing Checked traits and a Checked<T> type after the same fashion as the Wrapping<T> in this proposal.

Reasons this was not pursued: Wrong defaults. Doesn’t enable distinguishing “checking is slow” from “wrapping is desired” from “it was the default”.

Runtime-closed range types

As proposed by Bill Myers.

Reasons this was not pursued: My brain melted. :(

Making as be checked

The RFC originally specified that using as to convert between types would cause checked semantics. However, we now use as as a primitive type operator. This decision was discussed on the discuss message board.

The key points in favor of reverting as to its original semantics were:

  1. as is already a fairly low-level operator that can be used (for example) to convert between *mut T and *mut U.
  2. as is the only way to convert types in constants, and hence it is important that it covers all possibilities that constants might need (eventually, const fn or other approaches may change this, but those are not going to be stable for 1.0).
  3. The type ascription RFC set the precedent that as is used for “dangerous” coercions that require care.
  4. Eventually, checked numeric conversions (and perhaps most or all uses of as) can be ergonomically added as methods. The precise form of this will be resolved in the future. const fn can then allow these to be used in constant expressions.

Unresolved questions

None today (see Updates section below).

Future work

  • Look into adopting imprecise exceptions and a similar design to Ada’s, and to what is explored in the research on AIR (As Infinitely Ranged) semantics, to improve the performance of checked arithmetic. See also:

  • Make it easier to use integer types of unbounded size, i.e. actual mathematical integers and naturals.

Updates since being accepted

Since it was accepted, the RFC has been updated as follows:

  1. The wrapping methods were moved to be inherent, since we gained the capability for libstd to declare inherent methods on primitive integral types.
  2. as was changed to restore the behavior before the RFC (that is, it truncates to the target bitwidth and reinterprets the highest order bit, a.k.a. sign-bit, as necessary, as a C cast would).
  3. Shifts were specified to mask off the bits of over-long shifts.
  4. Overflow was specified to be two’s complement wrapping (this was mostly a clarification).
  5. INT_MIN / -1 and INT_MIN % -1 panic.

Acknowledgements and further reading

This RFC was initially written by Gábor Lehel and was since edited by Nicholas Matsakis into its current form. Although the text has changed significantly, the spirit of the original is preserved (at least in our opinion). The primary changes from the original are:

  1. Define the results of errors in some cases rather than using undefined values.
  2. Move discussion of scoped attributes to the “future directions” section.
  3. Define defaults for when overflow checking is enabled.

Many aspects of this proposal and many of the ideas within it were influenced and inspired by a discussion on the rust-dev mailing list. The author is grateful to everyone who provided input, and would like to highlight the following messages in particular as providing motivation for the proposal.

On the limited use cases for wrapping arithmetic:

On the value of distinguishing where overflow is valid from where it is not:

The idea of scoped attributes:

On the drawbacks of a type-based approach:

In general:

Further credit is due to the commenters in the GitHub discussion thread.

Summary

Remove official support for the ndebug config variable, replace the current usage of it with a more appropriate debug_assertions compiler-provided config variable.

Motivation

The usage of ‘ndebug’ to indicate a release build is a strange holdover from C/C++. It is not used much and is easy to forget about. Since it is used like any other value passed to the cfg flag, it does not interact with other flags such as -g or -O.

The only current users of ndebug are the implementations of the debug_assert! macro. At the time of this writing, integer overflow checking will also be controlled by this variable. Since the optimisation setting does not influence ndebug, this means that code that the user expects to be optimised will still contain the overflow checking logic. Similarly, debug_assert! invocations are not removed, contrary to what intuition would suggest. Enabling optimisations should be seen as a request to make the user’s code faster; removing debug_assert! and other checks seems like a natural consequence.

Detailed design

The debug_assertions configuration variable, the replacement for the ndebug variable, will be compiler provided based on the value of the opt-level codegen flag, including the implied value from -O. Any value higher than 0 will disable the variable.

Another codegen flag debug-assertions will override this, forcing it on or off based on the value passed to it.
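A usage sketch with today’s compiler: the variable can be queried from code with cfg!, and the codegen flag forces it on or off regardless of opt-level.

fn main() {
    if cfg!(debug_assertions) {
        println!("debug assertions are enabled");
    }
}

// $ rustc -C opt-level=2 main.rs                          (variable off)
// $ rustc -C opt-level=2 -C debug-assertions=on main.rs   (forced on)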

Drawbacks

Technically backwards incompatible change. However the only usage of the ndebug variable in the rust tree is in the implementation of debug_assert!, so it’s unlikely that any external code is using it.

Alternatives

No real alternatives beyond different names and defaults.

Unresolved questions

From the RFC discussion there remain some unresolved details:

  • brson writes, “I have a minor concern that -C debug-assertions might not be the right place for this command line flag - it doesn’t really affect code generation, at least in the current codebase (also --cfg debug_assertions has the same effect).”.
  • huonw writes, “It seems like the flag could be more than just a boolean, but rather take a list of what to enable to allow fine-grained control, e.g. none, overflow-checks, debug_cfg,overflow-checks, all. (Where -C debug-assertions=debug_cfg acts like --cfg debug.)”.
  • huonw writes, “if we want this to apply to more than just debug_assert do we want to use a name other than -C debug-assertions?”.

Summary

A recent RFC split what was previously fmt::Show into two traits, fmt::Show and fmt::String, with format specifiers {:?} and {} respectively.

That RFC did not, however, establish complete conventions for when to implement which of the traits, nor what is expected from the output. That’s what this RFC seeks to do.

It turns out that, due to the suggested conventions and other concerns, renaming the traits is also desirable.

Motivation

Part of the reason for splitting up Show in the first place was some tension around the various use cases it was trying to cover, and the fact that it could not cover them all simultaneously. Now that the trait has been split, this RFC aims to provide clearer guidelines about their use.

Detailed design

The design of the conventions stems from two basic desires:

  1. It should be easy to generate a debugging representation of essentially any type.

  2. It should be possible to create user-facing text output via convenient interpolation.

Part of the premise behind (2) is that user-facing output cannot automatically be “composed” from smaller pieces of user-facing output (via, say, #[derive]). Most of the time when you’re preparing text for user consumption, the output needs to be quite tailored, and interpolation via format is a good tool for that job.

As part of the conventions being laid out here, the RFC proposes to:

  1. Rename fmt::Show to fmt::Debug, and
  2. Rename fmt::String to fmt::Display.

Debugging: fmt::Debug

The fmt::Debug trait is intended for debugging. It should:

  • Be implemented on every type, usually via #[derive(Debug)].
  • Never panic.
  • Escape away control characters.
  • Introduce quotes and other delimiters as necessary to give a clear representation of the data involved.
  • Focus on the runtime aspects of a type; repeating information such as suffixes for integer literals is not generally useful since that data is readily available from the type definition.

In terms of the output produced, the goal is to make it easy to make sense of compound data of various kinds without overwhelming debugging output with every last bit of type information – most of which is readily available from the source. The following rules give rough guidance:

  • Scalars print as unsuffixed literals.
  • Strings print as normal quoted notation, with escapes.
  • Smart pointers print as whatever they point to (without further annotation).
  • Fully public structs print as you’d normally construct them: MyStruct { f1: ..., f2: ... }
  • Enums print as you’d construct their variants (possibly with special cases for things like Option and single-variant enums?).
  • Containers print using some notation that makes their type and contents clear. (Since we lack literals for all container types, this will be ad hoc).

It is not a requirement for the debugging output to be valid Rust source. This is in general not possible in the presence of private fields and other abstractions. However, when it is feasible to do so, debugging output should match Rust syntax; doing so makes it easier to copy debug output into unit tests, for example.
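Using today’s names, these rules play out as follows:

#[derive(Debug)]
struct MyStruct {
    f1: u32,
    f2: String,
}

fn main() {
    let s = MyStruct { f1: 1, f2: "hi".to_string() };
    // Prints: MyStruct { f1: 1, f2: "hi" }
    // (quoted, escaped string; unsuffixed scalar; construction-like syntax)
    println!("{:?}", s);
}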

User-facing: fmt::Display

The fmt::Display trait is intended for user-facing output. It should:

  • Be implemented for scalars, strings, and other basic types.
  • Be implemented for generic wrappers like Option<T> or smart pointers, where the output can be wholly delegated to a single fmt::Display implementation on the underlying type.
  • Not be implemented for generic containers like Vec<T> or even Result<T, E>, where there is no useful, general way to tailor the output for user consumption.
  • Be implemented for specific user-defined types as useful for an application, with application-defined user-facing output. In particular, applications will often make their types implement fmt::Display specifically for use in format interpolation.
  • Never panic.
  • Avoid quotes, escapes, and so on unless specifically desired for a user-facing purpose.
  • Require use of an explicit adapter (like the display method in Path) when it potentially loses significant information.

A common pattern for fmt::Display is to provide simple “adapters”, which are types wrapping another type for the sole purpose of formatting in a certain style or context. For example:

pub struct ForHtml<'a, T>(&'a T);
pub struct ForCli<'a, T>(&'a T);

impl MyInterestingType {
    fn for_html(&self) -> ForHtml<MyInterestingType> { ForHtml(self) }
    fn for_cli(&self) -> ForCli<MyInterestingType> { ForCli(self) }
}

impl<'a> fmt::Display for ForHtml<'a, MyInterestingType> { ... }
impl<'a> fmt::Display for ForCli<'a, MyInterestingType> { ... }
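A filled-in sketch of the same pattern, made non-generic for brevity (the HTML rendering is illustrative):

use std::fmt;

struct MyInterestingType {
    name: String,
}

struct ForHtml<'a>(&'a MyInterestingType);

impl<'a> fmt::Display for ForHtml<'a> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Tailor the output for an HTML context.
        write!(f, "<span>{}</span>", self.0.name)
    }
}

fn main() {
    let t = MyInterestingType { name: "widget".to_string() };
    assert_eq!(ForHtml(&t).to_string(), "<span>widget</span>");
}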

Rationale for format specifiers

Given the above conventions, it should be clear that fmt::Debug is much more commonly implemented on types than fmt::Display. Why, then, use {} for fmt::Display and {:?} for fmt::Debug? Aren’t those the wrong defaults?

There are two main reasons for this choice:

  • Debugging output usually makes very little use of interpolation. In general, one is typically using #[derive(Show)] or format!("{:?}", something_to_debug), and the latter is better done via more direct convenience.

  • When creating tailored string output via interpolation, the expected “default” formatting for things like strings is unquoted and unescaped. It would be surprising if the default specifiers below did not yield “hello, world!” as the output string.

    format!("{}, {}!", "hello", "world")

In other words, although more types implement fmt::Debug, most meaningful uses of interpolation (other than in such implementations) will use fmt::Display, making {} the right choice.

Use in errors

Right now, the (unstable) Error trait comes equipped with a description method yielding an Option<String>. This RFC proposes to drop this method and instead inherit from fmt::Display. It likewise proposes to make unwrap in Result depend on and use fmt::Display rather than fmt::Debug.

The reason in both cases is the same: although errors are often thought of in terms of debugging, the messages they result in are often presented directly to the user and should thus be tailored. Tying them to fmt::Display makes it easier to remember and add such tailoring, and less likely to spew a lot of unwanted internal representation.

Alternatives

We’ve already explored an alternative where Show tries to play both of the roles above, and found it to be problematic. There may, however, be alternative conventions for a multi-trait world. The RFC author hopes this will emerge from the discussion thread.

Unresolved questions

(Previous questions here have been resolved in an RFC update).

  • Start Date: 2015-01-11
  • RFC PR: #572
  • Rust Issue: #22203

Summary

Feature gate unused attributes for backwards compatibility.

Motivation

Interpreting the current backwards compatibility rules strictly, it’s not possible to add any further language features that use new attributes. For example, if we wish to add a feature that expands the attribute #[awesome_deriving(Encodable)] into an implementation of Encodable, any existing code that contains uses of the #[awesome_deriving] attribute might be broken. While such attributes are useless in release 1.0 code (since syntax extensions aren’t allowed yet), we still have a case of code that stops compiling after an update of a release build.

Detailed design

We add a feature gate, custom_attribute, that disallows the use of any attributes not defined by the compiler or consumed in any other way.

This is achieved by elevating the unused_attribute lint to a feature gate check (with the gate open, it reverts to being a lint). We’d also need to ensure that it runs after all the other lints (currently it runs as part of the main lint check and might warn about attributes which are actually consumed by other lints later on).

Eventually, we can try for a namespacing system as described below, however with unused attributes feature gated, we need not worry about it until we start considering stabilizing plugins.
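A sketch of how the gate would look to a nightly user (the attribute and the error wording are illustrative):

// Without this feature gate, the unknown attribute below is rejected.
#![feature(custom_attribute)]

#[awesome_deriving(Encodable)] // not consumed by the compiler or any lint
struct Foo;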

Drawbacks

I don’t see much of a drawback (except that the alternatives below might be more lucrative). This might make it harder for people who wish to use custom attributes for static analysis in 1.0 code.

Alternatives

Forbid #[rustc_*] and #[rustc(...)] attributes

(This was the original proposal in the RFC)

This is less restrictive for the user, but it restricts us to a form of namespacing for any future attributes which we may wish to introduce. This is suboptimal, since by the time plugins stabilize (which is when user-defined attributes become useful for release code) we may add many more attributes to the compiler and they will all have cumbersome names.

Do nothing

If we do nothing we can still manage to add new attributes, however we will need to invent new syntax for it. This will probably be in the form of basic namespacing support (#[rustc::awesome_deriving]) or arbitrary token tree support (the use case will probably still end up looking something like #[rustc::awesome_deriving])

This has the drawback that the attribute parsing and representation will need to be overhauled before being able to add any new attributes to the compiler.

Unresolved questions

Which proposal to use — disallowing #[rustc_*] and #[rustc] attributes, or just #[forbid(unused_attribute)]ing everything.

The name of the feature gate could perhaps be improved.

  • Start Date: 2015-01-12
  • RFC PR #: https://github.com/rust-lang/rfcs/pull/574
  • Rust Issue #: https://github.com/rust-lang/rust/issues/23055

Summary

Replace Vec::drain by a method that accepts a range parameter. Add String::drain with similar functionality.

Motivation

Allowing a range parameter is strictly more powerful than the current version. E.g., see the following implementations of some Vec methods via the hypothetical drain_range method:

fn truncate(x: &mut Vec<u8>, len: usize) {
    if len <= x.len() {
        x.drain_range(len..);
    }
}

fn remove(x: &mut Vec<u8>, index: usize) -> u8 {
    x.drain_range(index).next().unwrap()
}

fn pop(x: &mut Vec<u8>) -> Option<u8> {
    match x.len() {
        0 => None,
        n => x.drain_range(n-1).next()
    }
}

fn drain(x: &mut Vec<u8>) -> DrainRange<u8> {
    x.drain_range(0..)
}

fn clear(x: &mut Vec<u8>) {
    x.drain_range(0..);
}

With optimization enabled, those methods will produce code that runs as fast as the current versions. (They should not be implemented this way.)

In particular, this method allows the user to remove a slice from a vector in O(Vec::len) instead of O(Slice::len * Vec::len).

Detailed design

Remove Vec::drain and add the following method:

/// Creates a draining iterator that clears the specified range in the Vec and
/// iterates over the removed items from start to end.
///
/// # Panics
///
/// Panics if the range is decreasing or if the upper bound is larger than the
/// length of the vector.
pub fn drain<T: Trait>(&mut self, range: T) -> /* ... */;

Where Trait is some trait that is implemented for at least Range<usize>, RangeTo<usize>, RangeFrom<usize>, FullRange, and usize.

The precise nature of the return value is to be determined during implementation and may or may not depend on T.

Add String::drain:

/// Creates a draining iterator that clears the specified range in the String
/// and iterates over the characters contained in the range.
///
/// # Panics
///
/// Panics if the range is decreasing, if the upper bound is larger than the
/// length of the String, or if the start and the end of the range don't lie on
/// character boundaries.
pub fn drain<T: Trait>(&mut self, range: T) -> /* ... */;

Where Trait and the return value are as above but need not be the same.
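As the methods eventually stabilized (with the range trait being std::ops::RangeBounds), usage looks like this:

fn main() {
    let mut v = vec![1, 2, 3, 4, 5];
    let drained: Vec<i32> = v.drain(1..4).collect();
    assert_eq!(drained, [2, 3, 4]);
    assert_eq!(v, [1, 5]);

    let mut s = String::from("αβ hello");
    // The range must fall on character boundaries: 'α' and 'β' are two bytes each.
    s.drain(..4);
    assert_eq!(s, " hello");
}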

Drawbacks

  • The function signature differs from other collections.
  • It’s not clear from the signature that .. can be used to get the old behavior.
  • The trait documentation will link to the std::ops module. It’s not immediately apparent how the types in there are related to the N..M syntax.
  • Some of these problems can be mitigated by solid documentation of the function itself.

Summary

Rename (maybe one of) the standard collections, so as to make the names more consistent. Currently, among all the alternatives, renaming BinaryHeap to BinHeap is the slightly preferred solution.

Motivation

In this comment in the Rust 1.0.0-alpha announcement thread in /r/programming, it was pointed out that Rust’s std collections had inconsistent names. Particularly, the abbreviation rules of the names seemed unclear.

The current collection names (and their longer versions) are:

  • Vec -> Vector
  • BTreeMap
  • BTreeSet
  • BinaryHeap
  • Bitv -> BitVec -> BitVector
  • BitvSet -> BitVecSet -> BitVectorSet
  • DList -> DoublyLinkedList
  • HashMap
  • HashSet
  • RingBuf -> RingBuffer
  • VecMap -> VectorMap

The abbreviation rules do seem unclear. Sometimes the first word is abbreviated, sometimes the last. However there are also cases where the names are not abbreviated. Bitv, BitvSet and DList seem strange at first glance. Such inconsistencies are undesirable, as Rust should not give the impression of being “the promising language that has strangely inconsistent naming conventions for its standard collections”.

Also, it should be noted that traditionally ring buffers have fixed sizes, but Rust’s RingBuf does not. So it is preferable to rename it to something clearer, in order to avoid incorrect assumptions and surprises.

Detailed design

First some general naming rules should be established.

  1. At least maintain module level consistency when abbreviations are concerned.
  2. Prefer commonly used abbreviations.
  3. When in doubt, prefer full names to abbreviated ones.
  4. Don’t be dogmatic.

And the new names:

  • Vec
  • BTreeMap
  • BTreeSet
  • BinaryHeap
  • Bitv -> BitVec
  • BitvSet -> BitSet
  • DList -> LinkedList
  • HashMap
  • HashSet
  • RingBuf -> VecDeque
  • VecMap

The following changes should be made:

  • Rename Bitv, BitvSet, DList and RingBuf. Change affected code accordingly.
  • If necessary, redefine the original names as aliases of the new names, and mark them as deprecated. After a transition period, remove the original names completely.

Why prefer full names when in doubt?

The naming rules should apply not only to standard collections, but also to other code. It is (comparatively) easier to maintain a higher level of naming consistency by preferring full names to abbreviated ones when in doubt, because given a full name, there are possibly many abbreviated forms to choose from. Which one should be chosen and why? It is hard to write down guidelines for that.

For example, the name BinaryBuffer has at least three convincing abbreviated forms: BinBuffer/BinaryBuf/BinBuf. Which one would be the most preferred? Hard to say. But it is clear that the full name BinaryBuffer is not a bad name.

However, if there is a convincing reason, one should not hesitate using abbreviated names. A series of names like BinBuffer/OctBuffer/HexBuffer is very natural. Also, few would think that AtomicallyReferenceCounted, the full name of Arc, is a good type name.

Advantages of the new names:

  • Vec: The name of the most frequently used Rust collection is left unchanged (and by extension VecMap), so the scope of the changes is greatly reduced. Vec is an exception to the “prefer full names” rule because it is the collection in Rust.
  • BitVec: Bitv is a very unusual abbreviation of BitVector, but BitVec is a good one given Vector is shortened to Vec.
  • BitSet: Technically, BitSet is a synonym of BitVec(tor), but it has Set in its name and can be interpreted as a set-like “view” into the underlying bit array/vector, so BitSet is a good name. No need to have an additional v.
  • LinkedList: DList doesn’t say much about what it actually is. LinkedList is not too long (like DoublyLinkedList) and it being a doubly-linked list follows Java/C#’s traditions.
  • VecDeque: This name exposes some implementation details and signifies its “interface” just like HashSet, and it doesn’t have the “fixed-size” connotation that RingBuf has. Also, since Deque is commonly preferred to DoubleEndedQueue, it is clear that the former should be chosen.

Drawbacks

  • There will be breaking changes to standard collections that are already marked stable.

Alternatives

A. Keep the status quo:

And Rust’s standard collections will have some strange names and no consistent naming rules.

B. Also rename Vec to Vector:

And by extension, Bitv to BitVector and VecMap to VectorMap.

This means breaking changes at a larger scale. Given that Vec is the collection of Rust, we can have an exception here.

C. Rename DList to DLinkedList, not LinkedList:

It is clearer, but also inconsistent with the other names by having a single-lettered abbreviation of Doubly. As Java/C# also have doubly-linked LinkedList, it is not necessary to use the additional D.

D. Also rename BinaryHeap to BinHeap.

BinHeap can also mean BinomialHeap, so BinaryHeap is the better name here.

E. Rename RingBuf to RingBuffer, or do not rename RingBuf at all.

Doing so would fail to stop people from making the incorrect assumption that Rust’s RingBufs have fixed sizes.

Unresolved questions

None.

Summary

The Fn traits should be modified to make the return type an associated type.

Motivation

The strongest reason is because it would permit impls like the following (example from @alexcrichton):

impl<R,F> Foo for F : FnMut() -> R { ... }

This impl is currently illegal because the parameter R is not constrained. (This also has an impact on my attempts to add variance, which would require a “phantom data” annotation for R for the same reason; but that RFC is not quite ready yet.)
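With this RFC implemented, a modern version of that impl compiles, because R is constrained through the Output projection (a Debug bound is added here only so the body can print the result):

use std::fmt::Debug;

trait Foo {
    fn call_and_report(&mut self);
}

// `FnMut() -> R` is sugar for `FnMut<(), Output = R>`, so `R` is constrained.
impl<R: Debug, F: FnMut() -> R> Foo for F {
    fn call_and_report(&mut self) {
        let result = self();
        println!("{:?}", result);
    }
}

fn main() {
    let mut f = || 42;
    f.call_and_report(); // prints: 42
}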

Another related reason is that it often permits fewer type parameters. Rather than having a distinct type parameter for the return type, the associated type projection F::Output can be used. Consider the standard library Map type:

struct Map<A,B,I,F>
    where I : Iterator<Item=A>,
          F : FnMut(A) -> B,
{
    ...
}

impl<A,B,I,F> Iterator for Map<A,B,I,F>
    where I : Iterator<Item=A>,
          F : FnMut(A) -> B,
{
    type Item = B;
    ...
}

This type could be equivalently written:

struct Map<I,F>
    where I : Iterator, F : FnMut<(I::Item,)>
{
    ...
}

impl<I,F> Iterator for Map<I,F>,
    where I : Iterator,
          F : FnMut<(I::Item,)>,
{
    type Item = F::Output;
    ...
}

This example highlights one subtle point about the () notation, which is covered below.

Detailed design

The design has been implemented. You can see it in this pull request. The Fn trait is modified to read as follows:

trait Fn<A> {
    type Output;
    fn call(&self, args: A) -> Self::Output;
}

The other traits are modified in an analogous fashion.

Parentheses notation

The shorthand Foo(...) expands to Foo<(...), Output=()>. The shorthand Foo(...) -> B expands to Foo<(...), Output=B>. This implies that if you use the parenthetical notation, you must supply a return type (which could be a new type parameter). If you would prefer to leave the return type unspecified, you must use angle-bracket notation. (Note that using angle-bracket notation with the Fn traits is currently feature-gated, as described here.)

This can be seen in the Map example from the introduction. There the <> notation was used so that F::Output is left unbound:

struct Map<I,F>
    where I : Iterator, F : FnMut<(I::Item,)>

An alternative would be to retain the type parameter B:

struct Map<B,I,F>
    where I : Iterator, F : FnMut(I::Item) -> B

Or to remove the bound on F from the type definition and use it only in the impl:

struct Map<I,F>
    where I : Iterator
{
    ...
}

impl<B,I,F> Iterator for Map<I,F>,
    where I : Iterator,
          F : FnMut(I::Item) -> B
{
    type Item = F::Output;
    ...
}

Note that this final option is not legal without this change, because the type parameter B on the impl would be unconstrained.

Drawbacks

Cannot overload based on return type alone

This change means that you cannot overload indexing to “model” a trait like Default:

trait Default {
    fn default() -> Self;
}

That is, I can’t do something like the following:

struct Defaulty;
impl<T:Default> Fn<()> for Defaulty {
    type Output = T;

    fn call(&self, _args: ()) -> T {
        Default::default()
    }
}

This is not possible because the impl type parameter T is not constrained.

This does not seem like a particularly strong limitation. Overloaded call notation is already less general than full traits in various ways (for example, it lacks the ability to define a closure that always panics; that is, the ! notation is not a type and hence something like FnMut() -> ! is not legal). The ability to overload based on return type is not removed, it is simply not something you can model using overloaded operators.

Alternatives

Special syntax to represent the lack of an Output binding

Rather than having people use angle-brackets to omit the Output binding, we could introduce some special syntax for this purpose. For example, FnMut() -> ? could desugar to FnMut<()> (whereas FnMut() alone desugars to FnMut<(), Output=()>). The first suggestion that is commonly made is FnMut() -> _, but that has an existing meaning in a function context (where _ represents a fresh type variable).

Change meaning of FnMut() to not bind the output

We could make FnMut() desugar to FnMut<()>, and hence require an explicit FnMut() -> () to bind the return type to unit. This feels surprising and inconsistent.

Summary

Make CString dereference to a token type CStr, which designates null-terminated string data.

// Type-checked to only accept C strings
fn safe_puts(s: &CStr) {
    unsafe { libc::puts(s.as_ptr()) };
}

fn main() {
    let s = CString::from_slice(b"A Rust string");
    safe_puts(&s);
}

Motivation

The type std::ffi::CString is used to prepare string data for passing as null-terminated strings to FFI functions. This type dereferences to a DST, [libc::c_char]. The slice type as it is, however, is a poor choice for representing borrowed C string data, since:

  1. A slice does not express the C string invariant at compile time. Safe interfaces wrapping FFI functions cannot take slice references as is without dynamic checks (when null-terminated slices are expected) or building a temporary CString internally (in this case plain Rust slices must be passed with no interior NULs).
  2. An allocated CString buffer is not the only desired source for borrowed C string data. Specifically, it should be possible to interpret a raw pointer, unsafely and at zero overhead, as a reference to a null-terminated string, so that the reference can then be used safely. However, in order to construct a slice (or a dynamically sized newtype wrapping a slice), its length has to be determined, which is unnecessary for the consuming FFI function that will only receive a thin pointer. Other likely data sources are string and byte string literals: provided that a static string is null-terminated, there should be a way to pass it to FFI functions without an intermediate allocation in CString (see the sketch just below).
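
Today's stabilized std::ffi::CStr provides exactly this kind of allocation-free, checked view for literal data; a minimal sketch:

use std::ffi::CStr;

fn main() {
    // A null-terminated byte string literal becomes a &CStr via a
    // checked conversion that performs no allocation.
    let s: &CStr = CStr::from_bytes_with_nul(b"A Rust string\0")
        .expect("missing or interior NUL");
    println!("{:?}", s);
}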

As a pattern of owned/borrowed type pairs has been established throughout other modules (see e.g. path reform), it makes sense that CString gets its own borrowed counterpart.

Detailed design

This proposal introduces CStr, a type to designate a null-terminated string. This type does not implement Sized, Copy, or Clone. References to CStr are only safely obtained by dereferencing CString and through a few other helper methods, described below. A CStr value should provide no size information, as there is intent to turn CStr into an unsized type, pending resolution of that proposal.

Stage 1: CStr, a DST with a weight problem

As current Rust does not have unsized types that are not DSTs, at this stage CStr is defined as a newtype over a character slice:

#[repr(C)]
pub struct CStr {
    chars: [libc::c_char]
}

impl CStr {
    pub fn as_ptr(&self) -> *const libc::c_char {
        self.chars.as_ptr()
    }
}

CString is changed to dereference to CStr:

impl Deref for CString {
    type Target = CStr;
    fn deref(&self) -> &CStr { ... }
}

In the implementation, the CStr value needs a length for its internal slice. This RFC provides no guarantee that this length will be equal to the length of the string, or be any particular value suitable for safe use.

Stage 2: unsized CStr

If unsized types are enabled later in one way or another, the definition of CStr would change to an unsized type with statically sized contents. The authors of this RFC believe this would constitute no breakage to code using CStr safely. With a view towards this future change, it is recommended to avoid any unsafe code depending on the internal representation of CStr.

Returning C strings

In cases when an FFI function returns a pointer to a non-owned C string, it might be preferable to wrap the returned string safely as a ‘thin’ &CStr rather than scan it into a slice up front. To facilitate this, conversion from a raw pointer should be added (with an inferred lifetime as per the established convention):

impl CStr {
    pub unsafe fn from_ptr<'a>(ptr: *const libc::c_char) -> &'a CStr {
        ...
    }
}

For getting a slice out of a CStr reference, a method to_bytes is provided. The name is preferred over as_bytes to reflect the linear cost of calculating the length.

impl CStr {
    pub fn to_bytes(&self) -> &[u8] { ... }
    pub fn to_bytes_with_nul(&self) -> &[u8] { ... }
}

An odd consequence is that it is valid, if wasteful, to call to_bytes on a CString via auto-dereferencing.
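
As a usage sketch with today's std::ffi::CStr (c_version is a hypothetical extern function), the reference stays “thin” until the bytes are actually requested:

use std::ffi::CStr;
use std::os::raw::c_char;

extern "C" {
    // Hypothetical FFI function returning a borrowed, null-terminated
    // string owned by the C library.
    fn c_version() -> *const c_char;
}

fn version() -> String {
    unsafe {
        // Wrap the raw pointer without an up-front length scan...
        let cstr = CStr::from_ptr(c_version());
        // ...and pay the O(n) scan only here, when the bytes are needed.
        String::from_utf8_lossy(cstr.to_bytes()).into_owned()
    }
}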

Remove c_str_to_bytes

The functions c_str_to_bytes and c_str_to_bytes_with_nul, with their problematic lifetime semantics, are deprecated and eventually removed in favor of composition of the functions described above: c_str_to_bytes(&ptr) becomes CStr::from_ptr(ptr).to_bytes().

Proof of concept

The described interface changes are implemented in crate c_string.

Drawbacks

The change of the deref target type is another breaking change to CString. In practice the main purpose of borrowing from CString is to obtain a raw pointer with .as_ptr(); for code which only does this and does not expose the slice in type annotations, parameter signatures and so on, the change should not be breaking since CStr also provides this method.

Making the deref target unsized throws away the length information intrinsic to CString and makes it less useful as a container for bytes. This is countered by the fact that there are general purpose byte containers in the core libraries, whereas CString addresses the specific need to convey string data from Rust to C-style APIs.

Alternatives

If the proposed enhancements or other equivalent facilities are not adopted, users of Rust can turn to third-party libraries for better convenience and safety when working with C strings. This may result in proliferation of incompatible helper types in public APIs until a dominant de-facto solution is established.

Unresolved questions

Need a Cow?

Summary

Make Self a keyword.

Motivation

Right now, Self is just a regular identifier that happens to get a special meaning inside trait definitions and impls. Specifically, users are not forbidden from defining a type called Self, which can lead to weird situations:

struct Self;

struct Foo;

impl Foo {
    fn foo(&self, _: Self) {}
}

This piece of code defines types called Self and Foo, and a method foo() that because of the special meaning of Self has the signature fn(&Foo, Foo).

So in this case it is not possible to define a method on Foo that takes the user-defined type named Self without renaming that type or creating an alias for it.

It would also be highly unidiomatic to actually name the type Self for a custom type, precisely because of this ambiguity, so preventing it outright seems like the right thing to do.

Making the identifier Self a keyword would prevent this situation, because the user could not use it freely for custom definitions.

Detailed design

Make the identifier Self a keyword that is only legal to use inside a trait definition or impl to refer to the Self type.

Drawbacks

It might be unnecessary churn because people already don’t run into this in practice.

Alternatives

Keep the status quo. It isn’t a problem in practice, and it just means that Self remains a special case: a contextually defined type in the language.

Unresolved questions

None so far

Summary

Add a default lifetime bound for object types, so that it is no longer necessary to write things like Box<Trait+'static> or &'a (Trait+'a). The default will be based on the context in which the object type appears. Typically, object types that appear underneath a reference take the lifetime of the innermost reference under which they appear, and otherwise the default is 'static. However, user-defined types with T:'a annotations override the default.

Examples:

  • &'a &'b SomeTrait becomes &'a &'b (SomeTrait+'b)
  • &'a Box<SomeTrait> becomes &'a Box<SomeTrait+'a>
  • Box<SomeTrait> becomes Box<SomeTrait+'static>
  • Rc<SomeTrait> becomes Rc<SomeTrait+'static>
  • std::cell::Ref<'a, SomeTrait> becomes std::cell::Ref<'a, SomeTrait+'a>

Cases where the lifetime bound is either given explicitly or can be inferred from the traits involved are naturally unaffected.

Motivation

Current situation

As described in RFC 34, object types carry a single lifetime bound. Sometimes, this bound can be inferred based on the traits involved. Frequently, however, it cannot, and in that case the lifetime bound must be given explicitly. Some examples of situations where an error would be reported are as follows:

struct SomeStruct {
    object: Box<Writer>, // <-- ERROR No lifetime bound can be inferred.
}

struct AnotherStruct<'a> {
    callback: &'a Fn(),  // <-- ERROR No lifetime bound can be inferred.
}

Errors of this sort are a common source of confusion for new users (partly due to a poor error message). To avoid errors, those examples would have to be written as follows:

struct SomeStruct {
    object: Box<Writer+'static>,
}

struct AnotherStruct<'a> {
    callback: &'a (Fn()+'a),
}

Ever since it was introduced, there has been a desire to make this fully explicit notation more compact for common cases. In practice, the object bounds are almost always tightly linked to the context in which the object appears: it is relatively rare, for example, to have a boxed object type that is not bounded by 'static or Send (e.g., Box<Trait+'a>). Similarly, it is unusual to have a reference to an object where the object itself has a distinct bound (e.g., &'a (Trait+'b)). This is not to say these situations never arise; as we’ll see below, both of these do arise in practice, but they are relatively unusual (and in fact there is never a good reason to do &'a (Trait+'b), though there can be a reason to have &'a mut (Trait+'b); see “Detailed Design” for full details).

The need for a shorthand is made somewhat more urgent by RFC 458, which disconnects the Send trait from the 'static bound. This means that object types that are now written Box<Foo+Send> would have to be written Box<Foo+Send+'static>.

Therefore, the following examples would require explicit bounds:

trait Message : Send { }
Box<Message> // ERROR: 'static no longer inferred from `Send` supertrait
Box<Writer+Send> // ERROR: 'static no longer inferred from `Send` bound

The proposed rule

This RFC proposes to use the context in which an object type appears to derive a sensible default. Specifically, the default begins as 'static. Type constructors like & or user-defined structs can alter that default for their type arguments, as follows:

  • The default begins as 'static.
  • &'a X and &'a mut X change the default for object bounds within X to be 'a
  • The defaults for user-defined types like SomeType<X> are driven by the where-clauses defined on SomeType; see the next section for details. The high-level idea is that if the where-clauses on SomeType indicate that X will be borrowed for a lifetime 'a, then the default for objects appearing in X becomes 'a.

The motivation for these rules is basically that objects which are not contained within a reference default to 'static, and otherwise the default is the lifetime of the reference. This is almost always what you want. As evidence, consider the following statistics, which show the frequency of trait references from three Rust projects. The final column shows the percentage of uses that would be correctly predicted by the proposed rule.

As these statistics were gathered using ack and some simple regular expressions, they only cover those cases where an explicit lifetime bound was required today. In function signatures, lifetime bounds can always be omitted, and it is impossible to distinguish &SomeTrait from &SomeStruct using only a regular expression. However, we believe that the proposed rule would be compatible with the existing defaults for function signatures in all or virtually all cases.

The first table shows the results for objects that appear within a Box:

| package | Box<Trait+Send> | Box<Trait+'static> | Box<Trait+'other> | %    |
|---------|-----------------|--------------------|-------------------|------|
| iron    | 6               | 0                  | 0                 | 100% |
| cargo   | 7               | 0                  | 7                 | 50%  |
| rust    | 53              | 28                 | 20                | 80%  |

Here rust refers to both the standard library and rustc. As you can see, cargo (and rust, specifically libsyntax) both have objects that encapsulate borrowed references, leading to types like Box<Trait+'src>. This pattern is not aided by the current defaults (though it is also not made any more explicit than it already is). However, this is the minority.

The next table shows the results for references to objects.

| package | &(Trait+Send) | &'a [mut] (Trait+'a) | &'a mut (Trait+'b) | %    |
|---------|---------------|----------------------|--------------------|------|
| iron    | 0             | 0                    | 0                  | 100% |
| cargo   | 0             | 0                    | 5                  | 0%   |
| rust    | 1             | 9                    | 0                  | 100% |

As before, the defaults would not help cargo remove its existing annotations (though they would not get any worse), but all other cases are resolved. (Also, from casual examination, it appears that cargo could in fact employ the proposed defaults without a problem, though the types would be different than the types as they appear in the source today; this has not been fully verified.)

Detailed design

This section extends the high-level rule above with support for user-defined types, and also describes potential interactions with other parts of the system.

User-defined types. How user-defined types like SomeType<...> affect the default depends on the where-clauses attached to SomeType:

  • If SomeType contains a single where-clause like T:'a, where T is some type parameter on SomeType and 'a is some lifetime, then the type provided as the value of T will have a default object bound of 'a. An example of this is std::cell::Ref: a usage like Ref<'x, X> would change the default for object types appearing in X to be 'x.
  • If SomeType contains no where-clauses of the form T:'a then the default is not changed. An example of this is Box or Rc. Usages like Box<X> would therefore leave the default unchanged for object types appearing in X, which probably means that the default would be 'static (though &'a Box<X> would have a default of 'a).
  • If SomeType contains multiple where-clauses of the form T:'a, then the default is cleared and explicit lifetime bounds are required. There are no known examples of this in the standard library, as this situation arises rarely in practice.

The motivation for these rules is that T:'a annotations are only required when a reference to T with lifetime 'a appears somewhere within the struct body. For example, the type std::cell::Ref is defined:

pub struct Ref<'b, T:'b> {
    value: &'b T,
    borrow: BorrowRef<'b>,
}

Because the field value has type &'b T, the declaration T:'b is required, to indicate that borrowed pointers within T must outlive the lifetime 'b. This RFC uses this same signal to control the defaults on objects types.

It is important that the default is not driven by the actual types of the fields within Ref, but solely by the where-clauses declared on Ref. This is both because it better serves to separate interface and implementation and because trying to examine the types of the fields to determine the default would create a cycle in the case of recursive types.
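
For a concrete illustration in today's Rust, which adopted a rule along these lines, a hypothetical borrowing wrapper Holder picks up the reference lifetime as the object default from its declared bound alone:

trait Draw {
    fn draw(&self);
}

// The declared `T: 'a` bound (not the field types) signals that `T` is
// borrowed for 'a, so object types appearing in `T` default to `+ 'a`.
// (`?Sized` merely allows `T` to be a bare trait object.)
struct Holder<'a, T: ?Sized + 'a> {
    value: &'a T,
}

// `Holder<'x, dyn Draw>` is therefore read as `Holder<'x, dyn Draw + 'x>`.
fn use_holder<'x>(h: Holder<'x, dyn Draw>) {
    h.value.draw();
}

struct Dot;
impl Draw for Dot {
    fn draw(&self) {
        println!(".");
    }
}

fn main() {
    let d = Dot;
    use_holder(Holder { value: &d as &dyn Draw });
}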

Precedence of this rule with respect to other defaults. This rule takes precedence over the existing defaults that are applied in function signatures, as well as those that are intended (but not yet implemented) for impl declarations. Therefore:

fn foo1(obj: &SomeTrait) { }
fn foo2(obj: Box<SomeTrait>) { }

expand under this RFC to:

// Under this RFC:
fn foo1<'a>(obj: &'a (SomeTrait+'a)) { }
fn foo2(obj: Box<SomeTrait+'static>) { }

whereas today those same functions expand to:

// Under existing rules:
fn foo1<'a,'b>(obj: &'a (SomeTrait+'b)) { }
fn foo2(obj: Box<SomeTrait+'static>) { }

The reason for this rule is that we wish to ensure that if one writes a struct declaration, then any types which appear in the struct declaration can be safely copy-and-pasted into a fn signature. For example:

struct Foo {
    x: Box<SomeTrait>, // equiv to `Box<SomeTrait+'static>`
}

fn bar(foo: &mut Foo, x: Box<SomeTrait>) {
    foo.x = x; // (*)
}

The goal is to ensure that the line marked with (*) continues to compile. If we gave the fn signature defaults precedence over the object defaults, the assignment would in this case be illegal, because the expansion of Box<SomeTrait> would be different.

Interaction with object coercion. The rules specify that &'a SomeTrait and &'a mut SomeTrait are expanded to &'a (SomeTrait+'a) and &'a mut (SomeTrait+'a), respectively. Today, in fn signatures, one would get the expansions &'a (SomeTrait+'b) and &'a mut (SomeTrait+'b), respectively. In the case of a shared reference &'a SomeTrait, this difference is basically irrelevant, as the lifetime bound can always be approximated to be shorter when needed.

In the case of a mutable reference &'a mut SomeTrait, however, using two lifetime variables is in principle a more general expansion. The reason has to do with “variance” – specifically, because the proposed expansion places the 'a lifetime qualifier in the referent of a mutable reference, the compiler will be unable to allow 'a to be approximated with a shorter lifetime. You may have experienced this if you have used types like &'a mut &'a mut Foo; the compiler is also forced to be conservative about the lifetime 'a in that scenario.

However, in the specific case of object types, this concern is ameliorated by the existing object coercions. These coercions permit &'a mut (SomeTrait+'a) to be coerced to &'b mut (SomeTrait+'c) where 'a : 'b and 'a : 'c. The reason that this is legal is because unsized types (like object types) cannot be assigned, thus sidestepping the variance concerns. This means that programs like the following compile successfully (though you will find that you get errors if you replace the object type (Counter+'a) with the underlying type &'a mut u32):

#![allow(unused_variables)]
#![allow(dead_code)]

trait Counter {
    fn inc_and_get(&mut self) -> u32;
}

impl<'a> Counter for &'a mut u32 {
    fn inc_and_get(&mut self) -> u32 {
        **self += 1;
        **self
    }
}

fn foo<'a>(x: &'a u32, y: &'a mut (Counter+'a)) {
}

fn bar<'a>(x: &'a mut (Counter+'a)) {
    let value = 2_u32;
    foo(&value, x)
}

fn main() {
}

This may seem surprising, but it’s a reflection of the fact that object types give the user less power than if the user had direct access to the underlying data; the user is confined to accessing the underlying data through a known interface.

Drawbacks

A. Breaking change. This change has the potential to break some existing code, though given the statistics gathered we believe the effect will be minimal (in particular, defaults are only permitted in fn signatures today, so in most existing code explicit lifetime bounds are used).

B. Lifetime errors with defaults can get confusing. Defaults always carry some potential to surprise users, though it’s worth pointing out that the current rules are also a big source of confusion. Further improvements like the current system for suggesting alternative fn signatures would help here, of course (and are an expected subject of investigation regardless).

C. Inferring T:'a annotations becomes inadvisable. It has sometimes been proposed that we should infer the T:'a annotations that are currently required on structs. Adopting this RFC makes that inadvisable because the effect of inferred annotations on defaults would be quite subtle (one could ignore them, which is suboptimal, or one could try to use them, but that makes the defaults that result quite non-obvious, and may also introduce cyclic dependencies in the code that are very difficult to resolve, since inferring the bounds needed without knowing object lifetime bounds would be challenging). However, there are good reasons not to want to infer those bounds in any case. In general, Rust has adopted the principle that type definitions are always fully explicit when it comes to reference lifetimes, even though fn signatures may omit information (e.g., omitted lifetimes, lifetime elision, etc). This principle arose from past experiments where we used extensive inference in types and found that this gave rise to particularly confounding errors, since the errors were based on annotations that were inferred and hence not always obvious.

Alternatives

  1. Leave things as they are with an improved error message. Besides the general dissatisfaction with the current system, a big concern here is that if RFC 458 is accepted (which seems likely), this implies that object types like SomeTrait+Send will now require an explicit region bound. Most of the time, that would be SomeTrait+Send+'static, which is very long indeed. We considered the option of introducing a new trait, let’s call it Own for now, that is basically Send+'static. However, that required (1) finding a reasonable name for Own; (2) seems to lessen one of the benefits of RFC 458, which is that lifetimes and other properties can be considered orthogonally; and (3) does nothing to help with cases like &'a mut FnMut(), which one would still have to write as &'a mut (FnMut()+'a).

  2. Do not drive defaults with the T:'a annotations that appear on structs. An earlier iteration of this RFC omitted the consideration of T:'a annotations from user-defined structs. While this retains the option of inferring T:'a annotations, it means that objects appearing in user-defined types like Ref<'a, Trait> get the wrong default.

Unresolved questions

None.

Summary

Rename the be reserved keyword to become.

Motivation

A keyword needs to be reserved to support guaranteed tail calls in a backward-compatible way. Currently the keyword reserved for this purpose is be, but the become alternative was proposed in the old RFC for guaranteed tail calls, which is now postponed and tracked in PR#271.

Some advantages of the become keyword are:

  • it provides a clearer indication of its meaning (“this function becomes that function”)
  • its syntax results in better code alignment (become is exactly as long as return)

The expected result is that users will be unable to use become as an identifier, ensuring that it will be available for future language extensions.

This RFC is not about implementing tail call elimination, only about whether the be keyword should be replaced with become.

Detailed design

Rename the be reserved word to become. This is a very simple find-and-replace.

Drawbacks

Some code might be using become as an identifier.

Alternatives

The main alternative is to do nothing, i.e. to keep the be keyword reserved for supporting guaranteed tail calls in a backward-compatible way. Using become as the keyword for tail calls would not be backward-compatible because it would introduce a new keyword, which might have been used in valid code.

Another option is to add the become keyword, without removing be. This would have the same drawbacks as the current proposal (might break existing code), but it would also guarantee that the become keyword is available in the future.

Unresolved questions

None.

Summary

Add a new intrinsic, discriminant_value, that extracts the value of the discriminant for enum types.

Motivation

Many operations that work with enums can be significantly improved by the ability to extract the value of the discriminant that is used to distinguish between variants. While trivial cases often optimise well, more complex ones would benefit from direct access to this value.

A good example is the SqlState enum from the postgres crate (Listed at the end of this RFC). It contains 233 variants, of which all but one contain no fields. The most obvious implementation of (for example) the PartialEq trait looks like this:

match (self, other) {
    (&Unknown(ref s1), &Unknown(ref s2)) => s1 == s2,
    (&SuccessfulCompletion, &SuccessfulCompletion) => true,
    (&Warning, &Warning) => true,
    (&DynamicResultSetsReturned, &DynamicResultSetsReturned) => true,
    (&ImplicitZeroBitPadding, &ImplicitZeroBitPadding) => true,
    ...
    (_, _) => false
}

Even with optimisations enabled, this code is very suboptimal, producing this code. A way to extract the discriminant would allow this code:

match (self, other) {
    (&Unknown(ref s1), &Unknown(ref s2)) => s1 == s2,
    (l, r) => unsafe {
        discriminant_value(l) == discriminant_value(r)
    }
}

Which is compiled into this IR.

Detailed design

What is a discriminant?

A discriminant is a value stored in an enum type that indicates which variant the value is. The most common case is that the discriminant is stored directly as an extra field in the variant. However, the discriminant may be stored in any place and in any format; regardless of representation, we can always extract it from the value somehow.
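
As a mental model only (the compiler is free to choose other layouts, such as packing the tag into spare payload bits), a fielded enum resembles a tag plus a union:

// Conceptually, `enum E { A(u32), B(f32) }` resembles:
#[repr(C)]
struct TaggedE {
    tag: u8,           // the discriminant: records which variant is live
    payload: PayloadE, // storage shared by all variants
}

#[repr(C)]
union PayloadE {
    a: u32,
    b: f32,
}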

Implementation

For any given type, discriminant_value will return a u64 value. The values returned are as specified:

  • Non-Enum Type: Always 0

  • C-Like Enum Type: If no variants have fields, then the enum is considered “C-Like”. The user is able to specify discriminant values in this case, and the return value would be equivalent to the result of casting the variant to a u64.

  • ADT Enum Type: If any variant has a field, then the enum is considered to be an “ADT” enum. The user is not able to specify the discriminant value in this case. The precise values are unspecified, but have the following characteristics:

    • The value returned for the same variant of the same enum type will compare as equal, i.e. discriminant_value(v) == discriminant_value(v).
    • The values returned for different variants of the same enum type will compare consistently with their listed positions: if variant A is listed before variant B, then discriminant_value(A) < discriminant_value(B).

Note the returned values for two differently-typed variants may compare in any way.
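
For reference, the stabilized descendant of this intrinsic is std::mem::discriminant, which wraps the value in an opaque, comparable type; the motivating PartialEq can be written safely with it today (a sketch over a truncated SqlState):

use std::mem;

enum SqlState {
    SuccessfulCompletion,
    Warning,
    Unknown(String),
    // ... the remaining variants elided ...
}

impl PartialEq for SqlState {
    fn eq(&self, other: &Self) -> bool {
        match (self, other) {
            (SqlState::Unknown(s1), SqlState::Unknown(s2)) => s1 == s2,
            // One cheap comparison covers every fieldless variant pair.
            (l, r) => mem::discriminant(l) == mem::discriminant(r),
        }
    }
}

fn main() {
    assert!(SqlState::Warning == SqlState::Warning);
    assert!(SqlState::Warning != SqlState::SuccessfulCompletion);
}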

Drawbacks

  • Potentially exposes implementation details. However, relying on the specific values returned from discriminant_value should be considered bad practice, as the intrinsic provides no such guarantee.

  • Allows non-enum types to be provided. This may be unexpected by some users.

Alternatives

  • More strongly specify the values returned. This would allow for a broader range of uses, but requires committing to behaviour that we may not want to guarantee.

  • Disallow non-enum types. Non-enum types do not have a discriminant, so trying to extract one might be considered an error. However, there is no compelling reason to disallow these types, as we can simply treat them as single-variant enums and synthesise a zero constant. Note that this is what would be done for single-variant enums anyway.

  • Do nothing. Improvements to codegen and/or optimisation could make this unnecessary. The “Sufficiently Smart Compiler” trap is a strong case against this reasoning though. There will likely always be cases where the user can write more efficient code than the compiler can produce.

Unresolved questions

  • Should #[derive] use this intrinsic to improve derived implementations of traits? While intrinsics are inherently unstable, #[derive]d code is compiler generated and therefore can be updated if the intrinsic is changed or removed.

Appendix

pub enum SqlState {
    SuccessfulCompletion,
    Warning,
    DynamicResultSetsReturned,
    ImplicitZeroBitPadding,
    NullValueEliminatedInSetFunction,
    PrivilegeNotGranted,
    PrivilegeNotRevoked,
    StringDataRightTruncationWarning,
    DeprecatedFeature,
    NoData,
    NoAdditionalDynamicResultSetsReturned,
    SqlStatementNotYetComplete,
    ConnectionException,
    ConnectionDoesNotExist,
    ConnectionFailure,
    SqlclientUnableToEstablishSqlconnection,
    SqlserverRejectedEstablishmentOfSqlconnection,
    TransactionResolutionUnknown,
    ProtocolViolation,
    TriggeredActionException,
    FeatureNotSupported,
    InvalidTransactionInitiation,
    LocatorException,
    InvalidLocatorException,
    InvalidGrantor,
    InvalidGrantOperation,
    InvalidRoleSpecification,
    DiagnosticsException,
    StackedDiagnosticsAccessedWithoutActiveHandler,
    CaseNotFound,
    CardinalityViolation,
    DataException,
    ArraySubscriptError,
    CharacterNotInRepertoire,
    DatetimeFieldOverflow,
    DivisionByZero,
    ErrorInAssignment,
    EscapeCharacterConflict,
    IndicatorOverflow,
    IntervalFieldOverflow,
    InvalidArgumentForLogarithm,
    InvalidArgumentForNtileFunction,
    InvalidArgumentForNthValueFunction,
    InvalidArgumentForPowerFunction,
    InvalidArgumentForWidthBucketFunction,
    InvalidCharacterValueForCast,
    InvalidDatetimeFormat,
    InvalidEscapeCharacter,
    InvalidEscapeOctet,
    InvalidEscapeSequence,
    NonstandardUseOfEscapeCharacter,
    InvalidIndicatorParameterValue,
    InvalidParameterValue,
    InvalidRegularExpression,
    InvalidRowCountInLimitClause,
    InvalidRowCountInResultOffsetClause,
    InvalidTimeZoneDisplacementValue,
    InvalidUseOfEscapeCharacter,
    MostSpecificTypeMismatch,
    NullValueNotAllowedData,
    NullValueNoIndicatorParameter,
    NumericValueOutOfRange,
    StringDataLengthMismatch,
    StringDataRightTruncationException,
    SubstringError,
    TrimError,
    UnterminatedCString,
    ZeroLengthCharacterString,
    FloatingPointException,
    InvalidTextRepresentation,
    InvalidBinaryRepresentation,
    BadCopyFileFormat,
    UntranslatableCharacter,
    NotAnXmlDocument,
    InvalidXmlDocument,
    InvalidXmlContent,
    InvalidXmlComment,
    InvalidXmlProcessingInstruction,
    IntegrityConstraintViolation,
    RestrictViolation,
    NotNullViolation,
    ForeignKeyViolation,
    UniqueViolation,
    CheckViolation,
    ExclusionViolation,
    InvalidCursorState,
    InvalidTransactionState,
    ActiveSqlTransaction,
    BranchTransactionAlreadyActive,
    HeldCursorRequiresSameIsolationLevel,
    InappropriateAccessModeForBranchTransaction,
    InappropriateIsolationLevelForBranchTransaction,
    NoActiveSqlTransactionForBranchTransaction,
    ReadOnlySqlTransaction,
    SchemaAndDataStatementMixingNotSupported,
    NoActiveSqlTransaction,
    InFailedSqlTransaction,
    InvalidSqlStatementName,
    TriggeredDataChangeViolation,
    InvalidAuthorizationSpecification,
    InvalidPassword,
    DependentPrivilegeDescriptorsStillExist,
    DependentObjectsStillExist,
    InvalidTransactionTermination,
    SqlRoutineException,
    FunctionExecutedNoReturnStatement,
    ModifyingSqlDataNotPermittedSqlRoutine,
    ProhibitedSqlStatementAttemptedSqlRoutine,
    ReadingSqlDataNotPermittedSqlRoutine,
    InvalidCursorName,
    ExternalRoutineException,
    ContainingSqlNotPermitted,
    ModifyingSqlDataNotPermittedExternalRoutine,
    ProhibitedSqlStatementAttemptedExternalRoutine,
    ReadingSqlDataNotPermittedExternalRoutine,
    ExternalRoutineInvocationException,
    InvalidSqlstateReturned,
    NullValueNotAllowedExternalRoutine,
    TriggerProtocolViolated,
    SrfProtocolViolated,
    SavepointException,
    InvalidSavepointException,
    InvalidCatalogName,
    InvalidSchemaName,
    TransactionRollback,
    TransactionIntegrityConstraintViolation,
    SerializationFailure,
    StatementCompletionUnknown,
    DeadlockDetected,
    SyntaxErrorOrAccessRuleViolation,
    SyntaxError,
    InsufficientPrivilege,
    CannotCoerce,
    GroupingError,
    WindowingError,
    InvalidRecursion,
    InvalidForeignKey,
    InvalidName,
    NameTooLong,
    ReservedName,
    DatatypeMismatch,
    IndeterminateDatatype,
    CollationMismatch,
    IndeterminateCollation,
    WrongObjectType,
    UndefinedColumn,
    UndefinedFunction,
    UndefinedTable,
    UndefinedParameter,
    UndefinedObject,
    DuplicateColumn,
    DuplicateCursor,
    DuplicateDatabase,
    DuplicateFunction,
    DuplicatePreparedStatement,
    DuplicateSchema,
    DuplicateTable,
    DuplicateAliaas,
    DuplicateObject,
    AmbiguousColumn,
    AmbiguousFunction,
    AmbiguousParameter,
    AmbiguousAlias,
    InvalidColumnReference,
    InvalidColumnDefinition,
    InvalidCursorDefinition,
    InvalidDatabaseDefinition,
    InvalidFunctionDefinition,
    InvalidPreparedStatementDefinition,
    InvalidSchemaDefinition,
    InvalidTableDefinition,
    InvalidObjectDefinition,
    WithCheckOptionViolation,
    InsufficientResources,
    DiskFull,
    OutOfMemory,
    TooManyConnections,
    ConfigurationLimitExceeded,
    ProgramLimitExceeded,
    StatementTooComplex,
    TooManyColumns,
    TooManyArguments,
    ObjectNotInPrerequisiteState,
    ObjectInUse,
    CantChangeRuntimeParam,
    LockNotAvailable,
    OperatorIntervention,
    QueryCanceled,
    AdminShutdown,
    CrashShutdown,
    CannotConnectNow,
    DatabaseDropped,
    SystemError,
    IoError,
    UndefinedFile,
    DuplicateFile,
    ConfigFileError,
    LockFileExists,
    FdwError,
    FdwColumnNameNotFound,
    FdwDynamicParameterValueNeeded,
    FdwFunctionSequenceError,
    FdwInconsistentDescriptorInformation,
    FdwInvalidAttributeValue,
    FdwInvalidColumnName,
    FdwInvalidColumnNumber,
    FdwInvalidDataType,
    FdwInvalidDataTypeDescriptors,
    FdwInvalidDescriptorFieldIdentifier,
    FdwInvalidHandle,
    FdwInvalidOptionIndex,
    FdwInvalidOptionName,
    FdwInvalidStringLengthOrBufferLength,
    FdwInvalidStringFormat,
    FdwInvalidUseOfNullPointer,
    FdwTooManyHandles,
    FdwOutOfMemory,
    FdwNoSchemas,
    FdwOptionNameNotFound,
    FdwReplyHandle,
    FdwSchemaNotFound,
    FdwTableNotFound,
    FdwUnableToCreateExecution,
    FdwUnableToCreateReply,
    FdwUnableToEstablishConnection,
    PlpgsqlError,
    RaiseException,
    NoDataFound,
    TooManyRows,
    InternalError,
    DataCorrupted,
    IndexCorrupted,
    Unknown(String),
}

History

This RFC was accepted on a provisional basis on 2015-10-04. The intention is to implement and experiment with the proposed intrinsic. Some concerns expressed in the RFC discussion that will require resolution before the RFC can be fully accepted:

  • Using bounds such as T:Reflect to help ensure parametricity.
  • Do we want to change the return type in some way?
    • It may not be helpful if we expose discriminant directly in the case of (potentially) negative discriminants.
    • We might want to return something more opaque to guard against unintended representation exposure.
  • Does this intrinsic need to be unsafe?

Summary

The Debug trait is intended to be implemented by every type and display useful runtime information to help with debugging. This RFC proposes two additions to the fmt API, one of which aids implementors of Debug, and one which aids consumers of the output of Debug. Specifically, the # format specifier modifier will cause Debug output to be “pretty printed”, and some utility builder types will be added to the std::fmt module to make it easier to implement Debug manually.

Motivation

Pretty printing

The conventions for Debug format state that output should resemble Rust struct syntax, without added line breaks. This can make output difficult to read in the presence of complex and deeply nested structures:

HashMap { "foo": ComplexType { thing: Some(BufferedReader { reader: FileStream { path: "/home/sfackler/rust/README.md", mode: R }, buffer: 1013/65536 }), other_thing: 100 }, "bar": ComplexType { thing: Some(BufferedReader { reader: FileStream { path: "/tmp/foobar", mode: R }, buffer: 0/65536 }), other_thing: 0 } }

This can be made more readable by adding appropriate indentation:

HashMap {
    "foo": ComplexType {
        thing: Some(
            BufferedReader {
                reader: FileStream {
                    path: "/home/sfackler/rust/README.md",
                    mode: R
                },
                buffer: 1013/65536
            }
        ),
        other_thing: 100
    },
    "bar": ComplexType {
        thing: Some(
            BufferedReader {
                reader: FileStream {
                    path: "/tmp/foobar",
                    mode: R
                },
                buffer: 0/65536
            }
        ),
        other_thing: 0
    }
}

However, we wouldn’t want this “pretty printed” version to be used by default, since it’s significantly more verbose.

Helper types

For many Rust types, a Debug implementation can be automatically generated by #[derive(Debug)]. However, many encapsulated types cannot use the derived implementation. For example, the types in std::io::buffered all have manual Debug impls. They all maintain a byte buffer that is both extremely large (64k by default) and full of uninitialized memory. Printing it in the Debug impl would be a terrible idea. Instead, the implementation prints the size of the buffer as well as how much data is in it at the moment: https://github.com/rust-lang/rust/blob/0aec4db1c09574da2f30e3844de6d252d79d4939/src/libstd/io/buffered.rs#L48-L60

pub struct BufferedStream<S> {
    inner: BufferedReader<InternalBufferedWriter<S>>
}

impl<S> fmt::Debug for BufferedStream<S> where S: fmt::Debug {
    fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
        let reader = &self.inner;
        let writer = &self.inner.inner.0;
        write!(fmt, "BufferedStream {{ stream: {:?}, write_buffer: {}/{}, read_buffer: {}/{} }}",
               writer.inner,
               writer.pos, writer.buf.len(),
               reader.cap - reader.pos, reader.buf.len())
    }
}

A purely manual implementation is tedious to write and error prone. These difficulties become even more pronounced with the introduction of the “pretty printed” format described above. If Debug is too painful to manually implement, developers of libraries will create poor implementations or omit them entirely. Some simple structures to automatically create the correct output format can significantly help ease these implementations:

impl<S> fmt::Debug for BufferedStream<S> where S: fmt::Debug {
    fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
        let reader = &self.inner;
        let writer = &self.inner.inner.0;
        fmt.debug_struct("BufferedStream")
            .field("stream", writer.inner)
            .field("write_buffer", &format_args!("{}/{}", writer.pos, writer.buf.len()))
            .field("read_buffer", &format_args!("{}/{}", reader.cap - reader.pos, reader.buf.len()))
            .finish()
    }
}

Detailed design

Pretty printing

The # modifier (e.g. {:#?}) will be interpreted by Debug implementations as a request for “pretty printed” output:

  • Non-compound output is unchanged from normal Debug output: e.g. 10, "hi", None.
  • Array, set and map output is printed with one element per line, indented four spaces, and entries printed with the # modifier as well: e.g.
[
    "a",
    "b",
    "c"
]
HashSet {
    "a",
    "b",
    "c"
}
HashMap {
    "a": 1,
    "b": 2,
    "c": 3
}
  • Struct and tuple struct output is printed with one field per line, indented four spaces, and fields printed with the # modifier as well: e.g.
Foo {
    field1: "hi",
    field2: 10,
    field3: false
}
Foo(
    "hi",
    10,
    false
)

In all cases, pretty printed and non-pretty printed output should differ only in the addition of newlines and whitespace.

Helper types

Types will be added to std::fmt corresponding to each of the common Debug output formats. They will provide a builder-like API to create correctly formatted output, respecting the # flag as needed. A full implementation can be found at https://gist.github.com/sfackler/6d6610c5d9e271146d11. (Note that there’s a lot of almost-but-not-quite duplicated code in the various impls. It can probably be cleaned up a bit). For convenience, methods will be added to Formatter which create them. An example of use of the debug_struct method is shown in the Motivation section. In addition, the padded method returns a type implementing fmt::Writer that pads input passed to it. This is used inside of the other builders, but is provided here for use by Debug implementations that require formats not provided with the other helpers.

impl Formatter {
    pub fn debug_struct<'a>(&'a mut self, name: &str) -> DebugStruct<'a> { ... }
    pub fn debug_tuple<'a>(&'a mut self, name: &str) -> DebugTuple<'a> { ... }
    pub fn debug_set<'a>(&'a mut self, name: &str) -> DebugSet<'a> { ... }
    pub fn debug_map<'a>(&'a mut self, name: &str) -> DebugMap<'a> { ... }

    pub fn padded<'a>(&'a mut self) -> PaddedWriter<'a> { ... }
}
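
As a usage sketch (debug_struct, field, and finish exist in today's stable std essentially as proposed here; padded did not ship in this form), a manual impl stays short and the pretty-printed form falls out of the builder for free:

use std::fmt;

struct Buffered {
    len: usize,
    cap: usize,
}

impl fmt::Debug for Buffered {
    fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt.debug_struct("Buffered")
            // format_args! renders a computed field without quoting it.
            .field("buffer", &format_args!("{}/{}", self.len, self.cap))
            .finish()
    }
}

fn main() {
    let b = Buffered { len: 1013, cap: 65536 };
    println!("{:?}", b);  // Buffered { buffer: 1013/65536 }
    println!("{:#?}", b); // same fields, one per line, indented four spaces
}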

Drawbacks

The use of the # modifier adds complexity to Debug implementations.

The builder types are adding extra #[stable] surface area to the standard library that will have to be maintained.

Alternatives

We could take the helper structs alone without the pretty printing format. They’re still useful even if a library author doesn’t have to worry about the second format.

Unresolved questions

The indentation level is currently hardcoded to 4 spaces. We could allow that to be configured as well by using the width or precision specifiers, for example, {:2#?} would pretty print with a 2-space indent. It’s not totally clear to me that this provides enough value to justify the extra complexity.

Summary

Add the syntax .. for std::ops::RangeFull.

Motivation

Range expressions a..b, a.. and ..b all have dedicated syntax and produce first-class values. This means that they will be usable and useful in custom APIs, so for consistency, the fourth slicing range, RangeFull, could have its own syntax: ..

Detailed design

.. will produce a std::ops::RangeFull value when it is used in an expression. This means that slicing the whole range of a sliceable container is written &foo[..].

We should remove the old &foo[] syntax for consistency. Because of this breaking change, it would be best to change this before Rust 1.0.

As previously stated, when we have range expressions in the language, they become convenient to use when stating ranges in an API.

@Gankro fielded ideas where methods like for example .remove(index) -> element on a collection could be generalized by accepting either indices or ranges. Today’s .drain() could be expressed as .remove(..).

Matrix or multidimensional array APIs can use the range expressions for indexing and/or generalized slicing and .. represents selecting a full axis in a multidimensional slice, i.e. (1..3, ..) slices the first axis and preserves the second.

Because of deref coercions, the very common conversions of String or Vec to slices don’t need to use slicing syntax at all, so the change in verbosity from [] to [..] is not a concern.
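
A sketch of the generalized remove mentioned above, written against today's std (RangeFull implements std::ops::RangeBounds, so a bare .. selects everything):

use std::ops::RangeBounds;

// A generalized `remove`: one method accepts indices-as-ranges, and
// `..` (RangeFull) selects everything, so `remove(..)` acts like `drain()`.
fn remove_range<T, R: RangeBounds<usize>>(v: &mut Vec<T>, range: R) -> Vec<T> {
    v.drain(range).collect()
}

fn main() {
    let mut v = vec![1, 2, 3, 4];
    let tail = remove_range(&mut v, 2..); // removes [3, 4]
    let rest = remove_range(&mut v, ..);  // `..` removes the remainder
    assert_eq!((tail, rest, v), (vec![3, 4], vec![1, 2], vec![]));
}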

Drawbacks

  • Removing the slicing syntax &foo[] is a breaking change.

  • .. already appears in patterns, as in this example: if let Some(..) = foo { }. This is not a conflict per se, but the same syntax element is used in two different ways in Rust.

Alternatives

  • We could add this syntax later, but we would end up with duplicate slicing functionality using &foo[] and &foo[..].

  • 0.. could replace .. in many use cases (but not for ranges in ordered maps).

Unresolved questions

Any parsing questions should already be mostly solved because of the a.. and ..b cases.

Summary

Allow inherent implementations on types outside of the module they are defined in, effectively reverting RFC PR 155.

Motivation

The main motivation for disallowing such impl bodies was the implementation detail of fake modules being created to allow resolving Type::method, which only worked correctly for impl Type {...} if a struct Type or enum Type were defined in the same module. The old mechanism was obsoleted by UFCS, which desugars Type::method to <Type>::method and performs a type-based method lookup instead, with path resolution having no knowledge of inherent impls - and all of that was implemented by rust-lang/rust#22172.

Aside from invalidating the previous RFC’s motivation, there is something to be said about dealing with restricted inherent impls: it leads to non-DRY, single-use extension traits, the worst offender being AstBuilder in libsyntax, with almost 300 lines of redundant method definitions.

Detailed design

Remove the existing limitation, and only require that the Self type of the impl is defined in the same crate. This allows moving methods to other modules:

struct Player;

mod achievements {
    struct Achievement;
    impl Player {
        fn achieve(&mut self, _: Achievement) {}
    }
}

Drawbacks

Consistency, and the ease of finding method definitions by looking at the module where the type is defined, have been mentioned as advantages of this limitation. However, trait impls already have that problem, and single-use extension traits could arguably be worse.

Alternatives

  • Leave it as it is. Seems unsatisfactory given that we’re no longer limited by implementation details.

  • We could go further and allow adding inherent methods to any type that could implement a trait outside the crate:

    struct Point<T> { x: T, y: T }
    impl<T: Float> (Vec<Point<T>>, T) {
        fn foo(&mut self) -> T { ... }
    }

    The implementation would reuse the same coherence rules as for trait impls, and, for looking up methods, the “type definition to impl” map would be replaced with a map from method name to a set of impls containing that method.

    Technically, I am not aware of any formulation that limits inherent methods to user-defined types in the same crate, and this extra support could turn out to have a straightforward implementation with no complications. However, I am trying to present the whole situation to avoid issues in the future, even though I am not aware of any backwards-compatibility problems or any related to compiler internals.

Unresolved questions

None.

Summary

Change Functional Record Update (FRU) for struct literal expressions to respect struct privacy.

Motivation

Functional Record Update is the name for the idiom by which one can write ..<expr> at the end of a struct literal expression to fill in all remaining fields of the struct literal by using <expr> as the source for them.

mod foo {
    pub struct Bar { pub a: u8, pub b: String, _cannot_construct: () }

    pub fn new_bar(a: u8, b: String) -> Bar {
        Bar { a: a, b: b, _cannot_construct: () }
    }
}

fn main() {
    let bar_1 = foo::new_bar(3, format!("bar one"));

    let bar_2a = foo::Bar { b: format!("bar two"), ..bar_1 }; // FRU!

    println!("bar_1: {} bar_2a: {}", bar_1.b, bar_2a.b);

    let bar_2b = foo::Bar { a: 17, ..bar_2a };                // FRU again!

    println!("bar_1: {} bar_2b: {}", bar_1.b, bar_2b.b);
}

Currently, Functional Record Update will freely move or copy all fields not explicitly mentioned in the struct literal expression, so the code above runs successfully.

In particular, consider a case like this:

#![allow(unstable)]
extern crate alloc;
use self::foo::Secrets;
mod foo {
    use alloc;
    #[allow(raw_pointer_derive)]
    #[derive(Debug)]
    pub struct Secrets { pub a: u8, pub b: String, ptr: *mut u8 }

    pub fn make_secrets(a: u8, b: String) -> Secrets {
        let ptr = unsafe { alloc::heap::allocate(10, 1) };
        Secrets { a: a, b: b, ptr: ptr }
    }

    impl Drop for Secrets {
        fn drop(&mut self) {
            println!("because of {}, deallocating {:p}", self.b, self.ptr);
            unsafe { alloc::heap::deallocate(self.ptr, 10, 1); }
        }
    }
}

fn main() {
    let s_1 = foo::make_secrets(3, format!("ess one"));
    let s_2 = foo::Secrets { b: format!("ess two"), ..s_1 }; // FRU ...

    println!("s_1.b: {} s_2.b: {}", s_1.b, s_2.b);
    // at end of scope, ... both s_1 *and* s_2 get dropped.  Boom!
}

This example prints the following (if one’s memory allocator is not checking for double-frees):

s_1.b: ess one s_2.b: ess two
because of ess two, deallocating 0x7f00c182e000
because of ess one, deallocating 0x7f00c182e000

In particular, from reading the module foo, it appears that one is attempting to preserve an invariant that each instance of Secrets has its own unique ptr value; but this invariant is broken by the use of FRU.

Note that there is essentially no way around this abstraction violation today, as shown for example in Issue 21407, where the backing storage for a Vec is duplicated in a second Vec by use of the trivial FRU expression { ..t } where t: Vec<T>.

Again, this is due to the current rule that Functional Record Update will freely move or copy all fields not explicitly mentioned in the struct literal expression, regardless of whether they are visible (in terms of privacy) in the spot in code.

This RFC proposes to change that rule, and say that a struct literal expression using FRU is effectively expanded into a complete struct literal with initializers for all fields (i.e., a struct literal that does not use FRU), and that this expanded struct literal is subject to privacy restrictions.

The main motivation for this is to plug this abstraction-violating hole with as little other change to the rules, implementation, and character of the Rust language as possible.

Detailed design

As already stated above, the change proposed here is that a struct literal expression using FRU is effectively expanded into a complete struct literal with initializers for all fields (i.e., a struct literal that does not use FRU), and that this expanded struct literal is subject to privacy restrictions.

(Another way to think of this change is: one can only use FRU with a struct if one has visibility of all of its declared fields. If any fields are hidden by privacy, then all forms of struct literal syntax are unavailable, including FRU.)


This way, the Secrets example above will be essentially equivalent to

#![allow(unstable)]
extern crate alloc;
use self::foo::Secrets;
mod foo {
    use alloc;
    #[allow(raw_pointer_derive)]
    #[derive(Debug)]
    pub struct Secrets { pub a: u8, pub b: String, ptr: *mut u8 }

    pub fn make_secrets(a: u8, b: String) -> Secrets {
        let ptr = unsafe { alloc::heap::allocate(10, 1) };
        Secrets { a: a, b: b, ptr: ptr }
    }

    impl Drop for Secrets {
        fn drop(&mut self) {
            println!("because of {}, deallocating {:p}", self.b, self.ptr);
            unsafe { alloc::heap::deallocate(self.ptr, 10, 1); }
        }
    }
}

fn main() {
    let s_1 = foo::make_secrets(3, format!("ess one"));
    // let s_2 = foo::Secrets { b: format!("ess two"), ..s_1 };
    // is rewritten to:
    let s_2 = foo::Secrets { b: format!("ess two"),
                             /* remainder from FRU */
                             a: s_1.a, ptr: s_1.ptr };

    println!("s_1.b: {} s_2.b: {}", s_1.b, s_2.b);
}

which is rejected, as the field ptr of foo::Secrets is private and cannot be accessed from fn main (both in terms of reading it from s_1, and in terms of using it to build a new instance of foo::Secrets).


(While the change to the language is described above in terms of rewriting the code, the implementation need not go that route. In particular, this commit shows a different strategy that is isolated to the librustc_privacy crate.)


The proposed change is applied only to struct literal expressions. In particular, enum struct variants are left unchanged, since all of their fields are already implicitly public.

Drawbacks

There is a use case for allowing private fields to be moved/copied via FRU, which I call the “future extensibility” library design pattern: it is a convenient way for a library author to tell clients to make updated copies of a record in a manner that is oblivious to the addition of new private fields to the struct (at least, new private fields that implement Copy…).

For example, in Rust today without the change proposed here, in the first example above using Bar, the author of the mod foo can change Bar like so:

    pub struct Bar { pub a: u8, pub b: String, _hidden: u8 }

    pub fn new_bar(a: u8, b: String) -> Bar {
        Bar { a: a, b: b, _hidden: 17 }
    }

And all of the code from the fn main in the first example will continue to run.

Also, when the struct is moved (rather than copied) by the FRU expression, the same pattern applies and works even when the new private fields do not implement Copy.

However, there is a small coding pattern that enables such continued future-extensibility for library authors: divide the struct into the entirely pub frontend, with one member that is the pub backend with entirely private contents, like so:

mod foo {
    pub struct Bar { pub a: u8, pub b: String, pub _hidden: BarHidden }
    pub struct BarHidden { _cannot_construct: () }
    fn new_hidden() -> BarHidden {
        BarHidden { _cannot_construct: () }
    }

    pub fn new_bar(a: u8, b: String) -> Bar {
        Bar { a: a, b: b, _hidden: new_hidden() }
    }
}

fn main() {
    let bar_1 = foo::new_bar(3, format!("bar one"));

    let bar_2a = foo::Bar { b: format!("bar two"), ..bar_1 }; // FRU!

    println!("bar_1: {} bar_2a: {}", bar_1.b, bar_2a.b);

    let bar_2b = foo::Bar { a: 17, ..bar_2a };                // FRU again!

    println!("bar_1: {} bar_2b: {}", bar_1.b, bar_2b.b);
}

All hidden changes that one would have formerly made to Bar itself are now made to BarHidden. The struct Bar is entirely public (including the supposedly-hidden field named _hidden), and thus can legally be used with FRU in all client contexts that can see the type Bar, even under the new rules proposed by this RFC.

Alternatives

Most Important: If we do not do something about this, then both stdlib types like Vec and user-defined types will fundamentally be unable to enforce abstraction. In other words, the Rust language will be broken.


glaebhoerl and pnkfelix outlined a series of potential alternatives, including this one. Here is an attempt to transcribe/summarize them:

  1. Change the FRU form Bar { x: new_x, y: new_y, ..old_b } so it somehow is treated as consuming old_b, rather than moving/copying each of the remaining fields in old_b.

    It is not totally clear what the semantics actually are for this form. Also, there may not be time to do this properly for 1.0.

  2. Try to adopt a data/abstract-type distinction along the lines of the one in glaebhoerl’s draft RFC.

    As a special subnote on this alternative: while glaebhoerl's draft RFC proposed syntactic forms for indicating the data/abstract-type distinction, we could also (or instead) make the distinction based solely on the presence of a single non-`pub` field, as pointed out by glaebhoerl in the comments. (Another potential criterion could be "has *all* private fields"; see the related discussion below in the item "Outlaw the trivial FRU form Foo { .. }".)
  3. Let FRU keep its current privacy-violating semantics, but also make FRU something one must opt in to support on a type. E.g. make a builtin FunUpdate trait that a struct must implement in order to be usable with FRU. (Or maybe it's an attribute one attaches to the struct item.)

    This approach would impose a burden on all code today that makes use of FRU, since it would have to start implementing FunUpdate; that is not a simple migration for libraries or the overall ecosystem.

  4. Adopt this RFC, but add a builtin HygienicFunUpdate trait that one can opt into to get the old (privacy-violating) semantics.

    While this is obviously complicated, it has the advantage of a staged landing strategy: We could just adopt and implement this RFC for 1.0 beta. We could add HygienicFunUpdate at an arbitrary point in the future; it would not have to be in the 1.0 release.

    (For why the trait is named HygienicFunUpdate, see comment thread on Issue 21407.)

  5. Add a way for a struct item to opt out of FRU support entirely, e.g. via an attribute.

    This seems pretty fragile; i.e., easy to forget.

  6. Outlaw the trivial FRU form Foo { .. }. That is, to use FRU, you have to supply at least one field in the constructing expression. Again, this implies that types like Vec and HashMap will not be subject to the vulnerability outlined here.

    This solves the vulnerability for types like Vec and HashMap, but the Secrets example from the Motivation section still breaks; the author for the mod foo library will need to write their code more carefully to ensure that secret things are contained in a separate struct with all private fields, much like the BarHidden code pattern discussed above.

Unresolved questions

How important is the “future extensibility” library design pattern described in the Drawbacks section? How many Cargo packages, if any, use it?

Summary

  • Use inference to determine the variance of input type parameters.
  • Make it an error to have unconstrained type/lifetime parameters.
  • Revamp the variance markers to make them more intuitive and less numerous. In fact, there are only two: PhantomData and PhantomFn.
  • Integrate the notion of PhantomData into other automated compiler analyses, notably OIBIT, that can otherwise be deceived into yielding incorrect results.

Motivation

Why variance is good

Today, all type parameters are invariant. This can be problematic around lifetimes. A particularly common example of where problems arise is in the use of Option. Consider this program, which has a struct containing two references:

struct List<'l> {
    field1: &'l int,
    field2: &'l int,
}

fn foo(field1: &int, field2: &int) {
    let list = List { field1: field1, field2: field2 };
    ...
}

fn main() { }

Here the function foo takes two references with distinct lifetimes. The variable list winds up being instantiated with a lifetime that is the intersection of the two (presumably, the body of foo). This is good.

If we modify this program so that one of those references is optional, however, we will find that it gets a compilation error:

struct List<'l> {
    field1: &'l int,
    field2: Option<&'l int>,
}

fn foo(field1: &int, field2: Option<&int>) {
    let list = List { field1: field1, field2: field2 };
        // ERROR: Cannot infer an appropriate lifetime
    ...
}

fn main() { }

The reason is that Option is invariant with respect to its argument type, which means that the lifetimes of field1 and field2 must match exactly. It is not good enough for them to have a common subset. This is not good.
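
For comparison, here is the same example in current syntax with the two input lifetimes made explicit. Once variance is inferred as this RFC proposes (and as Rust now does), Option becomes covariant in its lifetime and the program compiles:

struct List<'l> {
    field1: &'l i32,
    field2: Option<&'l i32>,
}

fn foo<'a, 'b>(field1: &'a i32, field2: Option<&'b i32>) {
    // With covariance, &'a i32 and Option<&'b i32> can each be
    // shrunk to a common intersection lifetime for `list`.
    let list = List { field1: field1, field2: field2 };
    let _ = list;
}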

What variance is

Variance is a general concept that comes up in all languages that combine subtyping and generic types. However, because in Rust all subtyping is related to the use of lifetime parameters, Rust uses variance in a very particular way. Basically, variance is a determination of when it is ok for lifetimes to be approximated (either made bigger or smaller, depending on context).

Let me give a few examples to try and clarify how variance works. Consider this simple struct Context:

struct Context<'data> {
    data: &'data u32,
    ...
}

Here the Context struct has one lifetime parameter, 'data, that represents the lifetime of some data that it references. Now let’s imagine that the lifetime of the data is some lifetime we call 'x. If we have a context cx of type Context<'x>, it is ok to (for example) pass cx as an argument where a value of type Context<'y> is required, so long as 'x : 'y (“'x outlives 'y”). That is, it is ok to approximate 'x as a shorter lifetime like 'y. This makes sense because by changing 'x to 'y, we’re just pretending the data has a shorter lifetime than it actually has, which can’t do any harm. Here is an example:

fn approx_context<'long,'short>(t: &Context<'long>, data: &'short Data)
    where 'long : 'short
{
    // here we approximate 'long as 'short, but that's perfectly safe.
    let u: &Context<'short> = t;
    do_something(u, data)
}

fn do_something<'x>(t: &Context<'x>, data: &'x Data) {
   ...
}

This case has been traditionally called “contravariant” by Rust, though some argue (somewhat persuasively) that “covariant” is the better terminology. In any case, this RFC generally abandons the “variance” terminology in publicly exposed APIs and bits of the language, making this a moot point (in this RFC, however, I will stick to calling lifetimes which may be made smaller “contravariant”, since that is what we have used in the past).

Next let’s consider a struct with interior mutability:

struct Table<'arg> {
    cell: Cell<&'arg Foo>
}

In the case of Table, it is not safe for the compiler to approximate the lifetime 'arg at all. This is because 'arg appears in a mutable location (the interior of a Cell). Let me show you what could happen if we did allow 'arg to be approximated:

fn innocent<'long>(t: &Table<'long>) {
    {
        let foo: Foo = ..;
        evil(t, &foo);
    }
    t.cell.get() // reads `foo`, which has been destroyed
}

fn evil<'long,'short>(t: &Table<'long>, s: &'short Foo)
    where 'long : 'short
{
    // The following assignment is not legal, but it would be legal
    let u: &Table<'short> = t;
    u.cell.set(s);
}

Here the function evil() changes the contents of t.cell to point at data with a shorter lifetime than t originally had. This is bad because the caller still has the old type (Table<'long>) and doesn’t know that data with a shorter lifetime has been inserted. (This is traditionally called “invariant”.)

Finally, there can be cases where it is ok to make a lifetime longer, but not shorter. This comes up (for example) in a type like fn(&'a u8), which may be safely treated as a fn(&'static u8).
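
A small sketch of that last case in current syntax (the helper names here are invented):

fn print_byte(b: &u8) {
    println!("{}", b);
}

fn call_with_static(f: fn(&'static u8)) {
    static BYTE: u8 = 42;
    f(&BYTE);
}

fn main() {
    // `print_byte` accepts a reference of any lifetime, so it may be
    // used where a fn over &'static u8 is expected: the argument
    // lifetime is only ever lengthened, never shortened.
    call_with_static(print_byte);
}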

Why variance should be inferred

Actually, lifetime parameters already have a notion of variance, and this variance is fully inferred. In fact, the proper variance for type parameters is also being inferred; we’re just largely ignoring it. (It’s not completely ignored; it informs the variance of lifetimes.)

The main reason we chose inference over declarations is that variance is rather tricky business. Most of the time, it’s annoying to have to think about it, since it’s a purely mechanical thing. The main reason that it pops up from time to time in Rust today (specifically, in examples like the one above) is because we ignore the results of inference and just make everything invariant.

But in fact there is another reason to prefer inference. When manually specifying variance, it is easy to get those manual specifications wrong. There is one example later on where the author did this, but using the mechanisms described in this RFC to guide the inference actually led to the correct solution.

The corner case: unused parameters and parameters that are only used unsafely

Unfortunately, variance inference only works if type parameters are actually used. Otherwise, there is no data to go on. You might think parameters would always be used, but this is not true. In particular, some types have “phantom” type or lifetime parameters that are not used in the body of the type. This generally occurs with unsafe code:

struct Items<'vec, T> { // unused lifetime parameter 'vec
    x: *mut T
}

struct AtomicPtr<T> { // unused type parameter T
    data: AtomicUint  // represents an atomically mutable *mut T, really
}

Since these parameters are unused, the inference can reasonably conclude that AtomicPtr<int> and AtomicPtr<uint> are interchangeable: after all, there are no fields of type T, so what difference does it make what value it has? This is not good (and in fact we have behavior like this today for lifetimes, which is a common source of error).

To avoid this hazard, the RFC proposes to make it an error to have a type or lifetime parameter whose variance is not constrained. Almost always, the correct thing to do in such a case is to either remove the parameter in question or insert a marker type. Marker types basically inform the inference engine to pretend as if the type parameter were used in particular ways. They are discussed in the next section.

Revamping the marker types

The UnsafeCell type

As today, the UnsafeCell<T> type is well-known to rustc and is always considered invariant with respect to its type parameter T.

Phantom data

This RFC proposes to replace the existing marker types (CovariantType, ContravariantLifetime, etc) with a single type, PhantomData:

// Represents data of type `T` that is logically present, although the
// type system cannot see it. This type is covariant with respect to `T`.
struct PhantomData<T>;

An instance of PhantomData is used to represent data that is logically present, although the type system cannot see it. PhantomData is covariant with respect to its type parameter T. Here are some examples of uses of PhantomData from the standard library:

struct AtomicPtr<T> {
    data: AtomicUint,

    // Act as if we could reach a `*mut T` for variance. This will
    // make `AtomicPtr` *invariant* with respect to `T` (because `T` appears
    // underneath the `mut` qualifier).
    marker: PhantomData<*mut T>,
}

pub struct Items<'a, T: 'a> {
    ptr: *const T,
    end: *const T,

    // Act as if we could reach a slice `[T]` with lifetime `'a`.
    // Induces covariance on `T` and suitable variance on `'a`
    // (covariance using the definition from rfcs#391).
    marker: marker::PhantomData<&'a [T]>,
}

Note that PhantomData can be used to induce covariance, invariance, or contravariance as desired:

PhantomData<T>         // covariance
PhantomData<*mut T>    // invariance, but see "unresolved question"
PhantomData<Cell<T>>   // invariance
PhantomData<fn(T)>     // contravariance

Even better, the user doesn’t really have to understand the terms covariance, invariance, or contravariance, but simply has to accurately model the kind of data that the type system should pretend is present.

Other uses for phantom data. It turns out that phantom data is an important concept for other compiler analyses. One example is the OIBIT analysis, which decides whether certain traits (like Send and Sync) are implemented by recursively examining the fields of structs and enums. OIBIT should treat phantom data the same as normal fields. Another example is the ongoing work for removing the #[unsafe_dtor] annotation, which also sometimes requires a recursive analysis of a similar nature.

Phantom functions

One limitation of the marker type PhantomData is that it cannot be used to constrain unused parameters appearing on traits. Consider the following example:

trait Dummy<T> { /* T is never used here! */ }

Normally, the variance of a trait type parameter would be determined based on where it appears in the trait’s methods: but in this case there are no methods. Therefore, we introduce two special traits that can be used to induce variance. Similarly to PhantomData, these traits represent parts of the interface that are logically present, if not actually present:

// Act as if there were a method `fn foo(A) -> R`. Induces contravariance on A
// and covariance on R.
trait PhantomFn<A,R> { }

These traits should appear in the supertrait list. For example, the Dummy trait might be modified as follows:

trait Dummy<T> : PhantomFn() -> T { }

As you can see, the () notation can be used with PhantomFn as well.

Designating marker traits

In addition to phantom fns, there is a convenient trait MarkerTrait that is intended for use as a supertrait for traits that designate sets of types. These traits often have no methods and thus no actual uses of Self. The builtin bounds are a good example:

trait Copy : MarkerTrait { }
trait Sized : MarkerTrait { }
unsafe trait Send : MarkerTrait { }
unsafe trait Sync : MarkerTrait { }

MarkerTrait is not builtin to the language or specially understood by the compiler; it simply encapsulates a common pattern. It is implemented as follows:

trait MarkerTrait for Sized? : PhantomFn(Self) -> bool { }
impl<Sized? T> MarkerTrait for T { }

Intuitively, MarkerTrait extends PhantomFn(Self) because it is “as if” the traits were defined like:

trait Copy {
    fn is_copyable(&self) -> bool { true }
}

Here, the type parameter Self appears in argument position, which is contravariant.

Why contravariance? To see why contravariance is correct, you have to consider what it means for Self to be contravariant for a marker trait. It means that if I have evidence that T : Copy, then I can use that as evidence to show that U : Copy if U <: T. More formally:

(T : Copy) <: (U : Copy)   // I can use `T:Copy` where `U:Copy` is expected...
U <: T                     // ...so long as `U <: T`

More intuitively, it means that if a type T implements the marker, then all of its subtypes must implement the marker.

Because subtyping is exclusively tied to lifetimes in Rust, and most marker traits are orthogonal to lifetimes, it actually rarely makes a difference what choice you make here. But imagine that we have a marker trait that requires 'static (such as Send today, though this may change). If we made marker traits covariant with respect to Self, then &'static Foo : Send could be used as evidence that &'x Foo : Send for any 'x, because &'static Foo <: &'x Foo:

(&'static Foo : Send) <: (&'x Foo : Send) // if things were covariant...
&'static Foo <: &'x Foo                   // ...we'd have the wrong relation here

Interesting side story: for some time, the author thought that covariance would be correct. It was only when attempting to phrase the desired behavior as a fn that I realized I had it backward, and quickly found the counterexample I give above. This gives me confidence that expressing variance in terms of data and fns is more reliable than trying to divine the correct results directly.

Detailed design

Most of the detailed design has already been covered in the motivation section.

Summary of changes required

  • Use variance results to inform subtyping of nominal types (structs, enums).
  • Use variance for the output type parameters on traits.
  • Input type parameters of traits are considered invariant.
  • Variance has no effect on the type parameters on an impl or fn; rather those are freshly instantiated at each use.
  • Report an error if the inference does not find any use of a type or lifetime parameter and that parameter is not bound in an associated type binding in some where clause.

These changes have largely been implemented. You can view the results, and the impact on the standard library, in this branch on nikomatsakis’s repository. Note though that as of the time of this writing, the code is slightly outdated with respect to this RFC in certain respects (which will clearly be rectified ASAP).

Variance inference algorithm

I won’t dive too deeply into the inference algorithm that we are using here. It is based on Section 4 of the paper “Taming the Wildcards: Combining Definition- and Use-Site Variance” published in PLDI’11 and written by Altidor et al. There is a fairly detailed (and hopefully only slightly outdated) description in the code as well.

Bivariance yields an error

One big change from today is that if we compute a result of bivariance as the variance for any type or lifetime parameter, we will report a hard error. The error message explicitly suggests the use of a PhantomData or PhantomFn marker as appropriate:

type parameter `T` is never used; either remove it, or use a
marker such as `std::kinds::marker::PhantomData`

The goal is to help users as concretely as possible. The documentation on the phantom markers should also be helpful in guiding users to make the right choice (the ability to easily attach documentation to the marker type was in fact the major factor that led us to adopt marker types in the first place).

Rules for associated types

The only exception is when this type parameter is in fact an output that is implied by where clauses declared on the type. As an example of why this distinction is important, consider the type Map declared here:

struct Map<A,B,I,F>
where I : Iterator<Item=A>, F : FnMut(A) -> B
{
    iter: I,
    func: F,
}

Neither the type A nor the type B is reachable from the fields declared within Map, and hence the variance inference for them results in bivariance. However, they are nonetheless constrained. In the case of the parameter A, its value is determined by the type I, and B is determined by the type F (note that RFC 587 makes the return type of FnMut an associated type).

The analysis to decide when a type parameter is implied by other type parameters is the same as that specified in RFC 447.

Future possibilities

Make phantom data and fns more first-class. One thing I would consider in the future is to integrate phantom data and fns more deeply into the language to improve usability. The idea would be to add a phantom keyword and then permit the explicit declaration of phantom fields and fns in structs and traits respectively:

// Instead of
struct Foo<T> {
    pointer: *mut u8,
    _marker: PhantomData<T>
}
trait MarkerTrait : PhantomFn(Self) {
}

// you would write:
struct Foo<T> {
    pointer: *mut u8,
    phantom T
}
trait MarkerTrait {
    phantom fn(Self);
}

Phantom fields would not need to be specified when creating an instance of a type and (being anonymous) could never be named. They exist solely to aid the analysis. This would improve the usability of phantom markers greatly.

Alternatives

Default to a particular variance when a type or lifetime parameter is unused. A prior RFC advocated for this approach, mostly because markers were seen as annoying to use. However, after some discussion, it seems that it is more prudent to make a smaller change and retain explicit declarations. Some factors that influenced this decision:

  • The importance of phantom data for other analyses like OIBIT.
  • Many unused lifetime parameters (and some unused type parameters) are in fact completely unnecessary. Defaulting to a particular variance would not help in identifying these cases (though a better dead code lint might).
  • There is no default that is always correct except invariance, and invariance is typically too strong.
  • Phantom type parameters occur relatively rarely anyhow.

Remove variance inference and use fully explicit declarations. Variance inference is a rare case where we do non-local inference across type declarations. It might seem more consistent to use explicit declarations. However, variance declarations are notoriously hard for people to understand. We were unable to come up with a suitable set of keywords or other system that felt sufficiently lightweight. Moreover, explicit annotations are error-prone when compared to the phantom data and fn approach (see example in the section regarding marker traits).

Unresolved questions

There is one significant unresolved question: the correct way to handle a *mut pointer. It was revealed recently that while the current treatment of *mut T is correct, it frequently yields overly conservative inference results in practice. At present the inference treats *mut T as invariant with respect to T: this is correct and sound, because a *mut represents aliasable, mutable data, and indeed the subtyping relation for *mut T is that *mut T <: *mut U if T=U.

However, in practice, *mut pointers are often used to build safe abstractions, the APIs of which do not in fact permit aliased mutation. Examples are Vec, Rc, HashMap, and so forth. In all of these cases, the correct variance is covariance – but because of the conservative treatment of *mut, all of these types are being inferred to an invariant result.

The complete solution to this seems to have two parts. First, for convenience and abstraction, we should not be building safe abstractions on raw *mut pointers anyway. We should instead have several convenient newtypes in the standard library, like ptr::Unique, which would also help for handling OIBIT conditions and NonZero optimizations. In my branch I have used the existing (but unstable) type ptr::Unique for the primary role, which is kind of an “unsafe box”. Unique should ensure that it is covariant with respect to its argument.

However, this raises the question of how to implement Unique under the hood, and what to do with *mut T in general. There are various options:

  1. Change *mut so that it behaves like *const. This unfortunately means that abstractions that introduce shared mutability have a responsibility to add phantom data to that effect, something like PhantomData<*const Cell<T>>. This seems non-obvious and unnatural.

  2. Rewrite safe abstractions to use *const (or even usize) instead of *mut, casting to *mut only when inside a &mut self method (see the sketch below). This is probably the most conservative option.

  3. Change variance to ignore *mut referents entirely. Add a lint to detect types with a *mut T type and require some sort of explicit marker that covers T. This is perhaps the most explicit option. Like option 1, it creates the odd scenario that the variance computation and subtyping relation diverge.

Currently I lean towards option 2.
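
As a rough illustration of option 2, a Unique-like wrapper (the MyUnique type below is hypothetical, not the real ptr::Unique) could store *const internally and only cast back to *mut inside &mut self methods, so that variance inference sees only the covariant *const:

use std::marker::PhantomData;

// Hypothetical sketch: *const keeps the inferred variance covariant
// over T, while PhantomData<T> records that a T is logically owned.
struct MyUnique<T> {
    ptr: *const T,
    _owns: PhantomData<T>,
}

impl<T> MyUnique<T> {
    unsafe fn new(ptr: *mut T) -> MyUnique<T> {
        MyUnique { ptr: ptr as *const T, _owns: PhantomData }
    }

    // Cast back to *mut only where mutation is actually intended.
    fn as_mut_ptr(&mut self) -> *mut T {
        self.ptr as *mut T
    }
}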

History

2015.09.18 – This RFC was partially superseded by RFC 1238, which removed the parametricity-based reasoning in favor of an attribute.

Summary

Remove #[unsafe_destructor] from the Rust language. Make it safe for developers to implement Drop on type- and lifetime-parameterized structs and enums (i.e. “Generic Drop”) by imposing new rules on code where such types occur, to ensure that the drop implementation cannot possibly read or write data via a reference of type &'a Data where 'a could have possibly expired before the drop code runs.

Note: This RFC is describing a feature that has been long in the making; in particular it was previously sketched in Rust Issue #8861 “New Destructor Semantics” (the source of the tongue-in-cheek “Start Date” given above), and has a prototype implementation that is being prepared to land. The purpose of this RFC is two-fold:

  1. standalone documentation of the (admittedly conservative) rules imposed by the new destructor semantics, and

  2. elicit community feedback on the rules, both in the form they will take for 1.0 (which is relatively constrained) and the form they might take in the future (which allows for hypothetical language extensions).

Motivation

Part of Rust’s design is rich use of Resource Acquisition Is Initialization (RAII) patterns, which requires destructors: code attached to certain types that runs only when a value of the type goes out of scope or is otherwise deallocated. In Rust, the Drop trait is used for this purpose.

Currently (as of Rust 1.0 alpha), a developer cannot implement Drop on a type- or lifetime-parametric type (e.g. struct Sneetch<'a> or enum Zax<T>) without attaching the #[unsafe_destructor] attribute to it. The reason this attribute is required is that the current implementation allows for such destructors to inject unsoundness accidentally (e.g. reads from or writes to deallocated memory, accessing data when its representation invariants are no longer valid).

Furthermore, while some destructors can be implemented with no danger of unsoundness regardless of T (assuming that any Drop implementation attached to T is itself sound), as soon as one wants to interact with borrowed data within the fn drop code (e.g. access a field &'a StarOffMachine from a value of type Sneetch<'a>), there is currently no way to enforce a rule that 'a strictly outlive the value itself. This is a huge gap in the language as it stands: as soon as a developer attaches #[unsafe_destructor] to such a type, they are imposing a subtle and unchecked restriction on clients of that type: that they will never allow the borrowed data to expire before the value itself.

Lifetime parameterization: the Sneetch example

If today Sylvester writes:

// opt-in to the unsoundness!
#![feature(unsafe_destructor)]

pub mod mcbean {
    use std::cell::Cell;

    pub struct StarOffMachine {
        usable: bool,
        dollars: Cell<u64>,
    }

    impl Drop for StarOffMachine {
        fn drop(&mut self) {
            let contents = self.dollars.get();
            println!("Dropping a machine; sending {} dollars to Sylvester.",
                     contents);
            self.dollars.set(0);
            self.usable = false;
        }
    }

    impl StarOffMachine {
        pub fn new() -> StarOffMachine {
            StarOffMachine { usable: true, dollars: Cell::new(0) }
        }
        pub fn remove_star(&self, s: &mut Sneetch) {
            assert!(self.usable,
                    "No different than a read of a dangling pointer.");
            self.dollars.set(self.dollars.get() + 10);
            s.has_star = false;
        }
    }

    pub struct Sneetch<'a> {
        name: &'static str,
        has_star: bool,
        machine: Cell<Option<&'a StarOffMachine>>,
    }

    impl<'a> Sneetch<'a> {
        pub fn new(name: &'static str) -> Sneetch<'a> {
            Sneetch {
                name: name,
                has_star: true,
                machine: Cell::new(None)
            }
        }

        pub fn find_machine(&self, m: &'a StarOffMachine) {
            self.machine.set(Some(m));
        }
    }

    #[unsafe_destructor]
    impl<'a> Drop for Sneetch<'a> {
        fn drop(&mut self) {
            if let Some(m) = self.machine.get() {
                println!("{} says ``before I die, I want to join my \
                          plain-bellied brethren.''", self.name);
                m.remove_star(self);
            }
        }
    }
}

fn unwary_client() {
    use mcbean::{Sneetch, StarOffMachine};
    let (s1, m, s2, s3); // (accommodate PR 21657)
    s1 = Sneetch::new("Sneetch One");
    m = StarOffMachine::new();
    s2 = Sneetch::new("Sneetch Two");
    s3 = Sneetch::new("Sneetch Zee");

    s1.find_machine(&m);
    s2.find_machine(&m);
    s3.find_machine(&m);
}

fn main() {
    unwary_client();
}

This compiles today; if you run it, it prints the following:

Sneetch Zee says ``before I die, I want to join my plain-bellied brethren.''
Sneetch Two says ``before I die, I want to join my plain-bellied brethren.''
Dropping a machine; sending 20 dollars to Sylvester.
Sneetch One says ``before I die, I want to join my plain-bellied brethren.''
thread '<main>' panicked at 'No different than a read of a dangling pointer.', <anon>:27

Explanation: In Sylvester’s code, the Drop implementation for Sneetch invokes a method on the borrowed reference in the field machine. This implies there is an implicit restriction on a value s of type Sneetch<'a>: the lifetime 'a must strictly outlive s.

(The example encodes this constraint in a dynamically-checked manner via an explicit usable boolean flag that is only set to false in the machine’s own destructor; it is important to keep in mind that this is just a method to illustrate the violation in a semi-reliable manner: Using a machine after usable is set to false by its fn drop code is analogous to dereferencing a *mut T that has been deallocated, or similar soundness violations.)

Sylvester’s API does not encode the constraint “'a must strictly outlive the Sneetch<'a>” explicitly; Rust currently has no way of expressing the constraint that one lifetime be strictly greater than another lifetime or type (the form 'a:'b only formally says that 'a must live at least as long as 'b).

Thus, client code like that in unwary_client can inadvertently set up scenarios where Sylvester’s code may break, and Sylvester might be completely unaware of the vulnerability.

Type parameterization: the problem of trait bounds

One might think that all instances of this problem can be identified by the use of a lifetime-parametric Drop implementation, such as impl<'a> Drop for Sneetch<'a> { ... }

However, consider this trait and struct:

trait Button { fn push(&self); }
struct Zook<B: Button> { button: B, }
#[unsafe_destructor]
impl<B: Button> Drop for Zook<B> {
    fn drop(&mut self) { self.button.push(); }
}

In this case, it is not obvious that there is anything wrong here.

But if we continue the example:

struct Bomb { usable: bool }
impl Drop for Bomb { fn drop(&mut self) { self.usable = false; } }
impl Bomb { fn activate(&self) { assert!(self.usable) } }

enum B<'a> { HarmlessButton, BigRedButton(&'a Bomb) }
impl<'a> Button for B<'a> {
    fn push(&self) {
        if let B::BigRedButton(borrowed) = *self {
            borrowed.activate();
        }
    }
}

fn main() {
    let (mut zook, ticking);
    zook = Zook { button: B::HarmlessButton };
    ticking = Bomb { usable: true };
    zook.button = B::BigRedButton(&ticking);
}

Within the zook there is a hidden reference to borrowed data, ticking, that is assigned the same lifetime as zook but that will be dropped before zook is.

(These examples may seem contrived; see Appendix A for a far less contrived example, that also illustrates how the use of borrowed data can lie hidden behind type parameters.)

The proposal

This RFC proposes to fix this scenario by having the compiler ensure that types with destructors are only employed in contexts where, for any borrowed data with lifetime 'a within the type, either that borrowed data strictly outlives the value of that type, or it is provably not accessible from any Drop implementation via a reference of type &'a/&'a mut. This is the “Drop-Check” (aka dropck) rule.

Detailed design

The Drop-Check Rule

The Motivation section alluded to the compiler enforcing a new rule. Here is a more formal statement of that rule:

Let v be some value (either temporary or named) and 'a be some lifetime (scope); if the type of v owns data of type D, where (1.) D has a lifetime- or type-parametric Drop implementation, and (2.) the structure of D can reach a reference of type &'a _, and (3.) either:

  • (A.) the Drop impl for D instantiates D at 'a directly, i.e. D<'a>, or,

  • (B.) the Drop impl for D has some type parameter with a trait bound T where T is a trait that has at least one method,

then 'a must strictly outlive the scope of v.

(Note: This rule is using two phrases that deserve further elaboration and that are discussed further in sections that follow: “the type owns data of type D” and “must strictly outlive”.)

(Note: When encountering a D of the form Box<Trait+'b>, we conservatively assume that such a type has a Drop implementation parametric in 'b.)

This rule allows much sound existing code to compile without complaint from rustc. This is largely due to the fact that many Drop implementations enjoy near-complete parametricity: They tend to not impose any bounds at all on their type parameters, and thus the rule does not apply to them.
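
For instance, a destructor like the following sketch places no bounds at all on T, so under the rule as stated here neither condition (A.) nor (B.) applies to it:

struct Wrapper<T> {
    value: T,
}

// No bounds on T: under this RFC's rule, the destructor is assumed
// unable to reach borrowed data hidden inside T, so no extra
// strictly-outlives constraint is imposed on users of Wrapper.
impl<T> Drop for Wrapper<T> {
    fn drop(&mut self) {
        println!("dropping a Wrapper");
    }
}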

At the same time, this rule catches the cases where a destructor could possibly reference borrowed data via a reference of type &'a _ or &'a mut _. Here is why:

Condition (A.) ensures that a type like Sneetch<'a> from the Sneetch example will only be assigned to an expression s where 'a strictly outlives s.

Condition (B.) catches cases like Zook<B<'a>> from the Zook example, where the destructor’s interaction with borrowed data is hidden behind a method call in the fn drop.

Near-complete parametricity suffices

Non-Copy types

All non-Copy type parameters are (still) assumed to have a destructor. Thus, one would be correct in noting that even a type T with no bounds may still have one hidden method attached; namely, its Drop implementation.

However, the drop implementation for T can only be called when running the destructor for value v if either:

  1. the type of v owns data of type T, or

  2. the destructor of v constructs an instance of T.

In the first case, the Drop-Check rule ensures that T must satisfy either Condition (A.) or (B.). In the second case, the freshly constructed instance of T will only be able to access either borrowed data from v itself (and thus such data will already have a lifetime that strictly outlives v) or data created during the execution of the destructor.

Any instances

All types implementing Any are forced to outlive 'static. So one should not be able to hide borrowed data behind the Any trait, and therefore it is okay for the analysis to treat Any like a black box whose destructor is safe to run (at least with respect to not accessing borrowed data).
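
A small illustration in current syntax (require_any is an invented helper):

use std::any::Any;

fn require_any<T: Any>(_value: T) { }

fn main() {
    let owned = format!("no borrows here");
    require_any(owned);   // ok: String owns its data and satisfies 'static

    // let n = 5;
    // require_any(&n);   // error: `&n` does not satisfy the 'static bound
}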

Strictly outlives

There is a notion of “strictly outlives” within the compiler internals. (This RFC is not adding such a notion to the language itself; expressing “’a strictly outlives ’b” as an API constraint is not a strict necessity at this time.)

The heart of the idea is this: we approximate the notion of “strictly outlives” by the following rule: if a value U needs to strictly outlive another value V with code extent S, we could just say that U needs to live at least as long as the parent scope of S.

There are likely to be sound generalizations of the model given here (and we will likely need to consider such generalizations in order to adopt future extensions like Single-Entry-Multiple-Exit (SEME) regions, but that is out of scope for this RFC).

In terms of its impact on the language, the main change has already landed in the compiler; see Rust PR 21657, which added CodeExtent::Remainder, for more direct details on the implications of that change written in a user-oriented fashion.

One important detail of the strictly-outlives relationship that comes in part from Rust PR 21657: All bindings introduced by a single let statement are modeled as having the same lifetime. In an example like

let a;
let b;
let (c, d);
...

a strictly outlives b, and b strictly outlives both c and d. However, c and d are modeled as having the same lifetime; neither one strictly outlives the other. (Of course, during code execution, one of them will be dropped before the other; the point is that when rustc builds its internal model of the lifetimes of data, it approximates and assigns them both the same lifetime.) This is an important detail, because there are situations where one must assign the same lifetime to two distinct bindings in order to allow them to mutually refer to each other’s data.

For more details on this “strictly outlives” model, see Appendix B.

When does one type own another

The definition of the Drop-Check Rule used the phrase “if the type owns data of type D”.

This criterion is determined by a recursive descent of the structure of an input type E.

  • If E itself has a Drop implementation that satisfies either condition (A.) or (B.) then add, for all relevant 'a, the constraint that 'a must strictly outlive the scope of the value that caused the recursive descent.

  • Otherwise, if we have previously seen E during the descent then skip it (i.e. we assume a type has no destructor of interest until we see evidence saying otherwise). This check prevents infinite-looping when we encounter recursive references to a type, which can arise in e.g. Option<Box<Type>>.

  • Otherwise, if E is a struct (or tuple), for each of the struct’s fields, recurse on the field’s type (i.e., a struct owns its fields).

  • Otherwise, if E is an enum, for each of the enum’s variants, and for each field of each variant, recurse on the field’s type (i.e., an enum owns its fields).

  • Otherwise, if E is of the form & T, &mut T, * T, or fn (T, ...) -> T, then skip this E (i.e., references, native pointers, and bare functions do not own the types they refer to).

  • Otherwise, recurse on any immediate type substructure of E. (i.e., an instantiation of a polymorphic type Poly<T_1, T_2> is assumed to own T_1 and T_2; note that structs and enums do not fall into this category, as they are handled up above; but this does cover cases like Box<Trait<T_1, T_2>+'a>).
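
As a worked illustration, here is how that descent treats a hypothetical type:

// Illustrative only: comments trace the ownership descent.
struct Node<'a, T> {
    data: T,                         // struct field: recurse into T (owned)
    next: Option<Box<Node<'a, T>>>,  // enum and Box substructure are owned:
                                     // recurse, but the repeated Node<'a, T>
                                     // is skipped on a second visit (cycle check)
    prev: Option<&'a Node<'a, T>>,   // behind a reference: not owned, stop here
}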

Phantom Data

The above definition for type-ownership is (believed to be) sound for pure Rust programs that do not use unsafe, but it does not suffice for several important types without some tweaks.

In particular, consider the implementation of Vec<T> as of “Rust 1.0 alpha”:

pub struct Vec<T> {
    ptr: NonZero<*mut T>,
    len: uint,
    cap: uint,
}

According to the above definition, Vec<T> does not own T. This is clearly wrong.

However, generalizing the rule to say that *mut T owns T would be too conservative, since there are cases where one wants to use *mut T to model references to state that is not owned.

Therefore, we need some sort of marker, so that types like Vec<T> can express that values of that type own instances of T. The PhantomData<T> marker proposed by RFC 738 (“Support variance for type parameters”) is a good match for this. This RFC assumes that either RFC 738 will be accepted, or if necessary, this RFC will be amended so that it itself adds the concept of PhantomData<T> to the language. Therefore, as an additional special case to the criteria above for when the type E owns data of type D, we include:

  • If E is PhantomData<T>, then recurse on T.
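
With that extra rule, a Vec-like type can signal that it owns its elements along these lines (a sketch using a hypothetical MyVec; the real std definition differs):

use std::marker::PhantomData;

// The PhantomData<T> field tells the ownership analysis (and hence the
// Drop-Check rule) that this type owns instances of T, even though T is
// otherwise only reachable through a raw pointer.
pub struct MyVec<T> {
    ptr: *mut T,
    len: usize,
    cap: usize,
    _owns: PhantomData<T>,
}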

Examples of changes imposed by the Drop-Check Rule

Some cyclic structure is still allowed

Earlier versions of the Drop-Check rule were quite conservative, to the point where cyclic data would be disallowed in many contexts. The Drop-Check rule presented in this RFC was crafted to try to keep many existing useful patterns working.

In particular, cyclic structure is still allowed in many contexts. Here is one concrete example:

use std::cell::Cell;

#[derive(Show)]
struct C<'a> {
    v: Vec<Cell<Option<&'a C<'a>>>>,
}

impl<'a> C<'a> {
    fn new() -> C<'a> {
        C { v: Vec::new() }
    }
}

fn f() {
    let (mut c1, mut c2, mut c3);
    c1 = C::new();
    c2 = C::new();
    c3 = C::new();

    c1.v.push(Cell::new(None));
    c1.v.push(Cell::new(None));
    c2.v.push(Cell::new(None));
    c2.v.push(Cell::new(None));
    c3.v.push(Cell::new(None));
    c3.v.push(Cell::new(None));

    c1.v[0].set(Some(&c2));
    c1.v[1].set(Some(&c3));
    c2.v[0].set(Some(&c2));
    c2.v[1].set(Some(&c3));
    c3.v[0].set(Some(&c1));
    c3.v[1].set(Some(&c2));
}

In this code, each of the nodes { c1, c2, c3 } contains a reference to the two other nodes, and those references are stored in a Vec. Note that all of the bindings are introduced by a single let-statement; this is to accommodate the region inference system which wants to assign a single code extent to the 'a lifetime, as discussed in the strictly-outlives section.

Even though Vec<T> itself is defined as implementing Drop, it puts no bounds on T, and therefore that Drop implementation is ignored by the Drop-Check rule.

Directly mixing cycles and Drop is rejected

The Sneetch example illustrates a scenario where borrowed data is dropped while there is still an outstanding borrow that will be accessed by a destructor. In that particular example, one can easily reorder the bindings to ensure that the StarOffMachine outlives all of the sneetches.

But there are other examples that have no such resolution. In particular, graph-structured data where the destructor for each node accesses the neighboring nodes in the graph; this simply cannot be done soundly, because when there are cycles, there is no legal order in which to drop the nodes.

(At least, we cannot do it soundly without imperatively removing a node from the graph as the node is dropped; but we are not going to attempt to support verifying such an invariant as part of this RFC; to my knowledge it is not likely to be feasible with type-checking based static analyses).

In any case, we can easily show some code that will now start to be rejected due to the Drop-Check rule: we take the same C<'a> example of cyclic structure given above, but we now attach a Drop implementation to C<'a>:

use std::cell::Cell;

#[derive(Show)]
struct C<'a> {
    v: Vec<Cell<Option<&'a C<'a>>>>,
}

impl<'a> C<'a> {
    fn new() -> C<'a> {
        C { v: Vec::new() }
    }
}

// (THIS IS NEW)
impl<'a> Drop for C<'a> {
    fn drop(&mut self) { }
}

fn f() {
    let (mut c1, mut c2, mut c3);
    c1 = C::new();
    c2 = C::new();
    c3 = C::new();

    c1.v.push(Cell::new(None));
    c1.v.push(Cell::new(None));
    c2.v.push(Cell::new(None));
    c2.v.push(Cell::new(None));
    c3.v.push(Cell::new(None));
    c3.v.push(Cell::new(None));

    c1.v[0].set(Some(&c2));
    c1.v[1].set(Some(&c3));
    c2.v[0].set(Some(&c2));
    c2.v[1].set(Some(&c3));
    c3.v[0].set(Some(&c1));
    c3.v[1].set(Some(&c2));
}

Now the addition of impl<'a> Drop for C<'a> changes the results entirely.

The Drop-Check rule sees the newly added impl<'a> Drop for C<'a>, which means that for every value of type C<'a>, 'a must strictly outlive the value. But in the binding let (mut c1, mut c2, mut c3), all three bindings are assigned the same type C<'scope_of_c1_c2_and_c3>, where 'scope_of_c1_c2_and_c3 does not strictly outlive any of the three. Therefore this code will be rejected.

(Note: it is irrelevant that the Drop implementation is a no-op above. The analysis does not care what the contents of that code are; it solely cares about the public API presented by the type to its clients. After all, the Drop implementation for C<'a> could be rewritten tomorrow to contain code that accesses the neighboring nodes.)

Some temporaries need to be given names

Due to the way that rustc implements the strictly-outlives relation in terms of code-extents, the analysis does not know in an expression like foo().bar().quux() in what order the temporary values foo() and foo().bar() will be dropped.

Therefore, the Drop-Check rule sometimes forces one to rewrite the code so that it is apparent to the compiler that the value from foo() will definitely outlive the value from foo().bar().

Thus, on occasion one is forced to rewrite:

let q = foo().bar().quux();
...

as:

let foo = foo();
let q = foo.bar().quux();
...

or even sometimes as:

let foo = foo();
let bar = foo.bar();
let q = bar.quux();
...

depending on the types involved.

In practice, pnkfelix saw this arise most often with code like this:

for line in old_io::stdin().lock().lines() {
    ...
}

Here, the result of stdin() is a StdinReader, which holds a RaceBox in a Mutex behind an Arc. The result of the lock() method is a StdinReaderGuard<'a>, which owns a MutexGuard<'a, RaceBox>. The MutexGuard has a Drop implementation that is parametric in 'a; thus, the Drop-Check rule insists that the lifetime assigned to 'a strictly outlive the MutexGuard.

So, under this RFC, we rewrite the code like so:

let stdin = old_io::stdin();
for line in stdin.lock().lines() {
    ...
}

(pnkfelix acknowledges that this rewrite is unfortunate. Potential future work would be to further revise the code extent system so that the compiler knows that the temporary from stdin() will outlive the temporary from stdin().lock(). However, such a change to the code extents could have unexpected fallout, analogous to the fallout that was associated with Rust PR 21657.)

Mixing acyclic structure and Drop is sometimes rejected

This is an example of sound code, accepted today, that is unfortunately rejected by the Drop-Check rule (at least in pnkfelix’s prototype):

#![feature(unsafe_destructor)]

use std::cell::Cell;

#[derive(Show)]
struct C<'a> {
    f: Cell<Option<&'a C<'a>>>,
}

impl<'a> C<'a> {
    fn new() -> C<'a> {
        C { f: Cell::new(None), }
    }
}

// force dropck to care about C<'a>
#[unsafe_destructor]
impl<'a> Drop for C<'a> {
    fn drop(&mut self) { }
}

fn f() {
    let c2;
    let mut c1;

    c1 = C::new();
    c2 = C::new();

    c1.f.set(Some(&c2));
}

fn main() {
    f();
}

In principle this should work, since c1 and c2 are assigned to distinct code extents, and c1 will be dropped before c2. However, in the prototype, the region inference system is determining that the lifetime 'a in &'a C<'a> (from the c1.f.set(Some(&c2)); statement) needs to cover the whole block, rather than just the block remainder extent that is actually covered by the let c2;.

(This may just be a bug somewhere in the prototype, but for the time being pnkfelix is going to assume that it will be a bug that this RFC is forced to live with indefinitely.)

Unsound APIs need to be revised or removed entirely

While the Drop-Check rule is designed to ensure that safe Rust code is sound in its use of destructors, it cannot assure us that unsafe code is sound. It is the responsibility of the author of unsafe code to ensure it does not perform unsound actions; thus, we need to audit our own APIs to ensure that the standard library is not providing functionality that circumvents the Drop-Check rule.

The most obvious instance of this is the arena crate: in particular, one can use an instance of arena::Arena to create cyclic graph structure where each node’s destructor accesses (via &_ references) its neighboring nodes.

Here is a version of our running C<'a> example (where we now do something interesting in the destructor for C<'a>) that demonstrates the problem:

Example:

extern crate arena;

use std::cell::Cell;

#[derive(Show)]
struct C<'a> {
    name: &'static str,
    v: Vec<Cell<Option<&'a C<'a>>>>,
    usable: bool,
}

impl<'a> Drop for C<'a> {
    fn drop(&mut self) {
        println!("dropping {}", self.name);
        for neighbor in self.v.iter().map(|v|v.get()) {
            if let Some(neighbor) = neighbor {
                println!("  {} checking neighbor {}",
                         self.name, neighbor.name);
                assert!(neighbor.usable);
            }
        }
        println!("done dropping {}", self.name);
        self.usable = false;

    }
}

impl<'a> C<'a> {
    fn new(name: &'static str) -> C<'a> {
        C { name: name, v: Vec::new(), usable: true }
    }
}

fn f() {
    use arena::Arena;
    let arena = Arena::new();
    let (c1, c2, c3);

    c1 = arena.alloc(|| C::new("c1"));
    c2 = arena.alloc(|| C::new("c2"));
    c3 = arena.alloc(|| C::new("c3"));

    c1.v.push(Cell::new(None));
    c1.v.push(Cell::new(None));
    c2.v.push(Cell::new(None));
    c2.v.push(Cell::new(None));
    c3.v.push(Cell::new(None));
    c3.v.push(Cell::new(None));

    c1.v[0].set(Some(c2));
    c1.v[1].set(Some(c3));
    c2.v[0].set(Some(c2));
    c2.v[1].set(Some(c3));
    c3.v[0].set(Some(c1));
    c3.v[1].set(Some(c2));
}

Calling f() results in the following printout:

dropping c3
  c3 checking neighbor c1
  c3 checking neighbor c2
done dropping c3
dropping c1
  c1 checking neighbor c2
  c1 checking neighbor c3
thread '<main>' panicked at 'assertion failed: neighbor.usable', ../src/test/compile-fail/dropck_untyped_arena_cycle.rs:19

This is unsound. It should not be possible to express such a scenario without using unsafe code.

This RFC suggests that we revise the Arena API by adding a phantom lifetime parameter to its type, and bound the values the arena allocates by that phantom lifetime, like so:

pub struct Arena<'longer_than_self> {
    _invariant: marker::InvariantLifetime<'longer_than_self>,
    ...
}

impl<'longer_than_self> Arena<'longer_than_self> {
    pub fn alloc<T:'longer_than_self, F>(&self, op: F) -> &mut T
        where F: FnOnce() -> T {
        ...
    }
}

Admittedly, this is a severe limitation, since it forces the data allocated by the Arena to store only references to data that strictly outlives the arena, regardless of whether the allocated data itself even has a destructor. (I.e., Arena would become much weaker than TypedArena when attempting to work with cyclic structures). (pnkfelix knows of no way to fix this without adding further extensions to the language, e.g. some way to express “this type’s destructor accesses none of its borrowed data”, which is out of scope for this RFC.)

Alternatively, we could just deprecate the Arena API (which is not marked as stable anyway).

The example given here can be adapted to other kinds of backing storage structures, in order to double-check whether the API is likely to be sound or not. For example, the arena::TypedArena<T> type appears to be sound (as long as it carries PhantomData<T> just like Vec<T> does). In particular, when one ports the above example to use TypedArena instead of Arena, it is statically rejected by rustc.

The final goal: remove #[unsafe_destructor]

Once all of the above pieces have landed, lifetime- and type-parameterized Drop will be safe, and thus we will be able to remove #[unsafe_destructor]!

Drawbacks

  • The Drop-Check rule is a little complex, and does disallow some sound code that would compile today.

  • The change proposed in this RFC places restrictions on uses of types with attached destructors, but provides no way for a type Foo<'a> to state as part of its public interface that its drop implementation will not read from any borrowed data of lifetime 'a. (Extending the language with such a feature is potential future work, but is out of scope for this RFC.)

  • Some useful interfaces are going to be disallowed by this RFC. For example, the RFC recommends that the current arena::Arena be revised or simply deprecated, due to its unsoundness. (If desired, we could add an UnsafeArena that continues to support the current Arena API with the caveat that its users need to manually enforce the constraint that the destructors do not access data that has been already dropped. But again, that decision is out of scope for this RFC.)

Alternatives

We considered simpler versions of the Drop-Check rule; in particular, an earlier version of it simply said that if the type of v owns any type D that implements Drop, then for any lifetime 'a that D refers to, 'a must strictly outlive the scope of v, because the destructor for D might hypothetically access borrowed data of lifetime 'a.

  • This rule is simpler in the sense that it is more obviously sound.

  • But this rule disallowed far more code; e.g. the Cyclic structure still allowed example was rejected under this more naive rule, because C<'a> owns D = Vec<Cell<Option<&'a C<'a>>>>, and this particular D refers to 'a.


Sticking with the current #[unsafe_destructor] approach to lifetime- and type-parametric types that implement Drop is not really tenable; we need to do something (and we have been planning to do something like this RFC for over a year).

Unresolved questions

  • Is the Drop-Check rule provably sound? pnkfelix has based his argument on informal reasoning about parametricity, but it would be good to put forth a more formal argument. (And in the meantime, pnkfelix invites the reader to try to find holes in the rule, preferably with concrete examples that can be fed into the prototype.)

  • How much can covariance help with some of the lifetime issues?

    See in particular Rust Issue 21198 “new scoping rules for safe dtors may benefit from variance on type params”

Before adding Condition (B.) to the Drop-Check Rule, it seemed like enabling covariance in more standard library types was going to be very important for landing this work. And even now, it is possible that covariance could still play an important role. But nonetheless, there are some APIs whose current form is fundamentally incompatible with covariance; e.g. the current TypedArena<T> API is fundamentally invariant with respect to T.

Appendices

Appendix A: Why and when would Drop read from borrowed data

Here is a story, about two developers, Julia and Kurt, and the code they hacked on.

Julia inherited some code, and it is misbehaving. It appears like key/value entries that the code inserts into the standard library’s HashMap are not always retrievable from the map. Julia’s current hypothesis is that something is causing the keys’ computed hash codes to change dynamically, sometime after the entries have been inserted into the map (but it is not obvious when or if this change occurs, nor what its source might be). Julia thinks this hypothesis is plausible, but does not want to audit all of the key variants for possible causes of hash code corruption until after she has hard evidence confirming the hypothesis.

Julia writes some code that walks a hash map’s internals and checks that all of the keys produce a hash code that is consistent with their location in the map. However, since it is not clear when the keys’ hash codes are changing, it is not clear where in the overall code base she should add such checks. (The hash map is sufficiently large that she cannot simply add calls to do this consistency check everywhere.)

However, there is one spot in the control flow that is a clear contender: if the check is run right before the hash map is dropped, then that would surely be sometime after the hypothesized corruption had occurred. In other words, a destructor for the hash map seems like a good place to start; Julia could make her own local copy of the hash map library and add this check to a impl<K,V,S> Drop for HashMap<K,V,S> { ... } implementation.

In this new destructor code, Julia needs to invoke the hash-code method on K. So she adds the bound where K: Eq + Hash<H> to her HashMap and its Drop implementation, along with the corresponding code to walk the table’s entries and check that the hash codes for all the keys matches their position in the table.

Using this, Julia manages to confirm her hypothesis (yay). And since it was a reasonable amount of effort to do this experiment, she puts this variation of HashMap up on crates.io, calling it the CheckedHashMap type.

Sometime later, Kurt pulls a copy of CheckedHashMap off of crates.io, and he happens to write some code that looks like this:

fn main() {
    #[derive(PartialEq, Eq, Hash, Debug)]
    struct Key<'a> { name: &'a str }

    {
        let (key, mut map, name) : (Key, CheckedHashMap<&Key, String>, String);
        name = format!("k1");
        map = CheckedHashMap::new();
        key = Key { name: &*name };
        map.map.insert(&key, format!("Value for k1"));
    }
}

And, kaboom: when the map goes out of scope, the destructor for CheckedHashMap attempts to compute a hashcode on a reference to key that may not still be valid, and even if key is still valid, it holds a reference to a slice of name that likewise may not still be valid.

This illustrates a case where one might legitimately mix destructor code with borrowed data. (Is this example any less contrived than the Sneetch example? That is in the eye of the beholder.)

Appendix B: strictly-outlives details

The rest of this section gets into some low-level details of parts of how rustc is implemented, largely because the changes described here do have an impact on what results the rustc region inference system produces (or fails to produce). It serves mostly to explain (1.) why Rust PR 21657 was implemented, and (2.) why one may sometimes see indecipherable region-inference errors.

Review: Code Extents

(Nothing here is meant to be new; it’s just providing context for the next subsection.)

Every Rust expression evaluates to a value V that is either placed into some location with an associated lifetime such as 'l, or is associated with a block of code that statically delimits V’s runtime extent (i.e. we know from the function’s text where V will be dropped). In the rustc source, the blocks of code are sometimes called “scopes” and sometimes “code extents”; I will try to stick to the latter term here, since the word “scope” is terribly overloaded.

Currently, the code extents in Rust are arranged into a tree hierarchy structured similarly to the abstract syntax tree; for any given code extent, the compiler can ask for its parent in this hierarchy.

Every Rust expression E has an associated “terminating extent” somewhere in its chain of parent code extents; temporary values created during the execution of E are stored at stack locations managed by E’s terminating extent. When we hit the end of the terminating extent, all such temporaries are dropped.

An example of a terminating extent: in a let-statement like:

let <pat> = <expr>;

the terminating extent of <expr> is the let-statement itself. So in an example like:

let a1 = input.f().g();
...

there is a temporary value returned from input.f(), and it will live until the end of the let statement, but not into the subsequent code represented by .... (The value resulting from input.f().g(), on the other hand, will be stored in a1 and lives until the end of the block enclosing the let statement.)

(It is not important to this RFC to know the full set of rules dictating which parent expressions are deemed terminating extents; we just will assume that these things do exist.)
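
As a small runnable illustration (the Noisy type is invented for demonstration), the temporary created in a let initializer is dropped at the end of the let statement, its terminating extent:

struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) { println!("dropping {}", self.0); }
}

impl Noisy {
    fn value(&self) -> i32 { 42 }
}

fn main() {
    // The `Noisy` temporary's terminating extent is the let statement,
    // so "dropping temp" prints before "after let".
    let a1 = Noisy("temp").value();
    println!("after let");
    println!("a1 = {}", a1);
}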

For any given code extent S, the parent code extent P of S, if it exists, potentially holds bits of code that will execute after S is done. Any cleanup code for any values assigned to P will only run after we have finished with all code associated with S.

A problem with 1.0 alpha code extents

So, with the above established, we have a hint at how to express that a lifetime 'a needs to strictly outlive a particular code extent S: simply say that 'a needs to live at least as long as P.

However, this is a little too simplistic, at least for the Rust compiler circa Rust 1.0 alpha. The main problem is that all the bindings established by let statements in a block are assigned the same code extent.

This, combined with our simplistic definition, yields real problems. For example, in:

{
    use std::fmt;
    #[derive(Debug)] struct DropLoud<T:fmt::Debug>(&'static str, T);
    impl<T:fmt::Debug> Drop for DropLoud<T> {
        fn drop(&mut self) { println!("dropping {}:{:?}", self.0, self.1); }
    }

    let c1 = DropLoud("c1", 1);
    let c2 = DropLoud("c2", &c1);
}

In principle, the code above is legal: c2 will be dropped before c1 is, and thus it is okay that c2 holds a borrowed reference to c1 that will be read when c2 is dropped (indirectly via the fmt::Debug implementation).

However, with the structure of code extents as of Rust 1.0 alpha, c1 and c2 are both given the same code extent: that of the block itself. Thus in that context, this definition of “strictly outlives” indicates that c1 does not strictly outlive c2, because c1 does not live at least as long as the parent of the block; it only lives until the end of the block itself.

This illustrates why “all the bindings established by let statements in a block are assigned the same code extent” is a problem.

Block Remainder Code Extents

The solution proposed here (motivated by experience with the prototype) is to introduce finer-grained code extents. This solution is essentially Rust PR 21657, which has already landed in rustc. (That is in part why this is merely an appendix, rather than part of the body of the RFC itself.)

The code extents remain in a tree-hierarchy, but there are now extra entries in the tree, which provide the foundation for a more precise “strictly outlives” relation.

We introduce a new code extent, called a “block remainder” extent, for every let statement in a block, representing the suffix of the block covered by the bindings in that let statement.

For example, given { let (a, b) = EXPR_1; let c = EXPR_2; ... }, which previously had a code extent structure like:

{ let (a, b) = EXPR_1; let c = EXPR_2; ... }
               +----+          +----+
  +------------------+ +-------------+
+------------------------------------------+

so the parent extent of each let statement was the whole block.

But under the new rules, there are two new block remainder extents introduced, with this structure:

{  let (a, b) = EXPR_1;  let c = EXPR_2; ...  }
                +----+           +----+
   +------------------+  +-------------+
                        +-------------------+   <-- new: block remainder 2
  +------------------------------------------+  <-- new: block remainder 1
+---------------------------------------------+

The first let-statement introduces a block remainder extent that covers the lifetime for a and b. The second let-statement introduces a block remainder extent that covers the lifetime for c.

Each let-statement continues to be the terminating extent for its initializer expression. But now, the parent of the extent of the second let statement is a block remainder extent (“block remainder 2”), and, importantly, the parent of block remainder 2 is another block remainder extent (“block remainder 1”). This way, we precisely represent the lifetimes of the named values bound by each let statement, and know that a and b both strictly outlive c as well as the temporary values created during evaluation of EXPR_2. Likewise, c strictly outlives the bindings and temporaries created in the ... that follows it.

Why stop at let-statements?

This RFC does not propose that we attempt to go further and track the order of destruction of the values bound by a single let statement.

Such an experiment could be made part of future work, but for now, we just continue to assign a and b to the same scope; the compiler does not attempt to reason about what order they will be dropped in, and thus we cannot for example reference data borrowed from a in any destructor code for b.

The main reason that we do not want to attempt to produce even finer grain scopes, at least not right now, is that there are scenarios where it is important to be able to assign the same region to two distinct pieces of data; in particular, this often arises when one wants to build cyclic structure, as discussed in Cyclic structure still allowed.

Summary

Add a once function to std::iter to construct an iterator yielding a given value one time, and an empty function to construct an iterator yielding no values.

Motivation

This is a common task when working with iterators. Currently, this can be done in many ways, most of which are unergonomic, do not work for all types (e.g. requiring Copy/Clone), or both. once and empty are simple to implement, simple to use, and simple to understand.

Detailed design

once will return a new struct, std::iter::Once<T>, implementing Iterator. Internally, Once<T> is simply a newtype wrapper around std::option::IntoIter<T>. The actual body of once is thus trivial:

pub struct Once<T>(std::option::IntoIter<T>);

pub fn once<T>(x: T) -> Once<T> {
	Once(
		Some(x).into_iter()
	)
}

empty is similar:

pub struct Empty<T>(std::option::IntoIter<T>);

pub fn empty<T>() -> Empty<T> {
	Empty(
		None.into_iter()
	)
}

These wrapper structs exist to allow future backwards-compatible changes, and hide the implementation.
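
For illustration, expected usage looks something like the following (assuming the functions land in std::iter as proposed):

use std::iter;

fn main() {
    // Prepend a single value to another iterator without requiring
    // the item type to be Copy or Clone.
    let lines = iter::once(String::from("header"))
        .chain(vec![String::from("body")].into_iter());
    assert_eq!(lines.count(), 2);

    // An iterator yielding no values of a given type.
    let nothing: iter::Empty<u32> = iter::empty();
    assert_eq!(nothing.count(), 0);
}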

Drawbacks

Although this is a tiny amount of code, it still comes with testing, maintenance, and similar costs.

It’s already possible to do this via Some(x).into_iter(), std::iter::repeat(x).take(1) (for x: Clone), vec![x].into_iter(), or various contraptions involving iterate.

The existence of the Once struct is not technically necessary.

Alternatives

There are already many, many alternatives to this: Option::into_iter(), iterate, etc.

The Once struct could be omitted, with std::option::IntoIter used instead.

Unresolved questions

Naturally, once is fairly bikesheddable. one_time? repeat_once?

Are versions of once that return &T/&mut T desirable?

Summary

Add type ascription to expressions. (An earlier version of this RFC covered type ascription in patterns too; that part has been postponed.)

Type ascription on expressions has already been implemented.

See also discussion on #354 and rust issue 10502.

Motivation

Type inference is imperfect. It is often useful to help type inference by annotating a sub-expression with a type. Currently, this is only possible by extracting the sub-expression into a variable using a let statement and/or giving a type for a whole expression or pattern. This is unergonomic, and sometimes impossible due to lifetime issues: a variable has the lifetime of its enclosing scope, but a sub-expression’s lifetime is typically limited to the nearest semicolon.

Typical use cases are where a function’s return type is generic (e.g., collect) and where we want to force a coercion.

Type ascription can also be used for documentation and debugging - where it is unclear from the code which type will be inferred, type ascription can be used to precisely communicate expectations to the compiler or other programmers.

By allowing type ascription in more places, we remove the inconsistency that type ascription is currently only allowed on top-level patterns.

Examples:

(These examples are somewhat simplified; in some of these cases there are better solutions with the current syntax.)

Generic return type:

// Current.
let z = if ... {
    let x: Vec<_> = foo.enumerate().collect();
    x
} else {
    ...
};

// With type ascription.
let z = if ... {
    foo.enumerate().collect(): Vec<_>
} else {
    ...
};

Coercion:

fn foo<T>(a: T, b: T) { ... }

// Current.
let x = [1u32, 2, 4];
let y = [3u32];
...
let x: &[_] = &x;
let y: &[_] = &y;
foo(x, y);

// With type ascription.
let x = [1u32, 2, 4];
let y = [3u32];
...
foo(x: &[_], y: &[_]);

Generic return type and coercion:

// Current.
let x: T = {
    let temp: U<_> = foo();
    temp
};

// With type ascription.
let x: T = foo(): U<_>;

Detailed design

The syntax of expressions is extended with type ascription:

e ::= ... | e: T

where e is an expression and T is a type. Type ascription has the same precedence as explicit coercions using as.

When type checking e: T, e must have type T. The “must have type” test includes implicit coercions and subtyping, but not explicit coercions. T may be any well-formed type.

At runtime, type ascription is a no-op, unless an implicit coercion was used in type checking, in which case the dynamic semantics of a type ascription expression are exactly those of the implicit coercion.

@eddyb has implemented the expressions part of this RFC (see the PR).

This feature should land behind the ascription feature gate.

coercion and as vs :

A downside of type ascription is the overlap with explicit coercions (aka casts, the as operator). To the programmer, type ascription makes implicit coercions explicit (however, the compiler makes no distinction between coercions due to type ascription and other coercions). In RFC 401, it is proposed that all valid implicit coercions are valid explicit coercions. However, that may be too confusing for users, since there is no reason to use type ascription rather than as (if there is some coercion). Furthermore, if programmers do opt to use as as the default whether or not it is required, then as loses its function as a warning sign alerting programmers to a conversion that deserves attention.

To address this I propose two lints which check for: trivial casts and trivial numeric casts. Other than these lints we stick with the proposal from #401 that unnecessary casts will no longer be an error.

A trivial cast is a cast x as T where x has type U and x can be implicitly coerced to T or is already a subtype of T.

A trivial numeric cast is a cast x as T where x has type U and x is implicitly coercible to T or U is a subtype of T, and both U and T are numeric types.
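
For example, under these definitions the following casts would be linted (a sketch of the rules above, not actual rustc output):

fn main() {
    let x: u32 = 5;
    let _ = x as u32;    // trivial numeric cast: `u32` to `u32`

    let arr = [1u8, 2, 3];
    let r = &arr;
    let _ = r as &[u8];  // trivial cast: `&[u8; 3]` coerces to `&[u8]`

    let _ = x as u8;     // not trivial: a truncating numeric cast
}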

Like all lints, these can be customised per-crate by the programmer. Both lints are ‘warn’ by default.

Although this is a somewhat complex scheme, it allows code that works today to work with only minor adjustment, it allows for a backwards compatible path to ‘promoting’ type conversions from explicit casts to implicit coercions, and it allows customisation of a contentious kind of error (especially so in the context of cross-platform programming).

Type ascription and temporaries

There is an implementation choice between treating x: T as an lvalue or rvalue. Note that when an rvalue is used in ‘reference context’ (e.g., the subject of a reference operation), then the compiler introduces a temporary variable. Neither option is satisfactory: if we treat an ascription expression as an lvalue (i.e., no new temporary), then there is potential for unsoundness:

let mut foo: S = ...;
{
    let bar = &mut (foo: T);  // S <: T, no coercion required
    *bar = ... : T;
}
// Whoops, foo has type T, but the compiler thinks it has type S, where potentially T </: S

If we treat ascription expressions as rvalues (i.e., create a temporary in lvalue position), then we don’t have the soundness problem, but we do get the unexpected result that &(x: T) is not in fact a reference to x, but a reference to a temporary copy of x.
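
Today’s Rust already exhibits this behaviour for other rvalue expressions; as an analogy (this example is not from the proposal itself), a block expression in reference context borrows a temporary:

fn main() {
    let x = 5u32;
    // `{ x }` is an rvalue, so `&{ x }` borrows a temporary copy of
    // `x` rather than `x` itself; the two addresses differ.
    let r = &{ x };
    assert_ne!(r as *const u32, &x as *const u32);
}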

The proposed solution is that type ascription expressions inherit their ‘lvalue-ness’ from their underlying expressions. I.e., e: T is an lvalue if e is an lvalue, and an rvalue otherwise. If the type ascription expression is in reference context, then we require the ascribed type to exactly match the type of the expression, i.e., neither subtyping nor coercion is allowed. These reference contexts are as follows (where <expr> is a type ascription expression):

&[mut] <expr>
let ref [mut] x = <expr>
match <expr> { .. ref [mut] x .. => { .. } .. }
<expr>.foo() // due to autoref
<expr> = ...;

Drawbacks

More syntax, another feature in the language.

Interacts poorly with struct initialisers (changing the syntax for struct literals has been discussed and rejected, and raised again on the discuss forum).

If we introduce named arguments in the future, then it would make it more difficult to support the same syntax as field initialisers.

Alternatives

We could do nothing and force programmers to use temporary variables to specify a type. However, this is less ergonomic and has problems with scopes/lifetimes.

Rely on explicit coercions - the current plan (RFC 401) is to allow explicit coercion to any valid type and to use a customisable lint for trivial casts (that is, those given by subtyping, including the identity case). If we allow trivial casts, then we could always use explicit coercions instead of type ascription. However, we would then lose the distinction between implicit coercions, which are safe, and explicit coercions, such as narrowing, which require more programmer attention. This also does not help with patterns.

We could use a different symbol or keyword instead of :, e.g., is.

Unresolved questions

Is the suggested precedence correct?

Should we remove integer suffixes in favour of type ascription?

Style guidelines - should we recommend spacing or parenthesis to make type ascription syntax more easily recognisable?

This RFC was previously approved, but later withdrawn

For details see the summary comment.

Summary

  • Change placement-new syntax from box (<place-expr>) <expr> to in <place-expr> { <block> }.

  • Change box <expr> to an overloaded operator that chooses its implementation based on the expected type.

  • Use unstable traits in core::ops for both operators, so that libstd can provide support for the overloaded operators; the traits are unstable so that the language designers are free to revise the underlying protocol in the future post 1.0.

  • Feature-gate the placement-in syntax via the feature name placement_in_syntax.

  • The overloaded box <expr> will reuse the box_syntax feature name.

(Note that <block> here denotes the interior of a block expression; i.e.:

<block> ::= [ <stmt> ';' | <item> ] * [ <expr> ]

This is the same sense in which the block nonterminal is used in the reference manual.)

Motivation

Goal 1: We want to support an operation analogous to C++’s placement new, as discussed previously in Placement Box RFC PR 470.

Goal 2: We also would like to overload our box syntax so that more types, such as Rc<T> and Arc<T> can gain the benefit of avoiding intermediate copies (i.e. allowing expressions to install their result value directly into the backing storage of the Rc<T> or Arc<T> when it is created).

However, during discussion of Placement Box RFC PR 470, some things became clear:

  • Many syntaxes using the in keyword are superior to box (<place-expr>) <expr> for the operation analogous to placement-new.

    The proposed in-based syntax avoids ambiguities such as having to write box () (<expr>) (or box (alloc::HEAP) (<expr>)) when one wants to surround <expr> with parentheses. It allows the parser to provide clearer error messages when encountering in <place-expr> <expr> (clearer compared to the previous situation with box <place-expr> <expr>).

  • It would be premature for Rust to commit to any particular protocol for supporting placement-in. A number of participants in the discussion of Placement Box RFC PR 470 were unhappy with the baroque protocol, especially since it did not support DST and potential future language changes would allow the protocol proposed there to be significantly simplified.

Therefore, this RFC proposes a middle ground for 1.0: Support the desired syntax, but do not provide stable support for end-user implementations of the operators. The only stable ways to use the overloaded box <expr> or in <place-expr> { <block> } operators will be in tandem with types provided by the stdlib, such as Box<T>.

Detailed design

  • Add traits to core::ops for supporting the new operators. This RFC does not commit to any particular set of traits, since they are not currently meant to be implemented outside of the stdlib. (However, a demonstration of one working set of traits is given in Appendix A.)

    Any protocol that we adopt for the operators needs to properly handle panics; i.e., box <expr> must properly clean up any intermediate state if <expr> panics during its evaluation, and likewise for in <place-expr> { <block> }.

    (See Placement Box RFC PR 470 or Appendix A for discussion on ways to accomplish this.)

  • Change box <expr> from built-in syntax (tightly integrated with Box<T>) into an overloaded-box operator that uses the expected return type to decide what kind of value to create. For example, if Rc<T> is extended with an implementation of the appropriate operator trait, then

    let x: Rc<_> = box format!("Hello");

    could be a legal way to create an Rc<String> without having to invoke the Rc::new function. This will be more efficient for building instances of Rc<T> when T is a large type. (It is also arguably much cleaner syntax to read, regardless of the type T.)

    Note that this change will require end-user code to no longer assume that box <expr> always produces a Box<T>; such code will need to either add a type annotation e.g. saying Box<_>, or will need to call Box::new(<expr>) instead of using box <expr>.

  • Add support for parsing in <place-expr> { <block> } as the basis for the placement operator.

    Remove support for box (<place-expr>) <expr> from the parser.

    Make in <place-expr> { <block> } an overloaded operator that uses the <place-expr> to determine what placement code to run.

    Note: when <place-expr> is just an identifier, <place-expr> { <block> } is not parsed as a struct literal. We accomplish this via the same means that is used e.g. for if expressions: we restrict <place-expr> to not include struct literals (see RFC 92).

  • The only stabilized implementation for the box <expr> operator proposed by this RFC is Box<T>. The question of which other types should support integration with box <expr> is a library design issue and needs to go through the conventions and library stabilization process.

    Similarly, this RFC does not propose any stabilized implementation for the in <place-expr> { <block> } operator. (An obvious candidate for in <place-expr> { <block> } integration would be a Vec::emplace_back method; but again, the choice of which such methods to add is a library design issue, beyond the scope of this RFC.)

    (A sample implementation illustrating how to support the operators on other types is given in Appendix A.)

  • Feature-gate the two syntaxes under separate feature identifiers, so that we have the option of removing the gate for one syntax without the other. (I.e. we already have much experience with non-overloaded box <expr>, but we have nearly no experience with placement-in as described here).

Drawbacks

  • End-users might be annoyed that they cannot add implementations of the overloaded-box and placement-in operators themselves. But such users who want to do such a thing will probably be using the nightly release channel, which will not have the same stability restrictions.

  • The currently-implemented desugaring does not infer that in an expression like box <expr> as Box<Trait>, the use of box <expr> should evaluate to some Box<_>. pnkfelix has found that this is due to a weakness in the compiler itself (Rust PR 22012).

    Likewise, the currently-implemented desugaring does not interact well with the combination of type-inference and implicit coercions to trait objects. That is, when box <expr> is used in a context like this:

    fn foo(_: Box<SomeTrait>) { ... }
    foo(box some_expr());
    

    the type inference system attempts to unify the type Box<SomeTrait> with the return-type of ::protocol::Boxed::finalize(place). This may also be due to weakness in the compiler, but that is not immediately obvious.

    Appendix B has a complete code snippet (using a desugaring much like the one found in the other appendix) that illustrates two cases of interest where this weakness arises.

Alternatives

  • We could keep the box (<place-expr>) <expr> syntax. It is hard to see what the advantage of that is, unless (1.) we can identify many cases of types that benefit from supporting both overloaded-box and placement-in, or unless (2.) we anticipate some integration with box pattern syntax that would motivate using the box keyword for placement.

  • We could use the in (<place-expr>) <expr> syntax. An earlier version of this RFC used this alternative. It is easier to implement on the current code base, but I do not know of any other benefits. (Well, maybe parentheses are less “heavyweight” than curly-braces?)

  • A number of other syntaxes for placement have been proposed in the past; see for example discussion on RFC PR 405 as well as the previous placement RFC.

    The main constraints I want to meet are:

    1. Do not introduce ambiguity into the grammar for Rust
    2. Maintain left-to-right evaluation order (so the place should appear to the left of the value expression in the text).

    But otherwise I am not particularly attached to any single syntax.

    One particular alternative that might placate those who object to placement-in’s box-free form would be: box (in <place-expr>) <expr>.

  • Do nothing. I.e. do not even accept an unstable libstd-only protocol for placement-in and overloaded-box. This would be okay, but unfortunate, since in the past some users have identified intermediate copies to be a source of inefficiency, and proper use of box <expr> and placement-in can help remove intermediate copies.

Unresolved questions

This RFC represents the current plan for box/in. However, in the RFC discussion a number of questions arose, including possible design alternatives that might render the in keyword unnecessary. Before the work in this RFC can be unfeature-gated, these questions should be satisfactorily resolved:

  • Can the type-inference and coercion system of the compiler be enriched to the point where overloaded box and in are seamlessly usable? Or are type-ascriptions unavoidable when supporting overloading?

    In particular, I am assuming here that some amount of current weakness cannot be blamed on any particular details of the sample desugaring.

    (See Appendix B for example code showing weaknesses in rustc of today.)

  • Do we want to change the syntax for in(place) expr / in place { expr }?

  • Do we need in at all, or can we replace it with some future possible feature such as DerefSet or &out etc?

  • Do we want to improve the protocol in some way?

    • Note that the protocol was specifically excluded from this RFC.
    • Support for DST expressions such as box [22, ..count] (where count is a dynamic value)?
    • Protocol making use of more advanced language features?

Appendices

Appendix A: sample operator traits

The goal is to show that code like the following can be made to work in Rust today via appropriate desugarings and trait definitions.

fn main() {
    use std::rc::Rc;

    let mut v = vec![1,2];
    in v.emplace_back() { 3 }; // has return type `()`
    println!("v: {:?}", v); // prints [1,2,3]

    let b4: Box<i32> = box 4;
    println!("b4: {}", b4);

    let b5: Rc<i32> = box 5;
    println!("b5: {}", b5);

    let b6 = in HEAP { 6 }; // return type Box<i32>
    println!("b6: {}", b6);
}

To demonstrate the above, this appendix provides code that runs today; it demonstrates sample protocols for the proposed operators. (The entire code-block below should work when cut-and-pasted into http://play.rust-lang.org.)

#![feature(unsafe_destructor)] // (hopefully unnecessary soon with RFC PR 769)
#![feature(alloc)]

// The easiest way to illustrate the desugaring is by implementing
// it with macros.  So, we will use the macro `in_` for placement-`in`
// and the macro `box_` for overloaded-`box`; you should read
// `in_!( (<place-expr>) <expr> )` as if it were `in <place-expr> { <expr> }`
// and
// `box_!( <expr> )` as if it were `box <expr>`.

// The two macros have been designed to both 1. work with current Rust
// syntax (which in some cases meant avoiding certain associated-item
// syntax that currently causes the compiler to ICE) and 2. infer the
// appropriate code to run based only on either `<place-expr>` (for
// placement-`in`) or on the expected result type (for
// overloaded-`box`).

macro_rules! in_ {
    (($placer:expr) $value:expr) => { {
        let p = $placer;
        let mut place = ::protocol::Placer::make_place(p);
        let raw_place = ::protocol::Place::pointer(&mut place);
        let value = $value;
        unsafe {
            ::std::ptr::write(raw_place, value);
            ::protocol::InPlace::finalize(place)
        }
    } }
}

macro_rules! box_ {
    ($value:expr) => { {
        let mut place = ::protocol::BoxPlace::make_place();
        let raw_place = ::protocol::Place::pointer(&mut place);
        let value = $value;
        unsafe {
            ::std::ptr::write(raw_place, value);
            ::protocol::Boxed::finalize(place)
        }
    } }
}

// Note that while both desugarings are very similar, there are some
// slight differences.  In particular, the placement-`in` desugaring
// uses `InPlace::finalize(place)`, which is a `finalize` method that
// is overloaded based on the `place` argument (the type of which is
// derived from the `<place-expr>` input); on the other hand, the
// overloaded-`box` desugaring uses `Boxed::finalize(place)`, which is
// a `finalize` method that is overloaded based on the expected return
// type. Thus, the determination of which `finalize` method to call is
// derived from different sources in the two desugarings.

// The above desugarings refer to traits in a `protocol` module; these
// are the traits that would be put into `std::ops`, and are given
// below.

mod protocol {

/// Both `in PLACE { BLOCK }` and `box EXPR` desugar into expressions
/// that allocate an intermediate "place" that holds uninitialized
/// state.  The desugaring evaluates EXPR, and writes the result at
/// the address returned by the `pointer` method of this trait.
///
/// A `Place` can be thought of as a special representation for a
/// hypothetical `&uninit` reference (which Rust cannot currently
/// express directly). That is, it represents a pointer to
/// uninitialized storage.
///
/// The client is responsible for two steps: First, initializing the
/// payload (it can access its address via `pointer`). Second,
/// converting the agent to an instance of the owning pointer, via the
/// appropriate `finalize` method (see the `InPlace` trait).
///
/// If evaluating EXPR fails, then it is up to the destructor for the
/// implementation of Place to clean up any intermediate state
/// (e.g. deallocate box storage, pop a stack, etc).
pub trait Place<Data: ?Sized> {
    /// Returns the address where the input value will be written.
    /// Note that the data at this address is generally uninitialized,
    /// and thus one should use `ptr::write` for initializing it.
    fn pointer(&mut self) -> *mut Data;
}

/// Interface to implementations of  `in PLACE { BLOCK }`.
///
/// `in PLACE { BLOCK }` effectively desugars into:
///
/// ```
/// let p = PLACE;
/// let mut place = Placer::make_place(p);
/// let raw_place = Place::pointer(&mut place);
/// let value = { BLOCK };
/// unsafe {
///     std::ptr::write(raw_place, value);
///     InPlace::finalize(place)
/// }
/// ```
///
/// The type of `in PLACE { BLOCK }` is derived from the type of `PLACE`;
/// if the type of `PLACE` is `P`, then the final type of the whole
/// expression is `P::Place::Owner` (see the `InPlace` and `Boxed`
/// traits).
///
/// Values for types implementing this trait usually are transient
/// intermediate values (e.g. the return value of `Vec::emplace_back`)
/// or `Copy`, since the `make_place` method takes `self` by value.
pub trait Placer<Data: ?Sized> {
    /// `Place` is the intermediate agent guarding the
    /// uninitialized state for `Data`.
    type Place: InPlace<Data>;

    /// Creates a fresh place from `self`.
    fn make_place(self) -> Self::Place;
}

/// Specialization of `Place` trait supporting `in PLACE { BLOCK }`.
pub trait InPlace<Data: ?Sized>: Place<Data> {
    /// `Owner` is the type of the end value of `in PLACE { BLOCK }`
    ///
    /// Note that when `in PLACE { BLOCK }` is solely used for
    /// side-effecting an existing data-structure,
    /// e.g. `Vec::emplace_back`, then `Owner` need not carry any
    /// information at all (e.g. it can be the unit type `()` in that
    /// case).
    type Owner;

    /// Converts self into the final value, shifting
    /// deallocation/cleanup responsibilities (if any remain), over to
    /// the returned instance of `Owner` and forgetting self.
    unsafe fn finalize(self) -> Self::Owner;
}

/// Core trait for the `box EXPR` form.
///
/// `box EXPR` effectively desugars into:
///
/// ```
/// let mut place = BoxPlace::make_place();
/// let raw_place = Place::pointer(&mut place);
/// let value = $value;
/// unsafe {
///     ::std::ptr::write(raw_place, value);
///     Boxed::finalize(place)
/// }
/// ```
///
/// The type of `box EXPR` is supplied from its surrounding
/// context; in the above expansion, the result type `T` is used
/// to determine which implementation of `Boxed` to use, and that
/// `<T as Boxed>` in turn determines which
/// implementation of `BoxPlace` to use, namely:
/// `<<T as Boxed>::Place as BoxPlace>`.
pub trait Boxed {
    /// The kind of data that is stored in this kind of box.
    type Data;  /* (`Data` unused b/c cannot yet express below bound.) */
    type Place; /* should be bounded by BoxPlace<Self::Data> */

    /// Converts filled place into final owning value, shifting
    /// deallocation/cleanup responsibilities (if any remain), over to
    /// returned instance of `Self` and forgetting `filled`.
    unsafe fn finalize(filled: Self::Place) -> Self;
}

/// Specialization of `Place` trait supporting `box EXPR`.
pub trait BoxPlace<Data: ?Sized> : Place<Data> {
    /// Creates a globally fresh place.
    fn make_place() -> Self;
}

} // end of `mod protocol`

// Next, we need to see sample implementations of these traits.
// First, `Box<T>` needs to support overloaded-`box`: (Note that this
// is not the desired end implementation; e.g.  the `BoxPlace`
// representation here is less efficient than it could be. This is
// just meant to illustrate that an implementation *can* be made;
// i.e. that the overloading *works*.)
//
// Also, just for kicks, I am throwing in `in HEAP { <block> }` support,
// though I do not think that needs to be part of the stable libstd.

struct HEAP;

mod impl_box_for_box {
    use protocol as proto;
    use std::mem;
    use super::HEAP;

    struct BoxPlace<T> { fake_box: Option<Box<T>> }

    fn make_place<T>() -> BoxPlace<T> {
        let t: T = unsafe { mem::zeroed() };
        BoxPlace { fake_box: Some(Box::new(t)) }
    }

    unsafe fn finalize<T>(mut filled: BoxPlace<T>) -> Box<T> {
        let mut ret = None;
        mem::swap(&mut filled.fake_box, &mut ret);
        ret.unwrap()
    }

    impl<T> proto::Placer<T> for HEAP {
        type Place = BoxPlace<T>;
        fn make_place(self) -> BoxPlace<T> { make_place() }
    }

    impl<T> proto::Place<T> for BoxPlace<T> {
        fn pointer(&mut self) -> *mut T {
            match self.fake_box {
                Some(ref mut b) => &mut **b as *mut T,
                None => panic!("impossible"),
            }
        }
    }

    impl<T> proto::BoxPlace<T> for BoxPlace<T> {
        fn make_place() -> BoxPlace<T> { make_place() }
    }

    impl<T> proto::InPlace<T> for BoxPlace<T> {
        type Owner = Box<T>;
        unsafe fn finalize(self) -> Box<T> { finalize(self) }
    }

    impl<T> proto::Boxed for Box<T> {
        type Data = T;
        type Place = BoxPlace<T>;
        unsafe fn finalize(filled: BoxPlace<T>) -> Self { finalize(filled) }
    }
}

// Second, it might be nice if `Rc<T>` supported overloaded-`box`.
//
// (Note again that this may not be the most efficient implementation;
// it is just meant to illustrate that an implementation *can* be
// made; i.e. that the overloading *works*.)
 
mod impl_box_for_rc {
    use protocol as proto;
    use std::mem;
    use std::rc::{self, Rc};

    struct RcPlace<T> { fake_box: Option<Rc<T>> }

    impl<T> proto::Place<T> for RcPlace<T> {
        fn pointer(&mut self) -> *mut T {
            if let Some(ref mut b) = self.fake_box {
                if let Some(r) = rc::get_mut(b) {
                    return r as *mut T
                }
            }
            panic!("impossible");
        }
    }

    impl<T> proto::BoxPlace<T> for RcPlace<T> {
        fn make_place() -> RcPlace<T> {
            unsafe {
                let t: T = mem::zeroed();
                RcPlace { fake_box: Some(Rc::new(t)) }
            }
        }
    }

    impl<T> proto::Boxed for Rc<T> {
        type Data = T;
        type Place = RcPlace<T>;
        unsafe fn finalize(mut filled: RcPlace<T>) -> Self {
            let mut ret = None;
            mem::swap(&mut filled.fake_box, &mut ret);
            ret.unwrap()
        }
    }
}

// Third, we want something to demonstrate placement-`in`. Let us use
// `Vec::emplace_back` for that:

mod impl_in_for_vec_emplace_back {
    use protocol as proto;

    use std::mem;

    struct VecPlacer<'a, T:'a> { v: &'a mut Vec<T> }
    struct VecPlace<'a, T:'a> { v: &'a mut Vec<T> }

    pub trait EmplaceBack<T> { fn emplace_back(&mut self) -> VecPlacer<T>; }

    impl<T> EmplaceBack<T> for Vec<T> {
        fn emplace_back(&mut self) -> VecPlacer<T> { VecPlacer { v: self } }
    }

    impl<'a, T> proto::Placer<T> for VecPlacer<'a, T> {
        type Place = VecPlace<'a, T>;
        fn make_place(self) -> VecPlace<'a, T> { VecPlace { v: self.v } }
    }

    impl<'a, T> proto::Place<T> for VecPlace<'a, T> {
        fn pointer(&mut self) -> *mut T {
            unsafe {
                let idx = self.v.len();
                self.v.push(mem::zeroed());
                &mut self.v[idx]
            }
        }
    }
    impl<'a, T> proto::InPlace<T> for VecPlace<'a, T> {
        type Owner = ();
        unsafe fn finalize(self) -> () {
            mem::forget(self);
        }
    }

    #[unsafe_destructor]
    impl<'a, T> Drop for VecPlace<'a, T> {
        fn drop(&mut self) {
            unsafe {
                mem::forget(self.v.pop())
            }
        }
    }
}

// Okay, that's enough for us to actually demonstrate the syntax!
// Here's our `fn main`:

fn main() {
    use std::rc::Rc;
    // get hacked-in `emplace_back` into scope
    use impl_in_for_vec_emplace_back::EmplaceBack;

    let mut v = vec![1,2];
    in_!( (v.emplace_back()) 3 );
    println!("v: {:?}", v);

    let b4: Box<i32> = box_!( 4 );
    println!("b4: {}", b4);

    let b5: Rc<i32> = box_!( 5 );
    println!("b5: {}", b5);

    let b6 = in_!( (HEAP) 6 ); // return type Box<i32>
    println!("b6: {}", b6);
}

Appendix B: examples of interaction between desugaring, type-inference, and coercion

The following code works with the current version of box syntax in Rust, but needs some sort of type annotation in Rust as it stands today for the desugaring of box to work out.

(The following code uses cfg attributes to make it easy to switch between slight variations on the portions that expose the weakness.)

#![feature(box_syntax)]

// NOTE: Scroll down to "START HERE"

fn main() { }

macro_rules! box_ {
    ($value:expr) => { {
        let mut place = ::BoxPlace::make();
        let raw_place = ::Place::pointer(&mut place);
        let value = $value;
        unsafe { ::std::ptr::write(raw_place, value); ::Boxed::fin(place) }
    } }
}

// (Support traits and impls for examples below.)

pub trait BoxPlace<Data: ?Sized> : Place<Data> { fn make() -> Self; }
pub trait Place<Data: ?Sized> { fn pointer(&mut self) -> *mut Data; }
pub trait Boxed { type Place; fn fin(filled: Self::Place) -> Self; }

struct BP<T: ?Sized> { _fake_box: Option<Box<T>> }

impl<T> BoxPlace<T> for BP<T> { fn make() -> BP<T> { make_pl() } }
impl<T: ?Sized> Place<T> for BP<T> { fn pointer(&mut self) -> *mut T { pointer(self) } }
impl<T: ?Sized> Boxed for Box<T> { type Place = BP<T>; fn fin(x: BP<T>) -> Self { finaliz(x) } }

fn make_pl<T>() -> BP<T> { loop { } }
fn finaliz<T: ?Sized>(mut _filled: BP<T>) -> Box<T> { loop { } }
fn pointer<T: ?Sized>(_p: &mut BP<T>) -> *mut T { loop { } }

// START HERE

pub type BoxFn<'a> = Box<Fn() + 'a>;

#[cfg(all(not(coerce_works1),not(coerce_works2),not(coerce_works3)))]
pub fn coerce<'a, F>(f: F) -> BoxFn<'a> where F: Fn(), F: 'a { box_!( f ) }

#[cfg(coerce_works1)]
pub fn coerce<'a, F>(f: F) -> BoxFn<'a> where F: Fn(), F: 'a {   box  f   }

#[cfg(coerce_works2)]
pub fn coerce<'a, F>(f: F) -> BoxFn<'a> where F: Fn(), F: 'a { let b: Box<_> = box_!( f ); b }

#[cfg(coerce_works3)] // (This one assumes PR 22012 has landed)
pub fn coerce<'a, F>(f: F) -> BoxFn<'a> where F: Fn(), F: 'a { box_!( f ) as BoxFn }


trait Duh { fn duh() -> Self; }

#[cfg(all(not(duh_works1),not(duh_works2)))]
impl<T> Duh for Box<[T]> { fn duh() -> Box<[T]> { box_!( [] ) } }

#[cfg(duh_works1)]
impl<T> Duh for Box<[T]> { fn duh() -> Box<[T]> {   box  [] } }

#[cfg(duh_works2)]
impl<T> Duh for Box<[T]> { fn duh() -> Box<[T]> { let b: Box<[_; 0]> =  box_!( [] ); b } }

You can pass --cfg duh_worksN and --cfg coerce_worksM for suitable N and M to see them compile. Here is a transcript with those attempts, including the cases where type-inference fails in the desugaring.

% rustc /tmp/foo6.rs --cfg duh_works1 --cfg coerce_works1
% rustc /tmp/foo6.rs --cfg duh_works1 --cfg coerce_works2
% rustc /tmp/foo6.rs --cfg duh_works2 --cfg coerce_works1
% rustc /tmp/foo6.rs --cfg duh_works1
/tmp/foo6.rs:10:25: 10:41 error: the trait `Place<F>` is not implemented for the type `BP<core::ops::Fn()>` [E0277]
/tmp/foo6.rs:10         let raw_place = ::Place::pointer(&mut place);
                                        ^~~~~~~~~~~~~~~~
/tmp/foo6.rs:7:1: 14:2 note: in expansion of box_!
/tmp/foo6.rs:37:64: 37:76 note: expansion site
/tmp/foo6.rs:9:25: 9:41 error: the trait `core::marker::Sized` is not implemented for the type `core::ops::Fn()` [E0277]
/tmp/foo6.rs:9         let mut place = ::BoxPlace::make();
                                       ^~~~~~~~~~~~~~~~
/tmp/foo6.rs:7:1: 14:2 note: in expansion of box_!
/tmp/foo6.rs:37:64: 37:76 note: expansion site
error: aborting due to 2 previous errors
% rustc /tmp/foo6.rs                  --cfg coerce_works1
/tmp/foo6.rs:10:25: 10:41 error: the trait `Place<[_; 0]>` is not implemented for the type `BP<[T]>` [E0277]
/tmp/foo6.rs:10         let raw_place = ::Place::pointer(&mut place);
                                        ^~~~~~~~~~~~~~~~
/tmp/foo6.rs:7:1: 14:2 note: in expansion of box_!
/tmp/foo6.rs:52:51: 52:64 note: expansion site
/tmp/foo6.rs:9:25: 9:41 error: the trait `core::marker::Sized` is not implemented for the type `[T]` [E0277]
/tmp/foo6.rs:9         let mut place = ::BoxPlace::make();
                                       ^~~~~~~~~~~~~~~~
/tmp/foo6.rs:7:1: 14:2 note: in expansion of box_!
/tmp/foo6.rs:52:51: 52:64 note: expansion site
error: aborting due to 2 previous errors
% 

The point I want to get across is this: It looks like both of these cases can be worked around via explicit type ascription. Whether or not this is an acceptable cost is a reasonable question.

  • Note that type ascription is especially annoying for the fn duh case, where one needs to keep the array-length encoded in the type consistent with the length of the array generated by the expression. This might motivate extending the use of wildcard _ within type expressions to include wildcard constants, for use in the array length, i.e.: [T; _].

The fn coerce example comes from uses of the fn combine_structure function in the libsyntax crate.

The fn duh example comes from the implementation of the Default trait for Box<[T]>.

Both examples are instances of coercion; the fn coerce example is trying to express a coercion of a Box<Type> to a Box<Trait> (i.e. making a trait-object), and the fn duh example is trying to express a coercion of a Box<[T; k]> (specifically [T; 0]) to a Box<[T]>. Both are going from a pointer-to-sized to a pointer-to-unsized.

(Maybe there is a way to handle both of these cases in a generic fashion; pnkfelix is not sufficiently familiar with how coercions currently interact with type-inference in the first place.)

Summary

Pare back the std::hash module’s API to improve ergonomics of usage and definitions. While an alternative scheme more in line with what Java and C++ have is considered, the current std::hash module will remain largely as-is with modifications to its core traits.

Motivation

There are a number of motivations for this RFC, and each will be explained in turn.

API ergonomics

Today the API of the std::hash module is sometimes considered overly complicated and it may not be pulling its weight. As a recap, the API looks like:

trait Hash<H: Hasher> {
    fn hash(&self, state: &mut H);
}
trait Hasher {
    type Output;
    fn reset(&mut self);
    fn finish(&self) -> Self::Output;
}
trait Writer {
    fn write(&mut self, data: &[u8]);
}

The Hash trait is implemented by various types where the H type parameter signifies the hashing algorithm that the impl block corresponds to. Each Hasher is opaque when taken generically and is frequently paired with a bound of Writer to allow feeding in arbitrary bytes.

The purpose of not having a Writer supertrait on Hasher or on the H type parameter is to allow hashing algorithms that are not byte-stream oriented (e.g. Java-like algorithms). Unfortunately all primitive types in Rust are only defined for Hash<H> where H: Writer + Hasher, essentially forcing a byte-stream oriented hashing algorithm for all hashing.

Some examples of using this API are:

use std::hash::{Hash, Hasher, Writer, SipHasher};

impl<S: Hasher + Writer> Hash<S> for MyType {
    fn hash(&self, s: &mut S) {
        self.field1.hash(s);
        // don't want to hash field2
        self.field3.hash(s);
    }
}

fn sip_hash<T: Hash<SipHasher>>(t: &T) -> u64 {
    let mut s = SipHasher::new_with_keys(0, 0);
    t.hash(&mut s);
    s.finish()
}

Forcing many impl blocks to require Hasher + Writer becomes onerous over time and also requires at least 3 imports for a custom implementation of hash. Taking a generically hashable T is also somewhat cumbersome, especially if the hashing algorithm isn’t known in advance.

Overall the std::hash API is generic enough that its usage is somewhat verbose and becomes tiresome over time to work with. This RFC strives to make this API easier to work with.

Forcing byte-stream oriented hashing

Much of the std::hash API today is oriented around hashing a stream of bytes (blocks of &[u8]). This is not a hard requirement by the API (discussed above), but in practice this is essentially what happens everywhere. This form of hashing is not always the most efficient, although it is often one of the more flexible forms of hashing.

Other languages such as Java and C++ have a hashing API that looks more like:

trait Hash {
    fn hash(&self) -> usize;
}

This expression of hashing is not byte-oriented but is also much less generic (an algorithm for hashing is predetermined by the type itself). This API is encodable with today’s traits as:

struct Slot(u64);

impl Hash<Slot> for MyType {
    fn hash(&self, slot: &mut Slot) {
        *slot = Slot(self.precomputed_hash);
    }
}

impl Hasher for Slot {
    type Output = u64;
    fn reset(&mut self) { *self = Slot(0); }
    fn finish(&self) -> u64 { self.0 }
}

This form of hashing (which is useful for performance sometimes) is difficult to work with primarily because of the frequent bounds on Writer for hashing.

Non-applicability for well-known hashing algorithms

One of the current aspirations for the std::hash module was to be appropriate for hashing algorithms such as MD5, SHA*, etc. The current API has proven inadequate, however, for the primary reason of hashing being so generic. For example it should in theory be possible to calculate the SHA1 hash of a byte slice via:

let data: &[u8] = ...;
let hash = std::hash::hash::<&[u8], Sha1>(data);

There are a number of pitfalls to this approach:

  • Because slices can be hashed generically, each byte will be written individually to the Sha1 state, which is unlikely to be very efficient.
  • Because slices can be hashed generically, the length of the slice is first written to the Sha1 state, which is likely not desired.

The key observation is that the hash values produced in a Rust program are not reproducible outside of Rust. For this reason, APIs for reproducible hashes to be verified elsewhere will explicitly not be considered in the design for std::hash. It is expected that an external crate may wish to provide a trait for these hashing algorithms and it would not be bounded by std::hash::Hash, but instead perhaps a “byte container” of some form.

Detailed design

This RFC considers two possible designs as a replacement of today’s std::hash API. One is a “minor refactoring” of the current API while the other is a much more radical change towards being conservative. This section will propose the minor refactoring change and the other may be found in the Alternatives section.

API

The new API of std::hash would be:

trait Hash {
    fn hash<H: Hasher>(&self, h: &mut H);

    fn hash_slice<H: Hasher>(data: &[Self], h: &mut H) {
        for piece in data {
            piece.hash(h);
        }
    }
}

trait Hasher {
    fn write(&mut self, data: &[u8]);
    fn finish(&self) -> u64;

    fn write_u8(&mut self, i: u8) { ... }
    fn write_i8(&mut self, i: i8) { ... }
    fn write_u16(&mut self, i: u16) { ... }
    fn write_i16(&mut self, i: i16) { ... }
    fn write_u32(&mut self, i: u32) { ... }
    fn write_i32(&mut self, i: i32) { ... }
    fn write_u64(&mut self, i: u64) { ... }
    fn write_i64(&mut self, i: i64) { ... }
    fn write_usize(&mut self, i: usize) { ... }
    fn write_isize(&mut self, i: isize) { ... }
}

This API is quite similar to today’s API, but has a few tweaks:

  • The Writer trait has been removed by folding it directly into the Hasher trait. As part of this move the Hasher trait grew a number of specialized write_foo methods which the primitives will call. This should help recover performance in cases where forcing a byte-oriented stream is costly.

  • The Hasher trait no longer has a reset method.

  • The Hash trait’s type parameter is on the method, not on the trait. This implies that the trait is no longer object-safe, but it is much more ergonomic to operate over generically.

  • The Hash trait now has a hash_slice method to hash a number of instances of Self at once. This allows the Hash implementation for &[u8] (and various other slices of primitives) to translate to a single raw write.

  • The Output associated type was removed in favor of an explicit u64 return from finish.
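
To make the shape of the new Hasher trait concrete, here is a sketch of a simple byte-oriented implementation against it (the FNV-1a algorithm and the FnvHasher name are illustrative choices, not part of this proposal):

struct FnvHasher(u64);

impl FnvHasher {
    // 0xcbf29ce484222325 is the standard 64-bit FNV offset basis.
    fn new() -> FnvHasher { FnvHasher(0xcbf29ce484222325) }
}

impl Hasher for FnvHasher {
    // Only `write` and `finish` must be provided; the `write_foo`
    // methods fall back to byte-oriented defaults.
    fn write(&mut self, data: &[u8]) {
        for &byte in data {
            self.0 ^= byte as u64;
            self.0 = self.0.wrapping_mul(0x100000001b3); // FNV prime
        }
    }
    fn finish(&self) -> u64 { self.0 }
}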

The purpose of this API is to continue to allow APIs to be generic over the hashing algorithm used. This would allow HashMap to continue to use a randomly keyed SipHash as its default algorithm (e.g. continuing to provide DoS protection; more information on this below). An example encoding of the alternative API (proposed below) would look like:

impl Hasher for u64 {
    fn write(&mut self, data: &[u8]) {
        for b in data.iter() { self.write_u8(*b); }
    }
    fn finish(&self) -> u64 { *self }

    fn write_u8(&mut self, i: u8) { *self = combine(*self, i); }
    // and so on...
}
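
By contrast with the example in the motivation section, a manual implementation against the proposed Hash trait needs only one import line and no Writer bound; a sketch:

use std::hash::{Hash, Hasher};

struct MyType { field1: u32, field2: u32, field3: String }

impl Hash for MyType {
    fn hash<H: Hasher>(&self, s: &mut H) {
        self.field1.hash(s);
        // don't want to hash field2
        self.field3.hash(s);
    }
}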

HashMap and HashState

For both this recommendation as well as the alternative below, this RFC proposes removing the HashState trait and Hasher structure (as well as the hash_state module) in favor of the following API:

struct HashMap<K, V, H = DefaultHasher>;

impl<K: Eq + Hash, V> HashMap<K, V> {
    fn new() -> HashMap<K, V, DefaultHasher> {
        HashMap::with_hasher(DefaultHasher::new())
    }
}

impl<K: Eq, V, H: Fn(&K) -> u64> HashMap<K, V, H> {
    fn with_hasher(hasher: H) -> HashMap<K, V, H>;
}

impl<K: Hash> Fn(&K) -> u64 for DefaultHasher {
    fn call(&self, arg: &K) -> u64 {
        let (k1, k2) = self.siphash_keys();
        let mut s = SipHasher::new_with_keys(k1, k2);
        arg.hash(&mut s);
        s.finish()
    }
}

The precise details will be affected based on which design in this RFC is chosen, but the general idea is to move from a custom trait to the standard Fn trait for calculating hashes.

Drawbacks

  • This design is a departure from the precedent set by many other languages. In doing so, however, it is arguably easier to implement Hash as it’s more obvious how to feed in incremental state. We also do not lock ourselves into a particular hashing algorithm in case we need to alternate in the future.

  • Implementations of Hash cannot be specialized and are forced to operate generically over the hashing algorithm provided. This may cause a loss of performance in some cases. Note that this could be remedied by moving the type parameter to the trait instead of the method, but this would lead to a loss in ergonomics for generic consumers of T: Hash.

  • Manual implementations of Hash are somewhat cumbersome still by requiring a separate Hasher parameter which is not necessarily always desired.

  • The API of Hasher is approaching the realm of serialization/reflection and it’s unclear whether its API should grow over time to support more basic Rust types. It would be unfortunate if the Hasher trait approached a full-blown Encoder trait (as rustc-serialize has).

Alternatives

As alluded to in the “Detailed design” section the primary alternative to this RFC, which still improves ergonomics, is to remove the generic-ness over the hashing algorithm.

API

The new API of std::hash would be:

trait Hash {
    fn hash(&self) -> usize;
}

fn combine(a: usize, b: usize) -> usize;

The Writer, Hasher, and SipHasher structures/traits would all be removed from std::hash. This definition is more or less the Rust equivalent of the Java/C++ hashing infrastructure. This API is a vast simplification of what exists today and allows implementations of Hash as well as consumers of Hash to quite ergonomically work with hash values as well as hashable objects.

Note: The choice of usize instead of u64 reflects C++’s choice here as well, but it is quite easy to use one instead of the other.

Hashing algorithm

With this definition of Hash, each type must pre-ordain a particular hash algorithm that it implements. Using an alternate algorithm would require a separate newtype wrapper.
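
For instance, a newtype that opts a string into FNV-1a under this alternative trait might look like the following sketch (the FnvStr name is invented; the constants are the standard 64-bit FNV parameters):

struct FnvStr<'a>(&'a str);

impl<'a> Hash for FnvStr<'a> {
    fn hash(&self) -> usize {
        let mut h: u64 = 0xcbf29ce484222325; // FNV offset basis
        for byte in self.0.bytes() {
            h ^= byte as u64;
            h = h.wrapping_mul(0x100000001b3); // FNV prime
        }
        h as usize // truncated on 32-bit targets
    }
}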

Most implementations would still use #[derive(Hash)] which will leverage hash::combine to combine the hash values of aggregate fields. Manual implementations which only want to hash a select number of fields would look like:

impl Hash for MyType {
    fn hash(&self) -> usize {
        // ignore field2
        (&self.field1, &self.field3).hash()
    }
}

A possible implementation of combine can be found in the boost source code.
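
For reference, a combine in the style of Boost’s hash_combine might look like the following sketch (the magic constant is the one Boost uses; this is not a vetted mixing function):

fn combine(seed: usize, value: usize) -> usize {
    // Mirrors boost::hash_combine:
    //   seed ^= value + 0x9e3779b9 + (seed << 6) + (seed >> 2);
    seed ^ value
        .wrapping_add(0x9e3779b9)
        .wrapping_add(seed << 6)
        .wrapping_add(seed >> 2)
}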

HashMap and DoS protection

Currently one of the features of the standard library’s HashMap implementation is that it by default provides DoS protection through two measures:

  1. A strong hashing algorithm, SipHash 2-4, is used which is fairly difficult to find collisions with.
  2. The SipHash algorithm is randomly seeded for each instance of HashMap. The algorithm is seeded with a 128-bit key.

These two measures ensure that each HashMap is randomly ordered, even if the same keys are inserted in the same order. As a result, it is quite difficult to mount a DoS attack against a HashMap as it is difficult to predict what collisions will happen.

The Hash trait proposed above, however, does not allow SipHash to be implemented generally any more. For example #[derive(Hash)] will no longer leverage SipHash. Additionally, there is no input of state into the hash function, so there is no random state per-HashMap to generate different hashes with.

Denial of service attacks against hash maps are no new phenomenon; they are well known and have been reported in Python, Ruby (on more than one occasion), Perl, and many other languages/frameworks. Rust has taken a fairly proactive step from the start by using a strong and randomly seeded algorithm since HashMap’s inception.

In general the standard library does not provide many security-related guarantees beyond memory safety. For example the new Read::read_to_end function passes a safe buffer of uninitialized data to implementations of read using various techniques to prevent memory safety issues. A DoS attack against a hash map is such a common and well known exploit, however, that this RFC considers it critical to consider the design of Hash and its relationship with HashMap.

Mitigation of DoS attacks

Other languages have mitigated DoS attacks via a few measures:

  • C++ specifies that the return value of hash is not guaranteed to be stable across program executions, allowing for a global salt to be mixed into hashes calculated.
  • Ruby has a global seed which is randomly initialized on startup and is used when hashing blocks of memory (e.g. strings).
  • PHP and Tomcat have added limits to the maximum amount of keys allowed from a POST HTTP request (to limit the size of auto-generated maps). This strategy is not necessarily applicable to the standard library.

It has been claimed, however, that a global seed may only mitigate some of the simplest attacks. The primary downside is that a long-running process may leak the “global seed” through some other form which could compromise maps in that specific process.

One possible route to mitigating these attacks with the Hash trait above could be:

  1. All primitives (integers, etc) are combined with a global random seed which is initialized on first use.
  2. Strings will continue to use SipHash as the default algorithm and the initialization keys will be randomly initialized on first use.

Given the information available about other DoS mitigations in hash maps for other languages, however, it is not clear that this will provide the same level of DoS protection that is available today. For example @DaGenix explains well that we may not be able to provide any form of DoS protection guarantee at all.

Alternative Drawbacks

  • One of the primary drawbacks to the proposed Hash trait is that it is now not possible to select an algorithm that a type should be hashed with. Instead each type’s definition of hashing can only be altered through the use of a newtype wrapper.

  • Today most Rust types can be hashed using a byte-oriented algorithm, so any number of these algorithms (e.g. SipHash, Fnv hashing) can be used. With this new Hash definition they are not easily accessible.

  • Due to the lack of input state to hashing, the HashMap type can no longer randomly seed each individual instance but may at best have one global seed. This consequently elevates the risk of a DoS attack on a HashMap instance.

  • The method of combining hashes together is not proven among other languages and is not certain to provide the properties we want. This departure from established practice may have unknown consequences.

Unresolved questions

  • To what degree should HashMap attempt to prevent DoS attacks? Is it the responsibility of the standard library to do so or should this be provided as an external crate on crates.io?

Summary

Add back the functionality of Vec::from_elem by improving the vec![x; n] sugar to work with Clone x and runtime n.

Motivation

High demand, mostly. There are currently a few ways to achieve the behaviour of Vec::from_elem(elem, n):

// #1
let mut vec = Vec::new();
for _ in 0..n {
    vec.push(elem.clone());
}
// #2
let vec = vec![elem; n];
// #3
let mut vec = Vec::new();
vec.resize(n, elem);
// #4
let vec: Vec<_> = (0..n).map(|_| elem.clone()).collect();
// #5
let vec: Vec<_> = iter::repeat(elem).take(n).collect();

None of these quite match the convenience, power, and performance of:

let vec = Vec::from_elem(elem, n);

  • #1 is verbose and slow, because each push requires a capacity check.
  • #2 only works for a Copy elem and const n.
  • #3 needs a temporary, but should be otherwise identical performance-wise.
  • #4 and #5 are considered verbose and noisy. They also need to clone one more time than other methods strictly need to.

However the issues for #2 are entirely artificial. It’s simply a side-effect of forwarding the impl to the identical array syntax. We can just make the code in the vec! macro better. This naturally extends the compile-timey [x; n] array sugar to the more runtimey semantics of Vec, without introducing “another way to do it”.

vec![100; 10] is also slightly less ambiguous than from_elem(100, 10), because the [T; n] syntax is part of the language that developers should be familiar with, while from_elem is just a function with arbitrary argument order.

vec![x; n] is also known to be 47% more sick-rad than from_elem, which was of course deprecated due to its lack of sick-radness.

Detailed design

Upgrade the current vec! macro to have the following definition:

macro_rules! vec {
    ($x:expr; $y:expr) => (
        unsafe {
            use std::ptr;
            use std::clone::Clone;

            let elem = $x;
            let n: usize = $y;
            let mut v = Vec::with_capacity(n);
            let mut ptr = v.as_mut_ptr();
            for i in 1..n {
                ptr::write(ptr, Clone::clone(&elem));
                ptr = ptr.offset(1);
                v.set_len(i);
            }

            // No needless clones
            if n > 0 {
                ptr::write(ptr, elem);
                v.set_len(n);
            }

            v
        }
    );
    ($($x:expr),*) => (
        <[_] as std::slice::SliceExt>::into_vec(
            std::boxed::Box::new([$($x),*]))
    );
    ($($x:expr,)*) => (vec![$($x),*])
}

(note: only the [x; n] branch is changed)

Which allows all of the following to work:

fn main() {
    println!("{:?}", vec![1; 10]);
    println!("{:?}", vec![Box::new(1); 10]);
    let n = 10;
    println!("{:?}", vec![1; n]);
}

Drawbacks

Less discoverable than from_elem. All the problems that macros have relative to static methods.

Alternatives

Just un-delete from_elem as it was.

Unresolved questions

None.

Summary

Make all collections impl<'a, T: Copy> Extend<&'a T>.

This enables both vec.extend(&[1, 2, 3]) and vec.extend(&hash_set_of_ints). This partially covers the use case of the awkward Vec::push_all with literally no ergonomic loss, while leveraging established APIs.

Motivation

Vec::push_all is kinda random and specific. Partially motivated by performance concerns, but largely just “nice” to not have to do something like vec.extend([1, 2, 3].iter().cloned()). The performance argument falls flat (we must make iterators fast, and trusted_len should get us there). The ergonomics argument is salient, though. Working with Plain Old Data types in Rust is super annoying because generic APIs and semantics are tailored for non-Copy types.

Even with Extend upgraded to take IntoIterator, that won’t work with a slice of Copy elements like &[T], because a slice can’t be moved out of. Collections would have to take IntoIterator<&T> and copy out of the reference. So, do exactly that.

As a bonus, this is more expressive than push_all, because you can feed in any collection by-reference to clone the data out of it, not just slices.

Detailed design

  • For sequences and sets: impl<'a, T: Copy> Extend<&'a T>
  • For maps: impl<'a, K: Copy, V: Copy> Extend<(&'a K, &'a V)>

e.g.

use std::iter::IntoIterator;

impl<'a, T: Copy> Extend<&'a T> for Vec<T> {
    fn extend<I: IntoIterator<Item=&'a T>>(&mut self, iter: I) {
        self.extend(iter.into_iter().cloned())
    }
}


fn main() {
    let mut foo = vec![1];
    foo.extend(&[1, 2, 3, 4]);
    let bar = vec![1, 2, 3];
    foo.extend(&bar);
    foo.extend(bar.iter());

    println!("{:?}", foo);
}
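
The map case composes the same way. A small usage sketch, relying on the proposed impl for HashMap (std does ship such an impl today):

use std::collections::HashMap;

fn main() {
    let mut dst: HashMap<&str, i32> = HashMap::new();
    let src: HashMap<&str, i32> = vec![("a", 1), ("b", 2)].into_iter().collect();

    // &HashMap yields (&K, &V) items; the proposed impl copies them out.
    dst.extend(&src);
    assert_eq!(dst.get("a"), Some(&1));
}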

Drawbacks

  • Mo’ generics, mo’ magic. How you gonna discover it?

  • This creates a potentially confusing behaviour in a generic context.

Consider the following code:

fn feed<'a, X: Extend<&'a T>>(&'a self, buf: &mut X) {
    buf.extend(self.data.iter());
}

One would reasonably expect X to contain &T’s, but with this proposal it is possible that X now instead contains T’s. It’s not clear that this would ever be a problem in “real” code, though. It may lead to novices accidentally bypassing ownership through implicit copies.

It also may make inference fail in some other cases, as Extend would not always be sufficient to determine the type of a vec![].

  • This design does not fully replace push_all, since push_all accepts T: Clone while this design is restricted to T: Copy.

Alternatives

The Cloneian Candidate

This proposal is artificially restricting itself to Copy rather than full Clone as a concession to the general Rustic philosophy of Clones being explicit. Since this proposal is largely motivated by simple shuffling of primitives, this is sufficient. Also, because Copy: Clone, it would be backwards compatible to upgrade to Clone in the future if demand is high enough.

The New Method

It is theoretically plausible to add a new defaulted method to Extend called extend_cloned that provides this functionality. This removes any concern of accidental clones and makes inference totally work. However this design cannot simultaneously support Sequences and Maps, as the signature for sequences would mean Maps can only Copy through &(K, V), rather than (&K, &V). This would make it impossible to copy-chain Maps through Extend.

Why not FromIterator?

FromIterator could also be extended in the same manner, but this is less useful for two reasons:

  • FromIterator is only ever invoked through collect, and IntoIterator doesn’t really “work” right in the self position.
  • Introduces ambiguities in some cases. What is let foo: Vec<_> = [1, 2, 3].iter().collect()?

Of course, context might disambiguate in many cases, and let foo: Vec<i32> = [1, 2, 3].iter().collect() might still be nicer than let foo: Vec<_> = [1, 2, 3].iter().cloned().collect().

Unresolved questions

None.

Summary

Remove panics from CString::from_slice and CString::from_vec, making these functions return Result instead.

Motivation

As I shivered and brooded on the casting of that brain-blasting shadow, I knew that I had at last pried out one of earth’s supreme horrors—one of those nameless blights of outer voids whose faint daemon scratchings we sometimes hear on the farthest rim of space, yet from which our own finite vision has given us a merciful immunity.

— H. P. Lovecraft, The Lurking Fear

Currently the functions that produce std::ffi::CString out of Rust byte strings panic when the input contains NUL bytes. As strings containing NULs are not commonly seen in real-world usage, it is easy for developers to overlook the potential panic unless they test for such atypical input.

The panic is particularly sneaky when hidden behind an API using regular Rust string types. Consider this example:

fn set_text(text: &str) {
    let c_text = CString::from_slice(text.as_bytes());  // panic lurks here
    unsafe { ffi::set_text(c_text.as_ptr()) };
}

This implementation effectively imposes a requirement that the input string contain no inner NUL bytes, even though such bytes are generally permitted in pure Rust strings. This restriction is not apparent in the signature of the function and needs to be described in the documentation. Furthermore, the creator of the code may be oblivious to the potential panic.

The conventions on failure modes elsewhere in Rust libraries tend to limit panics to outcomes of programmer errors. Functions validating external data should return Result to allow graceful handling of the errors.

Detailed design

The return types of CString::from_slice and CString::from_vec are changed to Result:

impl CString {
    pub fn from_slice(s: &[u8]) -> Result<CString, NulError> { ... }
    pub fn from_vec(v: Vec<u8>) -> Result<CString, IntoCStrError> { ... }
}

The error type NulError provides information on the position of the first NUL byte found in the string. IntoCStrError wraps NulError and also provides the Vec which has been moved into CString::from_vec.

std::error::FromError implementations are provided to convert the error types above to std::io::Error of the InvalidInput kind. This facilitates use of the conversion functions in input-processing code.
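
Under this design, the set_text example from the motivation can surface the failure to its caller instead of panicking. A sketch against the proposed (pre-1.0) API, which is not compilable as-is today (the modern equivalent constructor is CString::new); ffi::set_text is the same hypothetical binding as before:

use std::ffi::CString;
use std::io;

fn set_text(text: &str) -> io::Result<()> {
    // An inner NUL byte now produces an Err value instead of a panic;
    // try! uses the FromError impl to convert NulError into io::Error.
    let c_text = try!(CString::from_slice(text.as_bytes()));
    unsafe { ffi::set_text(c_text.as_ptr()) };
    Ok(())
}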

Proof-of-concept implementation

The proposed changes are implemented in a crates.io project c_string, where the analog of CString is named CStrBuf.

Drawbacks

The need to extract the data from a Result in the success case is annoying. However, it may be viewed as a speed bump to make the developer aware of a potential failure and to require an explicit choice on how to handle it. Even the least graceful way, a call to unwrap, makes the potential panic apparent in the code.

Alternatives

Non-panicky functions can be added alongside the existing functions, e.g., as from_slice_failing. Adding new functions complicates the API where little reason for that exists; composition is preferred to adding function variants. Longer function names, together with a less convenient return value, may deter people from using the safer functions.

The panicky functions could also be renamed to unpack_slice and unpack_vec, respectively, to highlight their conceptual proximity to unpack.

If the panicky behavior is preserved, plentiful possibilities for DoS attacks and other unforeseen failures in the field may be introduced by code oblivious to the input constraints.

Unresolved questions

None.

Summary

Allow macros in type positions.

Motivation

Macros are currently allowed in syntax fragments for expressions, items, and patterns, but not for types. This RFC proposes to lift that restriction.

  1. This would allow macros to be used more flexibly, avoiding the need for more complex item-level macros or plugins in some cases. For example, when creating trait implementations with macros, it is sometimes useful to be able to define the associated types using a nested type macro, but this is currently problematic (a minimal sketch follows this list).

  2. Enable more programming patterns, particularly with respect to type-level programming. Macros in type positions provide a convenient way to express recursion and choice. It is possible to do the same thing purely through programming with associated types, but the resulting code can be cumbersome to read and write.
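
A minimal sketch of the first point, using hypothetical names (Pair, Swap): a type macro invoked in associated-type and return-type positions, which this RFC would make legal.

// Hypothetical type macro: expands to a tuple type.
macro_rules! Pair {
    ($a:ty, $b:ty) => { ($a, $b) };
}

trait Swap {
    type Output;
    fn swap(self) -> Self::Output;
}

impl Swap for (u8, char) {
    // Macro invocations in type positions, as proposed by this RFC.
    type Output = Pair!(char, u8);
    fn swap(self) -> Pair!(char, u8) {
        (self.1, self.0)
    }
}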

Detailed design

Implementation

The proposed feature has been prototyped at this branch. The implementation is straightforward and the impact of the changes is limited in scope to the macro system. Type-checking and other phases of compilation should be unaffected.

The most significant change introduced by this feature is a TyMac case for the Ty_ enum, so that the parser can indicate a macro invocation in a type position. In other words, TyMac is added to the AST and handled analogously to ExprMac, ItemMac, and PatMac.

Example: Heterogeneous Lists

Heterogeneous lists are one example where the ability to express recursion via type macros is very useful. They can be used as an alternative to or in combination with tuples. Their recursive structure provides a means to abstract over arity and to manipulate arbitrary products of types with operations like appending, taking length, adding/removing items, computing permutations, etc.

Heterogeneous lists can be defined like so:

#[derive(Copy, Clone, Debug, Eq, Ord, PartialEq, PartialOrd)]
struct Nil; // empty HList
#[derive(Copy, Clone, Debug, Eq, Ord, PartialEq, PartialOrd)]
struct Cons<H, T: HList>(H, T); // cons cell of HList

// trait to classify valid HLists
trait HList: MarkerTrait {}
impl HList for Nil {}
impl<H, T: HList> HList for Cons<H, T> {}

However, writing HList terms in code is not very convenient:

let xs = Cons("foo", Cons(false, Cons(vec![0u64], Nil)));

At the term-level, this is an easy fix using macros:

// term-level macro for HLists
macro_rules! hlist {
    {} => { Nil };
    {=> $($elem:tt),+ } => { hlist_pat!($($elem),+) };
    { $head:expr, $($tail:expr),* } => { Cons($head, hlist!($($tail),*)) };
    { $head:expr } => { Cons($head, Nil) };
}

// term-level HLists in patterns
macro_rules! hlist_pat {
    {} => { Nil };
    { $head:pat, $($tail:tt),* } => { Cons($head, hlist_pat!($($tail),*)) };
    { $head:pat } => { Cons($head, Nil) };
}

let xs = hlist!["foo", false, vec![0u64]];

Unfortunately, this solution is incomplete because we have only made HList terms easier to write. HList types are still inconvenient:

let xs: Cons<&str, Cons<bool, Cons<Vec<u64>, Nil>>> = hlist!["foo", false, vec![0u64]];

Allowing type macros, as this RFC proposes, would let us use Rust’s macros to improve writing the HList type as well. The complete example follows:

// term-level macro for HLists
macro_rules! hlist {
    {} => { Nil };
    {=> $($elem:tt),+ } => { hlist_pat!($($elem),+) };
    { $head:expr, $($tail:expr),* } => { Cons($head, hlist!($($tail),*)) };
    { $head:expr } => { Cons($head, Nil) };
}

// term-level HLists in patterns
macro_rules! hlist_pat {
    {} => { Nil };
    { $head:pat, $($tail:tt),* } => { Cons($head, hlist_pat!($($tail),*)) };
    { $head:pat } => { Cons($head, Nil) };
}

// type-level macro for HLists
macro_rules! HList {
    {} => { Nil };
    { $head:ty } => { Cons<$head, Nil> };
    { $head:ty, $($tail:ty),* } => { Cons<$head, HList!($($tail),*)> };
}

let xs: HList![&str, bool, Vec<u64>] = hlist!["foo", false, vec![0u64]];

Operations on HLists can be defined by recursion, using traits with associated type outputs at the type-level and implementation methods at the term-level.

The HList append operation is provided as an example. Type macros are used to make writing append at the type level (see Expr!) more convenient than specifying the associated type projection manually:

use std::ops;

// nil case for HList append
impl<Ys: HList> ops::Add<Ys> for Nil {
    type Output = Ys;

    fn add(self, rhs: Ys) -> Ys {
        rhs
    }
}

// cons case for HList append
impl<Rec: HList + Sized, X, Xs: HList, Ys: HList> ops::Add<Ys> for Cons<X, Xs> where
    Xs: ops::Add<Ys, Output = Rec>,
{
    type Output = Cons<X, Rec>;

    fn add(self, rhs: Ys) -> Cons<X, Rec> {
        Cons(self.0, self.1 + rhs)
    }
}

// type macro Expr allows us to expand the + operator appropriately
macro_rules! Expr {
    { ( $($LHS:tt)+ ) } => { Expr!($($LHS)+) };
    { HList ! [ $($LHS:tt)* ] + $($RHS:tt)+ } => { <Expr!(HList![$($LHS)*]) as std::ops::Add<Expr!($($RHS)+)>>::Output };
    { $LHS:tt + $($RHS:tt)+ } => { <Expr!($LHS) as std::ops::Add<Expr!($($RHS)+)>>::Output };
    { $LHS:ty } => { $LHS };
}

// test demonstrating term level `xs + ys` and type level `Expr!(Xs + Ys)`
#[test]
fn test_append() {
    fn aux<Xs: HList, Ys: HList>(xs: Xs, ys: Ys) -> Expr!(Xs + Ys) where
        Xs: ops::Add<Ys>
    {
        xs + ys
    }
    let xs: HList![&str, bool, Vec<u64>] = hlist!["foo", false, vec![]];
    let ys: HList![u64, [u8; 3], ()] = hlist![0, [0, 1, 2], ()];

    // demonstrate recursive expansion of Expr!
    let zs: Expr!((HList![&str] + HList![bool] + HList![Vec<u64>]) +
                  (HList![u64] + HList![[u8; 3], ()]) +
                  HList![])
        = aux(xs, ys);
    assert_eq!(zs, hlist!["foo", false, vec![], 0, [0, 1, 2], ()])
}

Drawbacks

There seem to be few drawbacks to implementing this feature as an extension of the existing macro machinery. The change adds a small amount of additional complexity to the parser and conversion but the changes are minimal.

As with all feature proposals, it is possible that designs for future extensions to the macro system or type system might interfere with this functionality but it seems unlikely unless they are significant, breaking changes.

Alternatives

There are no direct alternatives. Extensions to the type system like data kinds, singletons, and other forms of staged programming (so-called CTFE) might alleviate the need for type macros in some cases, however it is unlikely that they would provide a comprehensive replacement, particularly where plugins are concerned.

Not implementing this feature would mean not taking some reasonably low-effort steps toward making certain programming patterns easier. One potential consequence of this might be more pressure to significantly extend the type system and other aspects of the language to compensate.

Unresolved questions

Alternative syntax for macro invocations in types

There is a question as to whether type macros should allow < and > as delimiters for invocations, e.g. Foo!<A>. This would raise a number of additional complications and is probably not necessary to consider for this RFC. If deemed desirable by the community, this functionality should be proposed separately.

Hygiene and type macros

This RFC also does not address the topic of hygiene regarding macros in types. It is not clear whether there are issues here or not but it may be worth considering in further detail.

Summary

Lex binary and octal literals as if they were decimal.

Motivation

Lexing all digits (even ones not valid in the given base) allows for improved error messages and future proofing (this is more conservative than the current approach) and less confusion, with little downside.

Currently, the lexer stops lexing binary and octal literals (0b10 and 0o12345670) as soon as it sees an invalid digit (2-9 or 8-9 respectively), and immediately starts lexing a new token, e.g. 0b0123 is two tokens, 0b01 and 23. Writing such a thing in normal code gives a strange error message:

<anon>:2:9: 2:11 error: expected one of `.`, `;`, `}`, or an operator, found `23`
<anon>:2     0b0123
                 ^~

However, it is valid to write such a thing in a macro (e.g. using the tt non-terminal), and thus lexing the adjacent digits as two tokens can lead to unexpected behaviour.

macro_rules! expr { ($e: expr) => { $e } }

macro_rules! add {
    ($($token: tt)*) => {
        0 $(+ expr!($token))*
    }
}
fn main() {
    println!("{}", add!(0b0123));
}

prints 24 (add expands to 0 + 0b01 + 23).

It would be nicer for both cases to print an error like:

error: found invalid digit `2` in binary literal
0b0123
    ^

(The non-macro case could be handled by detecting this pattern in the lexer and special-casing the message, but this doesn’t handle the macro case.)

Code that wants two tokens can opt in to it by 0b01 23, for example. This is easy to write, and expresses the intent more clearly anyway.

Detailed design

The grammar that the lexer uses becomes

(0b[0-9]+ | 0o[0-9]+ | [0-9]+ | 0x[0-9a-fA-F]+) suffix

instead of just [01] and [0-7] for the first two, respectively.

However, it is always an error (in the lexer) to have invalid digits in a numeric literal beginning with 0b or 0o. In particular, even a macro invocation like

macro_rules! ignore { ($($_t: tt)*) => { {} } }

ignore!(0b0123)

is an error even though it doesn’t use the tokens.

Drawbacks

This adds a slightly peculiar special case, that is somewhat unique to Rust. On the other hand, most languages do not expose the lexical grammar so directly, and so have more freedom in this respect. That is, in many languages it is indistinguishable if 0b1234 is one or two tokens: it is always an error either way.

Alternatives

Don’t do it, obviously.

Consider 0b123 to just be 0b1 with a suffix of 23, and make this an error or not depending on whether a suffix of 23 is valid. Handling this uniformly would require "foo"123 and 'a'123 also being lexed as single tokens. (Which may be a good idea anyway.)

Unresolved questions

None.

Summary

Add intrinsics for single-threaded memory fences.

Motivation

Rust currently supports memory barriers through a set of intrinsics, atomic_fence and its variants, which generate machine instructions and are suitable as cross-processor fences. However, there is currently no compiler support for single-threaded fences which do not emit machine instructions.

Certain use cases require that the compiler not reorder loads or stores across a given barrier but do not require a corresponding hardware guarantee, such as when a thread interacts with a signal handler which will run on the same thread. By omitting a fence instruction, relatively costly machine operations can be avoided.

The C++ equivalent of this feature is std::atomic_signal_fence.

Detailed design

Add four language intrinsics for single-threaded fences:

  • atomic_compilerfence
  • atomic_compilerfence_acq
  • atomic_compilerfence_rel
  • atomic_compilerfence_acqrel

These have the same semantics as the existing atomic_fence intrinsics but only constrain memory reordering by the compiler, not by hardware.

The existing fence intrinsics are exported in libstd with safe wrappers, but this design does not export safe wrappers for the new intrinsics. The existing fence functions will still perform correctly if used where a single-threaded fence is called for, but with a slight reduction in efficiency. Not exposing these new intrinsics through a safe wrapper reduces the possibility for confusion on which fences are appropriate in a given situation, while still providing the capability for users to opt in to a single-threaded fence when appropriate.
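
For illustration, here is a sketch of the signal-handler scenario, written against the safe compiler_fence wrapper that later stabilized in std::sync::atomic (this RFC itself proposes only the intrinsics):

use std::sync::atomic::{compiler_fence, Ordering};

static mut DATA: u32 = 0;
static mut READY: bool = false;

// Stands in for a handler registered for a signal delivered on this thread.
fn handler() {
    unsafe {
        if READY {
            let data = DATA; // copy out; ordering is guaranteed by the fence
            assert_eq!(data, 42);
        }
    }
}

fn main() {
    unsafe {
        DATA = 42;
        // Forbid compiler reordering of the two stores; unlike fence(),
        // this emits no machine instruction.
        compiler_fence(Ordering::Release);
        READY = true;
    }
    handler();
}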

Alternatives

  • Do nothing. The existing fence intrinsics support all use cases, but with a negative impact on performance in some situations where a compiler-only fence is appropriate.

  • Recommend inline assembly to get a similar effect, such as asm!("" ::: "memory" : "volatile"). LLVM provides an IR item specifically for this case (fence singlethread), so I believe taking advantage of that feature in LLVM is most appropriate, since its semantics are more rigorously defined and less likely to yield unexpected (but not necessarily wrong) behavior.

Unresolved questions

These intrinsics may be better represented with a different name, such as atomic_signal_fence or atomic_singlethread_fence. However, the existing implementation of atomic intrinsics in the compiler precludes the use of underscores in their names, and I believe it is clearer to refer to this construct as a “compiler fence” rather than a “signal fence”, because not all use cases necessarily involve signal handlers; hence the current choice of name.

Summary

Move the contents of std::thread_local into std::thread. Fully remove std::thread_local from the standard library.

Motivation

Thread locals are directly related to threading. Combining the modules would reduce the number of top level modules, combine related concepts, and make browsing the docs easier. It also would have the potential to slightly reduce the number of use statements.

Detailed design

The contents of the std::thread_local module would be moved into std::thread. Key would be renamed to LocalKey, and scoped would also be flattened, providing ScopedKey, etc. This way, all thread-related code is combined in one module.

It would also allow using it as such:

use std::thread::{LocalKey, Thread};

Drawbacks

It’s pretty late in the 1.0 release cycle, and this is mostly a bikeshedding-level change. It may not be worth changing at this point, staying instead with two top-level modules in std. Also, some users may prefer to have more top-level modules.

Alternatives

An alternative (as the RFC originally proposed) would be to bring thread_local in as a submodule, rather than flattening. This was decided against in an effort to keep hierarchies flat, and because of the slim contents of the thread_local module.

Unresolved questions

The exact strategy for moving the contents into std::thread.

Summary

Allow marking free functions and inherent methods as const, enabling them to be called in constant contexts, with constant arguments.

Motivation

As it is right now, UnsafeCell is a stabilization and safety hazard: the field it is supposed to be wrapping is public. This is only done out of the necessity to initialize static items containing atomics, mutexes, etc. - for example:

#[lang="unsafe_cell"]
struct UnsafeCell<T> { pub value: T }
struct AtomicUsize { v: UnsafeCell<usize> }
const ATOMIC_USIZE_INIT: AtomicUsize = AtomicUsize {
    v: UnsafeCell { value: 0 }
};

This approach is fragile and doesn’t compose well - consider having to initialize an AtomicUsize static with usize::MAX - you would need a const for each possible value.

Also, types like AtomicPtr<T> or Cell<T> have no way at all to initialize them in constant contexts, leading to overuse of UnsafeCell or static mut, disregarding type safety and proper abstractions.

During implementation, the worst offender I’ve found was std::thread_local: all the fields of std::thread_local::imp::Key are public, so they can be filled in by a macro - and they’re also marked “stable” (due to the lack of stability hygiene in macros).

A pre-RFC for the removal of the dangerous (and often misused) static mut received positive feedback, but only under the condition that abstractions could be created and used in const and static items.

Another concern is the ability to use certain intrinsics, like size_of, inside constant expressions, including fixed-length array types. Unlike keyword-based alternatives, const fn provides an extensible and composable building block for such features.
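
As a sketch of that point: a const fn lets an intrinsic’s result flow into constant expressions such as fixed-length array types. Here header_len is a hypothetical helper, and size_of did in fact later become callable in constant contexts:

use std::mem::size_of;

// Hypothetical helper: the framing overhead of 2 bytes is illustrative.
const fn header_len<T>() -> usize {
    size_of::<T>() + 2
}

fn main() {
    // The const fn call is a constant expression, usable as an array length.
    let buf = [0u8; header_len::<u64>()];
    assert_eq!(buf.len(), 10);
}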

The design should be as simple as it can be, while keeping enough functionality to solve the issues mentioned above.

The intention of this RFC is to introduce a minimal change that enables safe abstraction resembling the kind of code that one writes outside of a constant. Compile-time pure constants (the existing const items) with added parametrization over types and values (arguments) should suffice.

This RFC explicitly does not introduce a general CTFE mechanism. In particular, conditional branching and virtual dispatch are still not supported in constant expressions, which imposes a severe limitation on what one can express.

Detailed design

Functions and inherent methods can be marked as const:

const fn foo(x: T, y: U) -> Foo {
    stmts;
    expr
}
impl Foo {
    const fn new(x: T) -> Foo {
        stmts;
        expr
    }

    const fn transform(self, y: U) -> Foo {
        stmts;
        expr
    }
}

Traits, trait implementations and their methods cannot be const - this allows us to properly design a constness/CTFE system that interacts well with traits - for more details, see Alternatives.

Only simple by-value bindings are allowed in arguments, e.g. x: T. While by-ref bindings and destructuring can be supported, they’re not necessary and they would only complicate the implementation.

The body of the function is checked as if it were a block inside a const:

const FOO: Foo = {
    // Currently, only item "statements" are allowed here.
    stmts;
    // The function's arguments and constant expressions can be freely combined.
    expr
};

As the current const items are not formally specified (yet), there is a need to expand on the rules for const values (pure compile-time constants), instead of leaving them implicit:

  • the set of currently implemented expressions is: primitive literals, ADTs (tuples, arrays, structs, enum variants), unary/binary operations on primitives, casts, field accesses/indexing, capture-less closures, references and blocks (only item statements and a tail expression)
  • no side-effects (assignments, non-const function calls, inline assembly)
  • struct/enum values are not allowed if their type implements Drop, but this is not transitive, allowing the (perfectly harmless) creation of, e.g. None::<Vec<T>> (as an aside, this rule could be used to allow [x; N] even for non-Copy types of x, but that is out of the scope of this RFC)
  • references are truly immutable, no value with interior mutability can be placed behind a reference, and mutable references can only be created from zero-sized values (e.g. &mut || {}) - this allows a reference to be represented just by its value, with no guarantees for the actual address in memory
  • raw pointers can only be created from an integer, a reference or another raw pointer, and cannot be dereferenced or cast back to an integer, which means any constant raw pointer can be represented by either a constant integer or reference
  • as a result of not having any side-effects, loops would only affect termination, which has no practical value, thus remaining unimplemented
  • although more useful than loops, conditional control flow (if/else and match) also remains unimplemented and only match would pose a challenge
  • immutable let bindings in blocks have the same status and implementation difficulty as if/else and they both suffer from a lack of demand (blocks were originally introduced to const/static for scoping items used only in the initializer of a global).

For the purpose of rvalue promotion (to static memory), arguments are considered potentially varying, because the function can still be called with non-constant values at runtime.

const functions and methods can be called from any constant expression:

// Standalone example.
struct Point { x: i32, y: i32 }

impl Point {
    const fn new(x: i32, y: i32) -> Point {
        Point { x: x, y: y }
    }

    const fn add(self, other: Point) -> Point {
        Point::new(self.x + other.x, self.y + other.y)
    }
}

const ORIGIN: Point = Point::new(0, 0);

const fn sum_test(xs: [Point; 3]) -> Point {
    xs[0].add(xs[1]).add(xs[2])
}

const A: Point = Point::new(1, 0);
const B: Point = Point::new(0, 1);
const C: Point = A.add(B);
const D: Point = sum_test([A, B, C]);

// Assuming the Foo::new methods used here are const.
static FLAG: AtomicBool = AtomicBool::new(true);
static COUNTDOWN: AtomicUsize = AtomicUsize::new(10);
#[thread_local]
static TLS_COUNTER: Cell<u32> = Cell::new(1);

Type parameters and their bounds are not restricted, though trait methods cannot be called, as they are never const in this design. Accessing trait methods can still be useful - for example, they can be turned into function pointers:

const fn arithmetic_ops<T: Int>() -> [fn(T, T) -> T; 4] {
    [Add::add, Sub::sub, Mul::mul, Div::div]
}

const functions can also be unsafe, allowing construction of types that require invariants to be maintained (e.g. std::ptr::Unique requires a non-null pointer):

struct OptionalInt(u32);
impl OptionalInt {
    /// Value must be non-zero
    const unsafe fn new(val: u32) -> OptionalInt {
        OptionalInt(val)
    }
}

Drawbacks

  • A design that is not conservative enough risks creating backwards compatibility hazards that might only be uncovered when a more extensive CTFE proposal is made, after 1.0.

Alternatives

  • While not an alternative, but rather a potential extension, I want to point out there is only one way I could make const fns work with traits (in an untested design, that is): qualify trait implementations and bounds with const. This is necessary for meaningful interactions with operator overloading traits:
const fn map_vec3<T: Copy, F: const Fn(T) -> T>(xs: [T; 3], f: F) -> [T; 3] {
    [f(xs[0]), f(xs[1]), f(xs[2])]
}

const fn neg_vec3<T: Copy + const Neg>(xs: [T; 3]) -> [T; 3] {
    map_vec3(xs, |x| -x)
}

const impl Add for Point {
    fn add(self, other: Point) -> Point {
        Point {
            x: self.x + other.x,
            y: self.y + other.y
        }
    }
}

Having const trait methods (where all implementations are const) seems useful, but it would not allow the usecase above on its own. Trait implementations with const methods (instead of the entire impl being const) would allow direct calls, but it’s not obvious how one could write a function generic over a type which implements a trait and requiring that a certain method of that trait is implemented as const.

Unresolved questions

  • Keep recursion or disallow it for now? The conservative choice of having no recursive const fns would not affect the usecases intended for this RFC. If we do allow it, we probably need a recursion limit, and/or an evaluation algorithm that can handle at least tail recursion. Also, there is no way to actually write a recursive const fn at this moment, because no control flow primitives are implemented for constants, but that cannot be taken for granted, at least if/else should eventually work.

History

  • This RFC was accepted on 2015-04-06. The primary concerns raised in the discussion concerned CTFE, and whether the const fn strategy locks us into an undesirable plan there.

Updates since being accepted

Since it was accepted, the RFC has been updated as follows:

  1. Allowed const unsafe fn

Summary

Replace Entry::get with Entry::or_insert and Entry::or_insert_with for better ergonomics and clearer code.

Motivation

Entry::get was introduced to reduce a lot of the boiler-plate involved in simple Entry usage. Two incredibly common patterns in particular stand out:

match map.entry(key) {
    Entry::Vacant(entry) => { entry.insert(1); },
    Entry::Occupied(entry) => { *entry.get_mut() += 1; },
}
match map.entry(key) {
    Entry::Vacant(entry) => { entry.insert(vec![val]); },
    Entry::Occupied(entry) => { entry.get_mut().push(val); },
}

This code is noisy, and is visibly fighting the Entry API a bit, such as having to suppress the return value of insert. It requires the Entry enum to be imported into scope. It requires the user to learn a whole new API. It also introduces a “many ways to do it” stylistic ambiguity:

match map.entry(key) {
    Entry::Vacant(entry) => entry.insert(vec![]),
    Entry::Occupied(entry) => entry.into_mut(),
}.push(val);

Entry::get tries to address some of this by doing something similar to Result::ok. It maps the Entry into a more familiar Result, while automatically converting the Occupied case into an &mut V. Usage looks like:

*map.entry(key).get().unwrap_or_else(|entry| entry.insert(0)) += 1;
map.entry(key).get().unwrap_or_else(|entry| entry.insert(vec![])).push(val);

This is certainly nicer. No imports are needed, the Occupied case is handled, and we’re closer to “only one way”. However this is still fairly tedious and arcane. get provides little meaning for what is done; unwrap_or_else is long and scary-sounding; and VacantEntry literally only supports insert, so having to call it seems redundant.

Detailed design

Replace Entry::get with the following two methods:

    /// Ensures a value is in the entry by inserting the default if empty, and returns
    /// a mutable reference to the value in the entry.
    pub fn or_insert(self, default: V) -> &'a mut V {
        match self {
            Occupied(entry) => entry.into_mut(),
            Vacant(entry) => entry.insert(default),
        }
    }

    /// Ensures a value is in the entry by inserting the result of the default function if empty,
    /// and returns a mutable reference to the value in the entry.
    pub fn or_insert_with<F: FnOnce() -> V>(self, default: F) -> &'a mut V {
        match self {
            Occupied(entry) => entry.into_mut(),
            Vacant(entry) => entry.insert(default()),
        }
    }

which allows the following:

*map.entry(key).or_insert(0) += 1;
// vec![] doesn't even allocate, and is only 3 ptrs big.
map.entry(key).or_insert(vec![]).push(val);
let val = map.entry(key).or_insert_with(|| expensive(big, data));

Look at all that ergonomics. Look at it. This pushes us more into the “one right way” territory, since this is unambiguously clearer and easier than a full match or abusing Result. Novices don’t really need to learn the entry API at all with this. They can just learn the .entry(key).or_insert(value) incantation to start, and work their way up to more complex usage later.

Oh hey look this entire RFC is already implemented with all of rust-lang/rust’s entry usage audited and updated: https://github.com/rust-lang/rust/pull/22930

Drawbacks

Replaces the composability of just mapping to a Result with more ad hoc specialty methods. This is hardly a drawback for the reasons stated in the RFC. Maybe someone was really leveraging the Result-ness in an exotic way, but it was likely an abuse of the API. Regardless, the get method is trivial to write as a consumer of the API.

Alternatives

Settle for Result chumpsville or abandon this sugar altogether. Truly, fates worse than death.

Unresolved questions

None.

Summary

Disallow hyphens in Rust crate names, but continue allowing them in Cargo packages.

Motivation

This RFC aims to reconcile two conflicting points of view.

First: hyphens in crate names are awkward to use, and inconsistent with the rest of the language. Anyone who uses such a crate must rename it on import:

extern crate "rustc-serialize" as rustc_serialize;

An earlier version of this RFC aimed to solve this issue by removing hyphens entirely.

However, there is a large amount of precedent for keeping - in package names. Systems as varied as GitHub, npm, RubyGems and Debian all have an established convention of using hyphens. Disallowing them would go against this precedent, causing friction with the wider community.

Fortunately, Cargo presents us with a solution. It already separates the concepts of package name (used by Cargo and crates.io) and crate name (used by rustc and extern crate). We can disallow hyphens in the crate name only, while still accepting them in the outer package. This solves the usability problem, while keeping with the broader convention.

Detailed design

Disallow hyphens in crates (only)

In rustc, enforce that all crate names are valid identifiers.

In Cargo, continue allowing hyphens in package names.

The difference will be in the crate name Cargo passes to the compiler. If the Cargo.toml does not specify an explicit crate name, then Cargo will use the package name but with all - replaced by _.

For example, if I have a package named apple-fritter, Cargo will pass --crate-name apple_fritter to the compiler instead.

Since most packages do not set their own crate names, this mapping will ensure that the majority of hyphenated packages continue to build unchanged.

Identify - and _ on crates.io

Right now, crates.io compares package names case-insensitively. This means, for example, you cannot upload a new package named RUSTC-SERIALIZE because rustc-serialize already exists.

Under this proposal, we will extend this logic to identify - and _ as well.

Remove the quotes from extern crate

Change the syntax of extern crate so that the crate name is no longer in quotes (e.g. extern crate photo_finish as photo;). This is viable now that all crate names are valid identifiers.

To ease the transition, keep the old extern crate syntax around, transparently mapping any hyphens to underscores. For example, extern crate "silver-spoon" as spoon; will be desugared to extern crate silver_spoon as spoon;. This syntax will be deprecated, and removed before 1.0.

Drawbacks

Inconsistency between packages and crates

This proposal makes package and crate names inconsistent: the former will accept hyphens while the latter will not.

However, this drawback may not be an issue in practice. As hinted in the motivation, most other platforms have different syntaxes for packages and crates/modules anyway. Since the package system is orthogonal to the language itself, there is no need for consistency between the two.

Inconsistency between - and _

Quoth @P1start:

… it’s also annoying to have to choose between - and _ when choosing a crate name, and to remember which of - and _ a particular crate uses.

I believe, like other naming issues, this problem can be addressed by conventions.

Alternatives

Do nothing

As with any proposal, we can choose to do nothing. But given the reasons outlined above, the author believes it is important that we address the problem before the beta release.

Disallow hyphens in package names as well

An earlier version of this RFC proposed to disallow hyphens in packages as well. The drawbacks of this idea are covered in the motivation.

Make extern crate match fuzzily

Alternatively, we can have the compiler consider hyphens and underscores as equal while looking up a crate. In other words, the crate flim-flam would match both extern crate flim_flam and extern crate "flim-flam" as flim_flam.

This involves much more magic than the original proposal, and it is not clear what advantages it has over that proposal.

Repurpose hyphens as namespace separators

Alternatively, we can treat hyphens as path separators in Rust.

For example, the crate hoity-toity could be imported as

extern crate hoity::toity;

which is desugared to:

mod hoity {
    mod toity {
        extern crate "hoity-toity" as krate;
        pub use krate::*;
    }
}

However, on prototyping this proposal, the author found it too complex and fraught with edge cases. For these reasons the author chose not to push this solution.

Unresolved questions

None so far.

Summary

Add the family of [Op]Assign traits to allow overloading assignment operations like a += b.

Motivation

We already let users overload the binary operations, letting them overload the assignment version is the next logical step. Plus, this sugar is important to make mathematical libraries more palatable.

Detailed design

Add the following unstable traits to libcore and reexport them in libstd:

// `+=`
#[lang = "add_assign"]
trait AddAssign<Rhs=Self> {
    fn add_assign(&mut self, Rhs);
}

// the remaining traits have the same signature
// (lang items have been omitted for brevity)
trait BitAndAssign { .. }  // `&=`
trait BitOrAssign { .. }   // `|=`
trait BitXorAssign { .. }  // `^=`
trait DivAssign { .. }     // `/=`
trait MulAssign { .. }     // `*=`
trait RemAssign { .. }     // `%=`
trait ShlAssign { .. }     // `<<=`
trait ShrAssign { .. }     // `>>=`
trait SubAssign { .. }     // `-=`

Implement these traits for the primitive numeric types without overloading, i.e. only impl AddAssign<i32> for i32 { .. }.

Add an op_assign feature gate. When the feature gate is enabled, the compiler will consider these traits when typechecking a += b. Without the feature gate the compiler will enforce that a and b must be primitives of the same type/category as it does today.

Once we feel comfortable with the implementation we’ll remove the feature gate and mark the traits as stable. This can be done after 1.0 as this change is backwards compatible.
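
As a usage sketch, assuming the traits land as shown (written with the named-parameter form that eventually stabilized), a user type would overload += like this:

use std::ops::AddAssign;

#[derive(Debug, PartialEq)]
struct Point { x: i32, y: i32 }

// Overload `+=` for Point; Rhs defaults to Self.
impl AddAssign for Point {
    fn add_assign(&mut self, rhs: Point) {
        self.x += rhs.x;
        self.y += rhs.y;
    }
}

fn main() {
    let mut p = Point { x: 1, y: 2 };
    p += Point { x: 3, y: 4 };
    assert_eq!(p, Point { x: 4, y: 6 });
}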

RHS: By value vs by ref

Taking the RHS by value is more flexible. The implementations allowed with a by value RHS are a superset of the implementations allowed with a by ref RHS. An example where taking the RHS by value is necessary would be operator sugar for extending a collection with an iterator [1]: vec ++= iter where vec: Vec<T> and iter impls Iterator<T>. This can’t be implemented with the by ref version as the iterator couldn’t be advanced in that case.

[1] Where ++ is the “combine” operator that has been proposed elsewhere. Note that this RFC doesn’t propose adding that particular operator or adding similar overloaded operations (vec += iter) to stdlib’s collections, but it leaves the door open to the possibility of adding them in the future (if desired).

Drawbacks

None that I can think of.

Alternatives

Take the RHS by ref. This is less flexible than taking the RHS by value but, in some instances, it can save writing &rhs when the RHS is owned and the implementation demands a reference. However, this last point will be moot if we implement auto-referencing for binary operators, as lhs += rhs would actually call add_assign(&mut lhs, &rhs) if Lhs impls AddAssign<&Rhs>.

Unresolved questions

Should we overload ShlAssign and ShrAssign, e.g. impl ShlAssign<u8> for i32, since we have already overloaded the Shl and Shr traits?

Should we overload all the traits for references, e.g. impl<'a> AddAssign<&'a i32> for i32 to allow x += &0;?

Summary

Restrict closure return type syntax for future compatibility.

Motivation

Today’s closure return type syntax juxtaposes a type and an expression. This is dangerous: if we later choose to extend the type grammar to accept more, we can easily break existing code.

Detailed design

The current closure syntax for annotating the return type is |Args| -> Type Expr, where Type is the return type and Expr is the body of the closure. This syntax is future-hostile, since it relies on being able to determine the end point of a type. If we extend the syntax for types, we could cause parse errors in existing code.

An example from history is that we extended the type grammar to include things like Fn(..). This would have caused the following, previously legal, closure not to parse: || -> Foo (Foo). As a simple fix, this RFC proposes that if a return type annotation is supplied, the body must be enclosed in braces: || -> Foo { (Foo) }. Types are already juxtaposed with open braces in fn items, so this should not be an additional danger for future evolution.
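
Concretely, under this proposal:

fn main() {
    // A return type annotation now requires a braced body.
    let incr = |x: i32| -> i32 { x + 1 }; // accepted
    // let bad = |x: i32| -> i32 x + 1;   // rejected under this RFC
    assert_eq!(incr(1), 2);
}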

Drawbacks

This design is minimally invasive but perhaps unfortunate in that it’s not obvious that braces would be required. But then, return type annotations are very rarely used.

Alternatives

I am not aware of any alternate designs. One possibility would be to remove return type annotations altogether, perhaps relying on type ascription or other annotations to force the inferencer to figure things out, but they are useful in rare scenarios. In particular type ascription would not be able to handle a higher-ranked signature like for<'a> &'a X -> &'a Y without improving the type checker implementation in other ways (in particular, we don’t infer generalization over lifetimes at present, unless we can figure it out from the expected type or explicit annotations).

Unresolved questions

None.

Summary

Make the count parameter of SliceExt::splitn, StrExt::splitn and corresponding reverse variants mean the maximum number of items returned, instead of the maximum number of times to match the separator.

Motivation

The majority of other languages (see examples below) treat the count parameter as the maximum number of items to return. Rust already has many things newcomers need to learn; making this behave like other languages can help adoption.

Detailed design

Currently splitn uses the count parameter to decide how many times the separator should be matched:

let v: Vec<_> = "a,b,c".splitn(2, ',').collect();
assert_eq!(v, ["a", "b", "c"]);

The simplest change we can make is to decrement the count in the constructor functions. If the count is zero to begin with, we mark the returned iterator as finished. See Unresolved questions for nicer transition paths.
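
To make the new meaning concrete, here is an illustrative model of the proposed semantics; splitn_items is a hypothetical name and this is not the actual iterator implementation. A count of n items means the separator is matched at most n - 1 times:

fn splitn_items(s: &str, count: usize, sep: char) -> Vec<&str> {
    let mut out = Vec::new();
    if count == 0 {
        return out; // zero items requested: finished immediately
    }
    let mut rest = s;
    // Match the separator at most count - 1 times.
    for _ in 1..count {
        match rest.find(sep) {
            Some(i) => {
                out.push(&rest[..i]);
                rest = &rest[i + sep.len_utf8()..];
            }
            None => break,
        }
    }
    out.push(rest); // the remainder is the final item
    out
}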

Example usage

Strings

let input = "a,b,c";
let v: Vec<_> = input.splitn(2, ',').collect();
assert_eq!(v, ["a", "b,c"]);

let v: Vec<_> = input.splitn(1, ',').collect();
assert_eq!(v, ["a,b,c"]);

let v: Vec<_> = input.splitn(0, ',').collect();
assert_eq!(v, []);

Slices

let input = [1, 0, 2, 0, 3];
let v: Vec<_> = input.splitn(2, |&x| x == 0).collect();
assert_eq!(v, [[1], [2, 0, 3]]);

let v: Vec<_> = input.splitn(1, |&x| x == 0).collect();
assert_eq!(v, [[1, 0, 2, 0, 3]]);

let v: Vec<_> = input.splitn(0, |&x| x == 0).collect();
assert_eq!(v, []);

Languages where count is the maximum number of items returned

C#

"a,b,c".Split(new char[] {','}, 2)
// ["a", "b,c"]

Clojure

(clojure.string/split "a,b,c" #"," 2)
;; ["a" "b,c"]

Go

strings.SplitN("a,b,c", ",", 2)
// [a b,c]

Java

"a,b,c".split(",", 2);
// ["a", "b,c"]

Ruby

"a,b,c".split(',', 2)
# ["a", "b,c"]

Perl

split(",", "a,b,c", 2)
# ['a', 'b,c']

Languages where count is the maximum number of times the separator will be matched

Python

"a,b,c".split(',', 2)
# ['a', 'b', 'c']

Swift

split("a,b,c", { $0 == "," }, maxSplit: 2)
// ["a", "b", "c"]

Drawbacks

Changing the meaning of the count parameter without changing the type is sure to cause subtle issues. See Unresolved questions.

The iterator can only return 2^64 values; previously we could return 2^64 + 1. This could also be considered an upside, as we can now return an empty iterator.

Alternatives

  1. Keep the status quo. People migrating from many other languages will continue to be surprised.

  2. Add a parallel set of functions that clearly indicate that count is the maximum number of items that can be returned.

Unresolved questions

Is there a nicer way to change the behavior of count such that users of splitn get compile-time errors when migrating?

  1. Add a dummy parameter, and mark the methods unstable. Remove the parameter and re-mark as stable near the end of the beta period.

  2. Move the methods from SliceExt and StrExt to a new trait that needs to be manually imported. After the transition, move the methods back and deprecate the trait. This would not break user code that migrated to the new semantics.

Summary

Rust’s Write trait has the write_all method, which is a convenience method that writes a whole buffer, failing with ErrorKind::WriteZero if the buffer cannot be written in full.

This RFC proposes adding its Read counterpart: a method (here called read_exact) that reads a whole buffer, failing with an error (here called ErrorKind::UnexpectedEOF) if the buffer cannot be read in full.

Motivation

When dealing with serialization formats with fixed-length fields, reading or writing less than the field’s size is an error. For the Write side, the write_all method does the job; for the Read side, however, one has to call read in a loop until the buffer is completely filled, or until a premature EOF is reached.

This leads to a profusion of similar helper functions. For instance, the byteorder crate has a read_full function, and the postgres crate has a read_all function. However, their handling of the premature EOF condition differs: the byteorder crate has its own Error enum, with UnexpectedEOF and Io variants, while the postgres crate uses an io::Error with an io::ErrorKind::Other.

That can make it unnecessarily hard to mix uses of these helper functions; for instance, if one wants to read a 20-byte tag (using one’s own helper function) followed by a big-endian integer, either the helper function has to be written to use byteorder::Error, or the calling code has to deal with two different ways to represent a premature EOF, depending on which field encountered the EOF condition.

Additionally, when reading from an in-memory buffer, looping is not necessary; it can be replaced by a size comparison followed by a copy_memory (similar to write_all for &mut [u8]). If this non-looping implementation is #[inline], and the buffer size is known (for instance, it’s a fixed-size buffer in the stack, or there was an earlier check of the buffer size against a larger value), the compiler could potentially turn a read from the buffer followed by an endianness conversion into the native endianness (as can happen when using the byteorder crate) into a single-instruction direct load from the buffer into a register.

Detailed design

First, a new variant UnexpectedEOF is added to the io::ErrorKind enum.

The following method is added to the Read trait:

fn read_exact(&mut self, buf: &mut [u8]) -> Result<()>;

Additionally, a default implementation of this method is provided:

fn read_exact(&mut self, mut buf: &mut [u8]) -> Result<()> {
    while !buf.is_empty() {
        match self.read(buf) {
            Ok(0) => break,
            Ok(n) => { let tmp = buf; buf = &mut tmp[n..]; }
            Err(ref e) if e.kind() == ErrorKind::Interrupted => {}
            Err(e) => return Err(e),
        }
    }
    if !buf.is_empty() {
        Err(Error::new(ErrorKind::UnexpectedEOF, "failed to fill whole buffer"))
    } else {
        Ok(())
    }
}

And an optimized implementation of this method for &[u8] is provided:

#[inline]
fn read_exact(&mut self, buf: &mut [u8]) -> Result<()> {
    if buf.len() > self.len() {
        return Err(Error::new(ErrorKind::UnexpectedEOF, "failed to fill whole buffer"));
    }
    let (a, b) = self.split_at(buf.len());
    slice::bytes::copy_memory(a, buf);
    *self = b;
    Ok(())
}

The detailed semantics of read_exact are as follows: read_exact reads exactly the number of bytes needed to completely fill its buf parameter. If that’s not possible due to an “end of file” condition (that is, the read method would return 0 even when passed a buffer with at least one byte), it returns an ErrorKind::UnexpectedEOF error.

On success, the read pointer is advanced by the number of bytes read, as if the read method had been called repeatedly to fill the buffer. On any failure (including an ErrorKind::UnexpectedEOF), the read pointer might have been advanced by any number between zero and the number of bytes requested (inclusive), and the contents of its buf parameter should be treated as garbage (any part of it might or might not have been overwritten by unspecified data).

Even if the failure was an ErrorKind::UnexpectedEOF, the read pointer might have been advanced by a number of bytes less than the number of bytes which could be read before reaching an “end of file” condition.

The read_exact method will never return an ErrorKind::Interrupted error, similar to the read_to_end method.

Similar to the read method, no guarantees are provided about the contents of buf when this function is called; implementations cannot rely on any property of the contents of buf being true. It is recommended that implementations only write data to buf instead of reading its contents.
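
As a usage sketch for the fixed-length-field case from the motivation (read_u32_be is a hypothetical helper, and the example is written with today’s ? operator and the UnexpectedEof spelling that was ultimately stabilized):

use std::io::{self, Read};

fn read_u32_be<R: Read>(r: &mut R) -> io::Result<u32> {
    let mut buf = [0u8; 4];
    // Fails with ErrorKind::UnexpectedEof if fewer than 4 bytes remain.
    r.read_exact(&mut buf)?;
    Ok(u32::from_be_bytes(buf))
}

fn main() -> io::Result<()> {
    let mut data: &[u8] = &[0, 0, 1, 2];
    assert_eq!(read_u32_be(&mut data)?, 258);
    Ok(())
}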

About ErrorKind::Interrupted

Whether or not read_exact can return an ErrorKind::Interrupted error is orthogonal to its semantics. One could imagine an alternative design where read_exact could return an ErrorKind::Interrupted error.

The reason read_exact should deal with ErrorKind::Interrupted itself is its non-idempotence. On failure, it might have already partially advanced its read pointer an unknown number of bytes, which means it can’t be easily retried after an ErrorKind::Interrupted error.

One could argue that it could return an ErrorKind::Interrupted error if it’s interrupted before the read pointer is advanced. But that introduces a non-orthogonality in the design, where it might either return or retry depending on whether it was interrupted at the beginning or in the middle. Therefore, the cleanest semantics is to always retry.

There’s precedent for this choice in the read_to_end method. Users who need finer control should use the read method directly.

About the read pointer

This RFC proposes a read_exact function where the read pointer (conceptually, what would be returned by Seek::seek if the stream was seekable) is unspecified on failure: it might not have advanced at all, have advanced in full, or advanced partially.

Two possible alternatives could be considered: never advance the read pointer on failure, or always advance the read pointer to the “point of error” (in the case of ErrorKind::UnexpectedEOF, to the end of the stream).

Never advancing the read pointer on failure would make it impossible to have a default implementation (which calls read in a loop), unless the stream was seekable. It would also impose extra costs (like creating a temporary buffer) to allow “seeking back” for non-seekable streams.

Always advancing the read pointer to the end on failure is possible; it happens without any extra code in the default implementation. However, it can introduce extra costs in optimized implementations. For instance, the implementation given above for &[u8] would need a few more instructions in the error case. Some implementations (for instance, reading from a compressed stream) might have a larger extra cost.

The utility of always advancing the read pointer to the end is questionable; for non-seekable streams, there’s not much that can be done on an “end of file” condition, so most users would discard the stream in both an “end of file” and an ErrorKind::UnexpectedEOF situation. For seekable streams, it’s easy to seek back, but most users would treat an ErrorKind::UnexpectedEOF as a “corrupted file” and discard the stream anyways.

Users who need finer control should use the read method directly, or when available use the Seek trait.

About the buffer contents

This RFC proposes that the contents of the output buffer be undefined on an error return. It might be untouched, partially overwritten, or completely overwritten (even if fewer bytes could be read; for instance, this method might in theory use it as a scratch space).

Two possible alternatives could be considered: do not touch it on failure, or overwrite it with valid data as much as possible.

Never touching the output buffer on failure would make it much more expensive for the default implementation (which calls read in a loop), since it would have to read into a temporary buffer and copy to the output buffer on success. Any implementation which cannot do an early return for all failure cases would have similar extra costs.

Overwriting as much as possible with valid data makes some sense; it happens without any extra cost in the default implementation. However, for optimized implementations this extra work is useless; since the caller can’t know how much is valid data and how much is garbage, it can’t make use of the valid data.

Users who need finer control should use the read method directly.

Naming

It’s unfortunate that write_all used WriteZero for its ErrorKind; were it named UnexpectedEOF (which is a much more intuitive name), the same ErrorKind could be used for both functions.

The initial proposal for this read_exact method called it read_all, for symmetry with write_all. However, that name could also be interpreted as “read as many bytes as you can that fit on this buffer, and return what you could read” instead of “read enough bytes to fill this buffer, and fail if you couldn’t read them all”. The previous discussion led to read_exact for the latter meaning, and read_full for the former meaning.
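The difference is easiest to see in the two signatures (read_full is shown here only for contrast; it is not part of this proposal):

// Proposed: fail with ErrorKind::UnexpectedEOF unless buf was filled entirely.
fn read_exact(&mut self, buf: &mut [u8]) -> io::Result<()>;

// For contrast only: read as many bytes as fit and report how many that was.
fn read_full(&mut self, buf: &mut [u8]) -> io::Result<usize>;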

Drawbacks

If this method fails, the buffer contents are undefined; the read_exact method might have partially overwritten it. If the caller requires “all-or-nothing” semantics, it must clone the buffer. In most use cases, this is not a problem; the caller will discard or overwrite the buffer in case of failure.

In the same way, if this method fails, there is no way to determine how many bytes were read before the method concluded it couldn’t completely fill the buffer.

Situations that require lower level control can still use read directly.

Alternatives

The first alternative is to do nothing. Every Rust user needing this functionality continues to write their own read_full or read_exact function, or have to track down an external crate just for one straightforward and commonly used convenience method. Additionally, unless everybody uses the same external crate, every reimplementation of this method will have slightly different error handling, complicating mixing users of multiple copies of this convenience method.

The second alternative is to just add the ErrorKind::UnexpectedEOF or similar. This would lead in the long run to everybody using the same error handling for their version of this convenience method, simplifying mixing their uses. However, it’s questionable to add an ErrorKind variant which is never used by the standard library.

Another alternative is to return the number of bytes read in the error case. That makes the buffer contents defined also in the error case, at the cost of increasing the size of the frequently-used io::Error struct, for a rarely used return value. My objections to this alternative are:

  • If the caller has a use for the partially written buffer contents, then it’s treating the “buffer partially filled” case as an alternative success case, not as a failure case. This is not a good match for the semantics of an Err return.
  • Determining that the buffer cannot be completely filled can in some cases be much faster than doing a partial copy. Many callers are not going to be interested in an incomplete read, meaning that all the work of filling the buffer is wasted.
  • As mentioned, it increases the size of a commonly used type in all cases, even when the code has no mention of read_exact.

The final alternative is read_full, which returns the number of bytes read (Result<usize>) instead of failing. This means that every caller has to check the return value against the size of the passed buffer, and some are going to forget (or misimplement) the check. It also prevents some optimizations (like the early return in case there will never be enough data). There are, however, valid use cases for this alternative; for instance, reading a file in fixed-size chunks, where the last chunk (and only the last chunk) can be shorter. I believe this should be discussed as a separate proposal; its pros and cons are distinct enough from this proposal to merit its own arguments.

I believe that the case for read_full is weaker than read_exact, for the following reasons:

  • While read_exact needs an extra variant in ErrorKind, read_full has no new error cases. This means that implementing it yourself is easy, and multiple implementations have no drawbacks other than code duplication.
  • While read_exact can be optimized with an early return in cases where the reader knows its total size (for instance, reading from a compressed file where the uncompressed size was given in a header), read_full has to always write to the output buffer, so there’s not much to gain over a generic looping implementation calling read.

Summary

Custom coercions allow smart pointers to fully participate in the DST system. In particular, they allow practical use of Rc<T> and Arc<T> where T is unsized.

This RFC subsumes part of RFC 401 coercions.

Motivation

DST is not really finished without this; in particular, there is a need for types like reference-counted trait objects (Rc<Trait>), which are not currently well-supported (without coercions, it is pretty much impossible to create such values with such a type).

Detailed design

There is an Unsize trait and lang item. This trait signals that a type can be converted using the compiler’s coercion machinery from a sized to an unsized type. All implementations of this trait are implicit and compiler generated. It is an error to implement this trait. If &T can be coerced to &U then there will be an implementation of Unsize<U> for T. E.g., [i32; 42]: Unsize<[i32]>. Note that the existence of an Unsize impl does not itself signify that a coercion can take place; it represents an internal part of the coercion mechanism (it corresponds with coerce_inner from RFC 401). The trait is defined as:

#[lang="unsize"]
trait Unsize<T: ?Sized>: ::std::marker::PhantomFn<Self, T> {}

There are implementations for any fixed size array to the corresponding unsized array, for any type to any trait that that type implements, for structs and tuples where the last field can be unsized, and for any pair of traits where Self is a sub-trait of T (see RFC 401 for more details).

There is a CoerceUnsized trait which is implemented by smart pointer types to opt-in to DST coercions. It is defined as:

#[lang="coerce_unsized"]
trait CoerceUnsized<Target>: ::std::marker::PhantomFn<Self, Target> + Sized {}

An example implementation:

impl<T: ?Sized+Unsize<U>, U: ?Sized> CoerceUnsized<Rc<U>> for Rc<T> {}
impl<T: Zeroable+CoerceUnsized<U>, U: Zeroable> CoerceUnsized<NonZero<U>> for NonZero<T> {}

// For reference, the definitions of Rc and NonZero:
pub struct Rc<T: ?Sized> {
    _ptr: NonZero<*mut RcBox<T>>,
}
pub struct NonZero<T: Zeroable>(T);

Implementing CoerceUnsized indicates that the self type should be able to be coerced to the Target type. E.g., the above implementation means that Rc<[i32; 42]> can be coerced to Rc<[i32]>. There will be CoerceUnsized impls for the various pointer kinds available in Rust which allow coercions; therefore, CoerceUnsized used as a bound indicates coercible types. E.g.,

fn foo<T: CoerceUnsized<U>, U>(x: T) -> U {
    x
}

Built-in pointer impls:

impl<'a, 'b: 'a, T: ?Sized+Unsize<U>, U: ?Sized> CoerceUnsized<&'a U> for &'b mut T {}
impl<'a, T: ?Sized+Unsize<U>, U: ?Sized> CoerceUnsized<&'a mut U> for &'a mut T {}
impl<'a, T: ?Sized+Unsize<U>, U: ?Sized> CoerceUnsized<*const U> for &'a mut T {}
impl<'a, T: ?Sized+Unsize<U>, U: ?Sized> CoerceUnsized<*mut U> for &'a mut T {}

impl<'a, 'b: 'a, T: ?Sized+Unsize<U>, U: ?Sized> CoerceUnsized<&'a U> for &'b T {}
impl<'b, T: ?Sized+Unsize<U>, U: ?Sized> CoerceUnsized<*const U> for &'b T {}

impl<T: ?Sized+Unsize<U>, U: ?Sized> CoerceUnsized<*const U> for *mut T {}
impl<T: ?Sized+Unsize<U>, U: ?Sized> CoerceUnsized<*mut U> for *mut T {}

impl<T: ?Sized+Unsize<U>, U: ?Sized> CoerceUnsized<*const U> for *const T {}

Note that there are some coercions which are not given by CoerceUnsized, e.g., from safe to unsafe function pointers, so it really is a CoerceUnsized trait, not a general Coerce trait.
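As a usage sketch, the Rc impl shown earlier is what makes coercions like the following possible (here T = [i32; 3] and U = [i32]):

use std::rc::Rc;

fn main() {
    let sized: Rc<[i32; 3]> = Rc::new([1, 2, 3]);
    // Unsizing coercion via CoerceUnsized<Rc<[i32]>> for Rc<[i32; 3]>.
    let slice: Rc<[i32]> = sized;
    assert_eq!(slice.len(), 3);
}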

Compiler checking

On encountering an implementation of CoerceUnsized (type collection phase)

  • If the impl is for a built-in pointer type, we check nothing, otherwise…
  • The compiler checks that the Self type is a struct or tuple struct and that the Target type is a simple substitution of type parameters from the Self type, i.e., that Self is Foo<Ts>, Target is Foo<Us>, and there exist Vs and Xs (where the Xs are all type parameters) such that Target = [Vs/Xs]Self. (One day, with HKT, this could be a regular part of type checking; for now it must be an ad hoc check.) We might enforce that this substitution is of the form X/Y where X and Y are both formal type parameters of the implementation (I don’t think this is necessary, but it makes checking coercions easier and is satisfied for all smart pointers).
  • The compiler checks each field in the Self type against the corresponding field in the Target type. Assuming Fs is the type of a field in Self and Ft is the type of the corresponding field in Target, then either Ft <: Fs or Fs: CoerceUnsized<Ft> (note that this includes some built-in coercions, coercions unrelated to unsizing are excluded, these could probably be added later, if needed).
  • There must be only one non-PhantomData field that is coerced.
  • We record for each impl, the index of the field in the Self type which is coerced.

On encountering a potential coercion (type checking phase)

  • If we have an expression with type E where the type F is required during type checking and E is not a subtype of F, nor is it coercible using the built-in coercions, then we search for a bound of E: CoerceUnsized<F>. Note that we may not at this stage find the actual impl, but finding the bound is good enough for type checking.

  • If we require a coercion in the receiver of a method call or field lookup, we perform the same search that we currently do, except that where we currently check for coercions, we check for built-in coercions and then for CoerceUnsized bounds. We must also check for Unsize bounds for the case where the receiver is auto-deref’ed, but not autoref’ed.

On encountering an adjustment (translation phase)

  • In trans (which is post-monomorphisation) we should always be able to find an impl for any CoerceUnsized bound.
  • If the impl is for a built-in pointer type, then we use the current coercion code for the various pointer kinds (Box<T> has different behaviour than & and * pointers).
  • Otherwise, we lookup which field is coerced due to the opt-in coercion, move the object being coerced and coerce the field in question by recursing (the built-in pointers are the base cases).

Adjustment types

We add AdjustCustom to the AutoAdjustment enum as a placeholder for coercions due to a CoerceUnsized bound. I don’t think we need the UnsizeKind enum at all now, since all checking is postponed until trans or relies on traits and impls.

Drawbacks

Not as flexible as the previous proposal.

Alternatives

The original DST5 proposal contains a similar proposal with no opt-in trait, i.e., coercions are completely automatic and arbitrarily deep. This is a little too magical and unpredictable. It violates some ‘soft abstraction boundaries’ by interfering with the deep structure of objects, sometimes even automatically (and implicitly) allocating.

RFC 401 proposed a scheme where users write their own coercions using intrinsics. Although more flexible, this allows for implicit execution of arbitrary code. If we need the increased flexibility, I believe we can add a manual option to the CoerceUnsized trait backwards compatibly.

The proposed design could be tweaked: for example, we could change the CoerceUnsized trait in many ways (we experimented with an associated type to indicate the field type which is coerced, for example).

Unresolved questions

It is unclear to what extent DST coercions should support multiple fields that refer to the same type parameter. PhantomData<T> should definitely be supported as an “extra” field that’s skipped, but can all zero-sized fields be skipped? Are there cases where this would enable bypassing the abstractions that make some APIs safe?

Updates since being accepted

Since it was accepted, the RFC has been updated as follows:

  1. CoerceUnsized was specified to ignore PhantomData fields.
  • Feature Name: exit
  • Start Date: 2015-03-24
  • RFC PR: rust-lang/rfcs#1011
  • Rust Issue: (leave this empty)

Summary

Add a function to the std::process module to exit the process immediately with a specified exit code.

Motivation

Currently there is no stable method to exit a program in Rust with a nonzero exit code without panicking. The current unstable method for doing so is by using the exit_status feature with the std::env::set_exit_status function.

This function has not been stabilized as it diverges from the system APIs (there is no equivalent) and it represents an odd piece of global state for a Rust program to have. One example of odd behavior that may arise is that if a library calls env::set_exit_status, then the process is not guaranteed to exit with that status (e.g. Rust was called from C).

The purpose of this RFC is to put at least one method on the path to stabilization which provides a way to exit a process with an arbitrary exit code.

Detailed design

The following function will be added to the std::process module:

/// Terminates the current process with the specified exit code.
///
/// This function will never return and will immediately terminate the current
/// process. The exit code is passed through to the underlying OS and will be
/// available for consumption by another process.
///
/// Note that because this function never returns, and that it terminates the
/// process, no destructors on the current stack or any other thread's stack
/// will be run. If a clean shutdown is needed it is recommended to only call
/// this function at a known point where there are no more destructors left
/// to run.
pub fn exit(code: i32) -> !;

Implementation-wise this will correspond to the exit function on unix and the ExitProcess function on windows.

This function is also not marked unsafe, despite the risk of leaking allocated resources (e.g. destructors may not be run). It is already possible to safely create memory leaks in Rust, however, (with Rc + RefCell), so this is not considered a strong enough threshold to mark the function as unsafe.
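As a usage sketch (run is a hypothetical application entry point), callers are expected to reach a point with no live destructors before exiting:

use std::process;

fn run() -> bool {
    // ... application logic; returns false on failure ...
    true
}

fn main() {
    let ok = run();
    // Everything created in run() has been dropped by now, so exiting
    // here skips no cleanup; exit itself runs no destructors.
    if !ok {
        process::exit(1);
    }
}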

Drawbacks

  • This API does not solve all use cases of exiting with a nonzero exit status. It is sometimes more convenient to simply return a code from the main function instead of having to call a separate function in the standard library.

Alternatives

  • One alternative would be to stabilize set_exit_status as-is today. The semantics of the function would be clearly documented to prevent against surprises, but it would arguably not prevent all surprises from arising. Some reasons for not pursuing this route, however, have been outlined in the motivation.

  • The main function of binary programs could be altered to require an i32 return value. This would greatly lessen the need to stabilize this function as-is today as it would be possible to exit with a nonzero code by returning a nonzero value from main. This is a backwards-incompatible change, however.

  • The main function of binary programs could optionally be typed as fn() -> i32 instead of just fn(). This would be a backwards-compatible change, but does somewhat add complexity. It may strike some as odd to be able to define the main function with two different signatures in Rust. Additionally, it’s likely that the exit functionality proposed will be desired regardless of whether the main function can return a code or not.

Unresolved questions

  • To what degree should the documentation imply that rt::at_exit handlers are run? Implementation-wise their execution is guaranteed, but we may not wish for this to always be so.

Summary

Calling println! currently panics if stdout does not exist. Change this to ignore that specific error and simply void the output.

Motivation

On Linux stdout almost always exists, so when people write games and turn off the terminal there is still a stdout that they write to. Then, when getting the code to run on Windows with the console disabled, stdout suddenly doesn’t exist and println! panics. This behavior difference is frustrating to developers trying to move to Windows.

There is also precedent with C and C++. On both Linux and Windows, if stdout is closed or doesn’t exist, neither platform will error when attempting to print to the console.

Detailed design

When using any of the convenience macros that write to either stdout or stderr, such as println!, print!, panic!, and assert!, change the implementation to ignore the specific error of stdout or stderr not existing. The behavior of all other errors will be unaffected. This can be implemented by redirecting stdout and stderr to std::io::sink if the original handles do not exist; a sketch of this fallback follows the list below.

Update the methods std::io::stdin, std::io::stdout, and std::io::stderr as follows:

  • If stdout or stderr does not exist, return the equivalent of std::io::sink.
  • If stdin does not exist, return the equivalent of std::io::empty.
  • For the raw versions, return a Result, and if the respective handle does not exist, return an Err.
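A minimal sketch of that fallback, assuming a hypothetical raw_stdout() lookup that fails when the OS-level handle is missing (both the name and the stub are illustrative only):

use std::fs::File;
use std::io::{self, Write};

// Hypothetical: look up the OS-level stdout handle, failing if the
// process has none (e.g. a Windows program with the console disabled).
fn raw_stdout() -> io::Result<File> {
    Err(io::Error::new(io::ErrorKind::NotFound, "no stdout"))
}

// Sketch of the proposed behavior: redirect to io::sink() when the
// handle is missing, so println! output is silently discarded.
fn stdout() -> Box<Write> {
    match raw_stdout() {
        Ok(handle) => Box::new(handle),
        Err(_) => Box::new(io::sink()),
    }
}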

Drawbacks

  • Hides an error from the user which we may want to expose and may lead to people missing panics occurring in threads.
  • Some languages, such as Ruby and Python, do throw an exception when stdout is missing.

Alternatives

  • Make println!, print!, panic!, and assert! return errors that the user has to handle. This would lose a large part of the convenience of these macros.
  • Continue with the status quo and panic if stdout or stderr doesn’t exist.
  • For std::io::stdin, std::io::stdout, and std::io::stderr, make them return a Result. This would be a breaking change to the signature, so if this is desired it should be done immediately before 1.0.
  • Alternatively, make the objects returned by these methods error upon attempting to write to/read from them if their respective handle doesn’t exist.

Unresolved questions

  • Which is better? Breaking the signatures of those three methods in std::io, making them silently redirect to empty/sink, or erroring upon attempting to write to/read from the handle?

Summary

This RFC proposes the following changes:

  1. Modify the orphan rules so that impls of remote traits require a local type that is either a struct/enum/trait defined in the current crate LT = LocalTypeConstructor<...> or a reference to a local type LT = ... | &LT | &mut LT.
  2. Restrict negative reasoning so it too obeys the orphan rules.
  3. Introduce an unstable #[fundamental] attribute that can be used to extend the above rules in select cases (details below).

Motivation

The current orphan rules are oriented around allowing as many remote traits as possible. As so often happens, giving power to one party (in this case, downstream crates) turns out to be taking power away from another (in this case, upstream crates). The problem is that due to coherence, the ability to define impls is a zero-sum game: every impl that is legal to add in a child crate is also an impl that a parent crate cannot add without fear of breaking downstream crates. A detailed look at these problems is presented here; this RFC doesn’t go over the problems in detail, but will reproduce some of the examples found in that document.

This RFC proposes a shift that attempts to strike a balance between the needs of downstream and upstream crates. In particular, we wish to preserve the ability of upstream crates to add impls to traits that they define, while still allowing downstream crates to define the sorts of impls they need.

While exploring the problem, we found that in practice remote impls almost always are tied to a local type or a reference to a local type. For example, here are some impls from the definition of Vec:

// tied to Vec<T>
impl<T> Send for Vec<T>
    where T: Send

// tied to &Vec<T>
impl<'a,T> IntoIterator for &'a Vec<T>

On this basis, we propose that we limit remote impls to require that they include a type either defined in the current crate or a reference to a type defined in the current crate. This is more restrictive than the current definition, which merely requires a local type appear somewhere. So, for example, under this definition MyType and &MyType would be considered local, but Box<MyType>, Option<MyType>, and (MyType, i32) would not.
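For example, suppose RemoteTrait is a trait defined in an upstream crate and MyType is defined locally (both names are hypothetical). Under the proposed rules:

impl RemoteTrait for MyType { }          // OK: MyType is local
impl<'a> RemoteTrait for &'a MyType { }  // OK: reference to a local type
// impl RemoteTrait for Box<MyType>      // error: Box<MyType> is not local
// impl RemoteTrait for Option<MyType>   // error: Option<MyType> is not local
// impl RemoteTrait for (MyType, i32)    // error: a tuple is not a reference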

Furthermore, we limit the use of negative reasoning to obey the orphan rules. That is, just as a crate cannot define an impl Type: Trait unless Type or Trait is local, it cannot rely that Type: !Trait holds unless Type or Trait is local.

Together, these two changes cause very little code breakage while retaining a lot of freedom to add impls in a backwards compatible fashion. However, they are not quite sufficient to compile all the most popular cargo crates (though they almost succeed). Therefore, we propose a simple, unstable attribute #[fundamental] (described below) that can be used to extend the system to accommodate some additional patterns and types. This attribute is unstable because it is not clear whether it will prove to be adequate or need to be generalized; this part of the design can be considered somewhat incomplete, and we expect to finalize it based on what we observe after the 1.0 release.

Practical effect

Effect on parent crates

When you first define a trait, you must also decide whether that trait should have (a) a blanket impl for all T and (b) any blanket impls over references. These blanket impls cannot be added later without a major version bump, for fear of breaking downstream clients.

Here are some examples of the kinds of blanket impls that must be added right away:

impl<T:Foo> Bar for T { }
impl<'a,T:Bar> Bar for &'a T { }

Effect on child crates

Under the base rules, child crates are limited to impls that use local types or references to local types. They are also prevented from relying on the fact that Type: !Trait unless either Type or Trait is local. This turns out to have very little impact.

In compiling the libstd facade and librustc, exactly two impls were found to be illegal, both of which followed the same pattern:

struct LinkedListEntry<'a> {
    data: i32,
    next: Option<&'a LinkedListEntry<'a>>
}

impl<'a> Iterator for Option<&'a LinkedListEntry<'a>> {
    type Item = i32;

    fn next(&mut self) -> Option<i32> {
        if let Some(ptr) = *self {
            *self = ptr.next;
            Some(ptr.data)
        } else {
            None
        }
    }
}

The problem here is that Option<&LinkedListEntry> is no longer considered a local type. A similar restriction is that one cannot define an impl over Box<LinkedListEntry>, but this was not observed in practice.

Both of these restrictions can be overcome by using a new type. For example, the code above could be changed so that instead of writing the impl for Option<&LinkedListEntry>, we define a type LinkedList that wraps the option and implement on that:

struct LinkedListEntry<'a> {
    data: i32,
    next: LinkedList<'a>
}

struct LinkedList<'a> {
    data: Option<&'a LinkedListEntry<'a>>
}

impl<'a> Iterator for LinkedList<'a> {
    type Item = i32;

    fn next(&mut self) -> Option<i32> {
        if let Some(ptr) = self.data {
            self.data = ptr.next.data;
            Some(ptr.data)
        } else {
            None
        }
    }
}

Errors from cargo and the fundamental attribute

We also applied our prototype to all the “Most Downloaded” cargo crates as well as the iron crate. That exercise uncovered a few patterns that the simple rules presented thus far can’t handle.

The first is that it is common to implement traits over boxed trait objects. For example, the error crate defines an impl:

  • impl<E: Error> FromError<E> for Box<Error>

Here, Error is a local trait defined in error, but FromError is the trait from libstd. This impl would be illegal because Box<Error> is not considered local as Box is not local.

The second is that it is common to use FnMut in blanket impls, similar to how the Pattern trait in libstd works. The regex crate in particular has the following impls:

  • impl<'t> Replacer for &'t str
  • impl<F> Replacer for F where F: FnMut(&Captures) -> String
  • these are in conflict because this requires that &str: !FnMut, and neither &str nor FnMut are local to regex

Given that overloading over closures is likely to be a common request, and that the Fn traits are well-known, core traits tied to the call operator, it seems reasonable to say that implementing a Fn trait is itself a breaking change. (This is not to suggest that there is something fundamental about the Fn traits that distinguish them from all other traits; just that if the goal is to have rules that users can easily remember, saying that implementing a core operator trait is a breaking change may be a reasonable rule, and it enables useful patterns to boot – patterns that are baked into the libstd APIs.)

To accommodate these cases (and future cases we will no doubt encounter), this RFC proposes an unstable attribute #[fundamental]. #[fundamental] can be applied to types and traits with the following meaning:

  • A #[fundamental] type Foo is one where implementing a blanket impl over Foo is a breaking change. As described, & and &mut are fundamental. This attribute would be applied to Box, making Box behave the same as & and &mut with respect to coherence.
  • A #[fundamental] trait Foo is one where adding an impl of Foo for an existing type is a breaking change. For now, the Fn traits and Sized would be marked fundamental, though we may want to extend this set to all operators or some other more-easily-remembered set.

The #[fundamental] attribute is intended to be a kind of “minimal commitment” that still permits the most important impl patterns we see in the wild. Because it is unstable, it can only be used within libstd for now. We are eventually committed to finding some way to accommodate the patterns above – which could be as simple as stabilizing #[fundamental] (or, indeed, reverting this RFC altogether). It could also be a more general mechanism that lets users specify more precisely what kind of impls are reserved for future expansion and which are not.

Detailed Design

Proposed orphan rules

Given an impl impl<P1...Pn> Trait<T1...Tn> for T0, either Trait must be local to the current crate, or:

  1. At least one type must meet the LT pattern defined above. Let Ti be the first such type.
  2. No type parameters P1...Pn may appear in the type parameters that precede Ti (that is, Tj where j < i).

Type locality and negative reasoning

Currently the overlap check employs negative reasoning to segregate blanket impls from other impls. For example, the following pair of impls would be legal only if MyType<U>: !Copy for all U (the notation Type: !Trait is borrowed from RFC 586):

impl<T:Copy> Clone for T {..}
impl<U> Clone for MyType<U> {..}

This proposal places limits on negative reasoning based on the orphan rules. Specifically, we cannot conclude that a proposition like T0: !Trait<T1..Tn> holds unless T0: Trait<T1..Tn> meets the orphan rules as defined in the previous section.

In practice this means that, by default, you can only assume negative things about traits and types defined in your current crate, since those are under your direct control. This permits parent crates to add any impls except for blanket impls over T, &T, or &mut T, as discussed before.

Effect on ABI compatibility and semver

We have not yet proposed a comprehensive semver RFC (it’s coming). However, this RFC has some effect on what that RFC would say. As discussed above, it is a breaking change to add a blanket impl for a #[fundamental] type. It is also a breaking change to add an impl of a #[fundamental] trait to an existing type.

Drawbacks

The primary drawback is that downstream crates cannot write an impl over types other than references, such as Option<LocalType>. This can be overcome by defining wrapper structs (new types), but that can be annoying.

Alternatives

  • Status quo. In the status quo, the balance of power is heavily tilted towards child crates. Parent crates basically cannot add any impl for an existing trait to an existing type without potentially breaking child crates.

  • Take a hard line. We could forego the #[fundamental] attribute, but it would force people to forego Box<Trait> impls as well as the useful closure-overloading pattern. This seems unfortunate. Moreover, it seems likely we will encounter further examples of “reasonable cases” that #[fundamental] can easily accommodate.

  • Specializations, negative impls, and contracts. The gist referenced earlier includes a section covering various alternatives that I explored which came up short. These include specialization, explicit negative impls, and explicit contracts between the trait definer and the trait consumer.

Unresolved questions

None.

Summary

Add the Default, IntoIterator, and ToOwned traits to the prelude.

Motivation

Each trait has a distinct motivation:

  • For Default, the ergonomics have vastly improved now that you can write MyType::default() (thanks to UFCS). Thanks to this improvement, it now makes more sense to promote widespread use of the trait.

  • For IntoIterator, promoting to the prelude will make it feasible to deprecate the inherent into_iter methods and directly-exported iterator types, in favor of the trait (which is currently redundant).

  • For ToOwned, promoting to the prelude would add a uniform, idiomatic way to acquire an owned copy of data (including going from str to String, for which Clone does not work); a combined usage sketch follows this list.
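A short usage sketch of the three traits once they are in scope by default (written against today’s APIs for illustration):

fn main() {
    // ToOwned: a uniform owned-copy operation, including str -> String.
    let s: String = "hello".to_owned();

    // IntoIterator: iterate via the trait rather than an inherent method.
    let total: i32 = vec![1, 2, 3].into_iter().fold(0, |a, b| a + b);

    // Default: ergonomic thanks to UFCS.
    let zero = i32::default();

    assert_eq!(s.len(), 5);
    assert_eq!(total + zero, 6);
}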

Detailed design

  • Add the Default, IntoIterator, and ToOwned traits to the prelude.

  • Deprecate inherent into_iter methods.

  • Ultimately deprecate module-level IntoIter types (e.g. in vec); this may want to wait until you can write Vec<T>::IntoIter rather than <Vec<T> as IntoIterator>::IntoIter.

Drawbacks

The main downside is that prelude entries eat up some amount of namespace (particularly, method namespace). However, these are all important, core traits in std, meaning that the method names are already quite unlikely to be used.

Strictly speaking, a prelude addition is a breaking change, but as above, this is highly unlikely to cause actual breakage. In any case, it can be landed prior to 1.0.

Alternatives

None.

Unresolved questions

The exact timeline of deprecation for IntoIter types.

Are there other traits or types that should be promoted before 1.0?

Summary

This RFC suggests stabilizing a reduced-scope Duration type that is appropriate for interoperating with various system calls that require timeouts. It does not stabilize a large number of conversion methods in Duration that have subtle caveats, with the intent of revisiting those conversions more holistically in the future.

Motivation

There are a number of different notions of “time”, each of which has a different set of caveats, and each of which can be designed for optimal ergonomics for its domain. This proposal focuses on one particular one: an amount of time in high-precision units.

Eventually, there are a number of concepts of time that deserve fleshed out APIs. Using the terminology from the popular Java time library JodaTime:

  • Duration: an amount of time, described in terms of a high precision unit.
  • Period: an amount of time described in human terms (“5 minutes, 27 seconds”), and which can only be resolved into a Duration relative to a moment in time.
  • Instant: a moment in time represented in terms of a Duration since some epoch.

Human complications such as leap seconds, days in a month, and leap years, and machine complications such as NTP adjustments make these concepts and their full APIs more complicated than they would at first appear. This proposal focuses on fleshing out a design for Duration that is sufficient for use as a timeout, leaving the other concepts of time to a future proposal.


For the most part, the system APIs that this type is used to communicate with either use timespec (u64 seconds plus u32 nanos) or take a timeout in milliseconds (u32 on Windows).

For example, GetQueuedCompletionStatus, one of the primary APIs in the Windows IOCP API, takes a dwMilliseconds parameter as a DWORD, which is a u32. Some Windows APIs use “ticks” or 100-nanosecond units.

In light of that, this proposal has two primary goals:

  • to define a type that can describe portable timeouts for cross-platform APIs
  • to describe what should happen if a large Duration is passed into an API that does not accept timeouts that large

In general, this proposal considers it acceptable to reduce the granularity of timeouts (eliminating nanosecond granularity if only milliseconds are supported) and to truncate very large timeouts.

This proposal retains the two fields in the existing Duration:

  • a u64 of seconds
  • a u32 of additional nanosecond precision

Timeout APIs defined in terms of milliseconds will truncate Durations that are more than u32::MAX in milliseconds, and will reduce the granularity of the nanosecond field.

A u32 of milliseconds supports a timeout longer than 45 days.

Future APIs to support a broader set of Durations APIs, a Period and Instant type, as well as coercions between these types, would be useful, compatible follow-ups to this RFC.

Detailed design

A Duration represents a period of time expressed in terms of nanosecond granularity. It has u64 seconds and an additional u32 nanoseconds. There is no concept of a negative Duration.

A negative Duration has no meaning for many APIs that may wish to take a Duration, which means that all such APIs would need to decide what to do when confronted with a negative Duration. As a result, this proposal focuses on the predominant use-cases for Duration, where unsigned types remove a number of caveats and ambiguities.

pub struct Duration {
  secs: u64,
  nanos: u32 // must be less than 1 billion
}

impl Duration {
    /// create a Duration from a number of seconds and an
    /// additional nanosecond precision. If nanos is one
    /// billion or greater, it carries into secs.
    pub fn new(secs: u64, nanos: u32) -> Duration;

    /// create a Duration from a number of seconds
    pub fn from_secs(secs: u64) -> Duration;

    /// create a Duration from a number of milliseconds
    pub fn from_millis(millis: u64) -> Duration;

    /// the number of seconds represented by the Duration
    pub fn secs(self) -> u64;

    /// the number of additional nanosecond precision
    pub fn nanos(self) -> u32;
}

When Duration is used with a system API that expects u32 milliseconds, the Duration’s precision is coarsened to milliseconds, and the resulting value is truncated to u32::MAX.
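A sketch of that lowering under the stated rules (the helper name is hypothetical; nothing like it is proposed as a public API):

// Coarsen a (secs, nanos) Duration to a u32 millisecond timeout:
// sub-millisecond precision is dropped, and oversized values are
// truncated to u32::MAX.
fn to_ms_timeout(secs: u64, nanos: u32) -> u32 {
    let ms = secs.saturating_mul(1_000)
                 .saturating_add((nanos / 1_000_000) as u64);
    if ms > u32::MAX as u64 { u32::MAX } else { ms as u32 }
}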

In general, this RFC assumes that timeout APIs permit spurious wakeups (see, for example, pthread_cond_timedwait: “Spurious wakeups from the pthread_cond_timedwait() or pthread_cond_wait() functions may occur”).

Duration implements:

  • Add, Sub, Mul, Div, which follow the overflow and underflow rules for u64 when applied to the secs field (in particular, Sub will panic if the result would be negative). Nanoseconds must be less than 1 billion and greater than or equal to 0, and carry into the secs field (see the sketch after this list).
  • Display, which prints a number of seconds, milliseconds and nanoseconds (if more than 0). For example, a Duration would be represented as "15 seconds, 306 milliseconds, and 13 nanoseconds"
  • Debug, Ord (and PartialOrd), Eq (and PartialEq), Copy and Clone, which are derived.
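A brief sketch of the carry rule, written against the constructor and accessors proposed above (these are the names from this RFC, not necessarily today’s std):

fn main() {
    // 1.5 billion nanoseconds: one second carries into secs.
    let d = Duration::new(1, 1_500_000_000);
    assert_eq!(d.secs(), 2);
    assert_eq!(d.nanos(), 500_000_000);
}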

This proposal does not, at this time, include mechanisms for instantiating a Duration from weeks, days, hours or minutes, because there are caveats to each of those units. In particular, the existence of leap seconds means that it is only possible to properly understand them relative to a particular starting point.

The Joda-Time library in Java explains the problem well in their documentation:

A duration in Joda-Time represents a duration of time measured in milliseconds. The duration is often obtained from an interval. Durations are a very simple concept, and the implementation is also simple. They have no chronology or time zone, and consist solely of the millisecond duration.

A period in Joda-Time represents a period of time defined in terms of fields, for example, 3 years 5 months 2 days and 7 hours. This differs from a duration in that it is inexact in terms of milliseconds. A period can only be resolved to an exact number of milliseconds by specifying the instant (including chronology and time zone) it is relative to.

In short, this is saying that people expect “23:50:00 + 10 minutes” to equal “00:00:00”, but it’s impossible to know for sure whether that’s true unless you know the exact starting point so you can take leap seconds into consideration.

In order to address this confusion, Joda-Time’s Duration has methods like standardDays/toStandardDays and standardHours/toStandardHours, which are meant to indicate to the user that the number of milliseconds is based on the standard number of milliseconds in an hour, rather than the colloquial notion of an “hour”.

An approach like this could work for Rust, but this RFC is intentionally limited in scope to areas without substantial tradeoffs in an attempt to allow a minimal solution to progress more quickly.

This proposal does not include a method to get a number of milliseconds from a Duration, because the number of milliseconds could exceed u64, and we would have to decide whether to return an Option, panic, or wait for a standard bignum. In the interest of limiting this proposal to APIs with a straight-forward design, this proposal defers such a method.

Drawbacks

The main drawback to this proposal is that it is significantly more minimal than the existing Duration API. However, this API is quite sufficient for timeouts, and without the caveats in the existing Duration API.

Alternatives

We could stabilize the existing Duration API. However, it has a number of serious caveats:

  • The caveats described above about some of the units it supports.
  • It supports converting a Duration into a number of microseconds or nanoseconds. Because that cannot be done reliably, those methods return Options, and APIs that need to convert Duration into nanoseconds have to re-surface the Option (unergonomic) or panic.
  • More generally, it has a fairly large API surface area, and almost every method has some caveat that would need to be explored in order to stabilize it.

We could also include a number of convenience APIs that convert from other units into Durations. This proposal assumes that some of those conveniences will eventually be added. However, the design of each of those conveniences is ambiguous, so they are not included in this initial proposal.


Finally, we could avoid any API for timeouts, and simply take milliseconds throughout the standard library. However, this has two drawbacks.

First, it does not allow us to represent higher-precision timeouts on systems that could support them.

Second, while this proposal does not yet include conveniences, it assumes that some conveniences should be added in the future once the design space is more fully explored. Starting with a simple type gives us space to grow into.

Unresolved questions

  • Should we implement all of the listed traits? Others?

Summary

Expand the scope of the std::fs module by enhancing existing functionality, exposing lower-level representations, and adding a few new functions.

Motivation

The current std::fs module serves many of the basic needs of interacting with a filesystem, but is missing a lot of useful functionality. For example, none of these operations are possible in stable Rust today:

  • Inspecting a file’s modification/access times
  • Reading low-level information like that contained in libc::stat
  • Inspecting the unix permission bits on a file
  • Blanket setting the unix permission bits on a file
  • Leveraging DirEntry for the extra metadata it might contain
  • Reading the metadata of a symlink (not what it points at)
  • Resolving all symlinks in a path

There is some more functionality listed in the RFC issue, but this RFC will not attempt to solve the entirety of that issue at this time. This RFC strives to expose APIs for much of the functionality listed above that is on the track to becoming #[stable] soon.

Non-goals of this RFC

There are a few areas of the std::fs API surface which are not considered goals for this RFC. It will be left for future RFCs to add new APIs for these areas:

  • Enhancing copy to copy directories recursively or configuring how copying happens.
  • Enhancing or stabilizing walk and its functionality.
  • Temporary files or directories

Detailed design

First, a vision for lowering APIs in general will be presented, and then a number of specific APIs will each be proposed. Many of the proposed APIs are independent from one another, and this RFC may not be implemented all in one go but instead piecemeal over time, allowing the designs to evolve slightly in the meantime.

Lowering APIs

The vision for the os module

One of the principles of IO reform was to:

Provide hooks for integrating with low-level and/or platform-specific APIs.

The original RFC went into some amount of detail for how this would look, in particular by use of the os module. Part of the goal of this RFC is to flesh out that vision in more detail.

Ultimately, the organization of os is planned to look something like the following:

os
  unix          applicable to all cfg(unix) platforms; high- and low-level APIs
    io            extensions to std::io
    fs            extensions to std::fs
    net           extensions to std::net
    env           extensions to std::env
    process       extensions to std::process
    ...
  linux         applicable to linux only
    io, fs, net, env, process, ...
  macos         ...
  windows       ...

APIs whose behavior is platform-specific are provided only within the std::os hierarchy, making it easy to audit for usage of such APIs. Organizing the platform modules internally in the same way as std makes it easy to find relevant extensions when working with std.

It is emphatically not the goal of the std::os::* modules to provide bindings to all system APIs for each platform; this work is left to external crates. The goals are rather to:

  1. Facilitate interop between abstract types like File that std provides and the underlying system. This is done via “lowering”: extension traits like AsRawFd allow you to extract low-level, platform-specific representations out of std types like File and TcpStream.

  2. Provide high-level but platform-specific APIs that feel like those in the rest of std. Just as with the rest of std, the goal here is not to include all possible functionality, but rather the most commonly-used or fundamental.

Lowering makes it possible for external crates to provide APIs that work “seamlessly” with std abstractions. For example, a crate for Linux might provide an epoll facility that can work directly with std::fs::File and std::net::TcpStream values, completely hiding the internal use of file descriptors. Eventually, such a crate could even be merged into std::os::unix, with minimal disruption – there is little distinction between std and other crates in this regard.

Concretely, lowering has two ingredients:

  1. Introducing one or more “raw” types that are generally direct aliases for C types (more on this in the next section).

  2. Providing an extension trait that makes it possible to extract a raw type from a std type. In some cases, it’s possible to go the other way around as well. The conversion can be by reference or by value, where the latter is used mainly to avoid the destructor associated with a std type (e.g. to extract a file descriptor from a File and eliminate the File object, without closing the file).

While we do not seek to exhaustively bind types or APIs from the underlying system, it is a goal to provide lowering operations for every high-level type to a system-level data type, whenever applicable. This RFC proposes several such lowerings that are currently missing from std::fs.
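As a concrete example of by-reference lowering on unix, using the AsRawFd extension trait that already exists today:

use std::fs::File;
use std::os::unix::io::AsRawFd;

fn main() {
    let file = File::open("/etc/hosts").expect("open failed");
    // By-reference lowering: `file` still owns the descriptor and will
    // close it when dropped.
    let fd = file.as_raw_fd();
    println!("raw fd: {}", fd);
}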

std::os::platform::raw

Each of the primitives in the standard library will expose the ability to be lowered into its component abstraction, facilitating the need to define these abstractions and organize them in the platform-specific modules. This RFC proposes the following guidelines for doing so:

  • Each platform will have a raw module inside of std::os which houses all of its platform specific definitions.
  • Only type definitions will be contained in raw modules, no function bindings, methods, or trait implementations.
  • Cross-platform types (e.g. those shared on all unix platforms) will be located in the respective cross-platform module. Types which only differ in the width of an integer type are considered to be cross-platform.
  • Platform-specific types will exist only in the raw module for that platform. A platform-specific type may have different field names, components, or just not exist on other platforms.

Differences in integer widths are not considered to be enough of a platform difference to define in each separate platform’s module, meaning that it will be possible to write code that uses os::unix but doesn’t compile on all Unix platforms. It is believed that most consumers of these types will continue to store the same type (e.g. not assume it’s an i32) throughout the application or immediately cast it to a known type.

To reiterate, it is not planned for each raw module to provide exhaustive bindings to each platform. Only those abstractions which the standard library is lowering into will be defined in each raw module.

Lowering Metadata (all platforms)

Currently the Metadata structure exposes very few pieces of information about a file. Some of this is because the information is not available across all platforms, but some of it is also because the standard library does not have the appropriate abstraction to return at this time (e.g. time stamps). The raw contents of Metadata (a stat on Unix), however, should be accessible via lowering no matter what.

The following trait hierarchy and new structures will be added to the standard library.

mod os::windows::fs {
    pub trait MetadataExt {
        fn file_attributes(&self) -> u32; // `dwFileAttributes` field
        fn creation_time(&self) -> u64; // `ftCreationTime` field
        fn last_access_time(&self) -> u64; // `ftLastAccessTime` field
        fn last_write_time(&self) -> u64; // `ftLastWriteTime` field
        fn file_size(&self) -> u64; // `nFileSizeHigh`/`nFileSizeLow` fields
    }
    impl MetadataExt for fs::Metadata { ... }
}

mod os::unix::fs {
    pub trait MetadataExt {
        fn as_raw(&self) -> &Metadata;
    }
    impl MetadataExt for fs::Metadata { ... }

    pub struct Metadata(raw::stat);
    impl Metadata {
        // Accessors for fields available in `raw::stat` for *all* unix platforms
        fn dev(&self) -> raw::dev_t; // st_dev field
        fn ino(&self) -> raw::ino_t; // st_ino field
        fn mode(&self) -> raw::mode_t; // st_mode field
        fn nlink(&self) -> raw::nlink_t; // st_nlink field
        fn uid(&self) -> raw::uid_t; // st_uid field
        fn gid(&self) -> raw::gid_t; // st_gid field
        fn rdev(&self) -> raw::dev_t; // st_rdev field
        fn size(&self) -> raw::off_t; // st_size field
        fn blksize(&self) -> raw::blksize_t; // st_blksize field
        fn blocks(&self) -> raw::blkcnt_t; // st_blocks field
        fn atime(&self) -> (i64, i32); // st_atime field, (sec, nsec)
        fn mtime(&self) -> (i64, i32); // st_mtime field, (sec, nsec)
        fn ctime(&self) -> (i64, i32); // st_ctime field, (sec, nsec)
    }
}

// Fields that exist only on some platforms (e.g. st_flags, st_gen,
// st_lspare, st_birthtim, st_qspare) are exposed via the per-platform
// modules:
mod os::{linux, macos, freebsd, ...}::fs {
    pub mod raw {
        pub type dev_t = ...;
        pub type ino_t = ...;
        // ...
        pub struct stat {
            // ... same public fields as libc::stat
        }
    }
    pub trait MetadataExt {
        fn as_raw_stat(&self) -> &raw::stat;
    }
    impl MetadataExt for os::unix::fs::Metadata { ... }
    impl MetadataExt for fs::Metadata { ... }
}

The goal of this hierarchy is to expose all of the information in the OS-level metadata in as cross-platform of a method as possible while adhering to the design principles of the standard library.

The interesting part about working in a “cross platform” manner here is that the makeup of libc::stat can vary quite a bit between unix platforms. For example some platforms have a st_birthtim field while others do not. To enable as much ergonomic usage as possible, the os::unix module will expose the intersection of the metadata available in libc::stat across all unix platforms. The information is still exposed in a raw fashion (in terms of the values returned), but methods are required as the raw structure is not exposed. The unix platforms then leverage the more fine-grained modules in std::os (e.g. linux and macos) to return the raw libc::stat structure. This allows full access to the information in libc::stat on all platforms, with clear opt-in to platform-specific information.

One of the major goals of the os::unix::fs design is to enable as much functionality as possible when programming against “unix in general” while still allowing applications to choose to only program against macos, for example.

Fate of Metadata::{accessed, modified}

At this time there is no suitable type in the standard library to represent the return type of these two functions. The type would either have to be some form of time stamp or moment in time, both of which are difficult abstractions to add lightly.

Consequently, both of these functions will be deprecated in favor of requiring platform-specific code to access the modification/access time of files. This information is all available via the MetadataExt traits listed above.

Eventually, once a std type for cross-platform timestamps is available, these methods will be re-instated as returning that type.
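For example, platform-specific code could read a modification time through the proposed unix extension (this uses the API sketched above, not today’s std):

use std::fs;
use std::os::unix::fs::MetadataExt;

fn main() {
    let meta = fs::metadata("Cargo.toml").expect("stat failed");
    // as_raw() and the (sec, nsec) accessors are the proposed API.
    let (sec, nsec) = meta.as_raw().mtime();
    println!("mtime: {}.{:09}", sec, nsec);
}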

Lowering and setting Permissions (Unix)

Note: this section only describes behavior on unix.

Currently there is no stable method of inspecting the permission bits on a file, and it is unclear whether the current unstable methods of doing so, PermissionsExt::mode, should be stabilized. The main question around this piece of functionality is whether to provide a higher level abstraction (e.g. similar to the bitflags crate) for the permission bits on unix.

This RFC proposes considering the methods for stabilization as-is and not pursuing a higher level abstraction of the unix permission bits. To facilitate in their inspection and manipulation, however, the following constants will be added:

mod os::unix::fs {
    pub const USER_READ: raw::mode_t;
    pub const USER_WRITE: raw::mode_t;
    pub const USER_EXECUTE: raw::mode_t;
    pub const USER_RWX: raw::mode_t;
    pub const OTHER_READ: raw::mode_t;
    pub const OTHER_WRITE: raw::mode_t;
    pub const OTHER_EXECUTE: raw::mode_t;
    pub const OTHER_RWX: raw::mode_t;
    pub const GROUP_READ: raw::mode_t;
    pub const GROUP_WRITE: raw::mode_t;
    pub const GROUP_EXECUTE: raw::mode_t;
    pub const GROUP_RWX: raw::mode_t;
    pub const ALL_READ: raw::mode_t;
    pub const ALL_WRITE: raw::mode_t;
    pub const ALL_EXECUTE: raw::mode_t;
    pub const ALL_RWX: raw::mode_t;
    pub const SETUID: raw::mode_t;
    pub const SETGID: raw::mode_t;
    pub const STICKY_BIT: raw::mode_t;
}

Finally, the set_permissions function of the std::fs module is also proposed to be marked #[stable] soon as a method of blanket setting permissions for a file.

Constructing Permissions

Currently there is no method to construct an instance of Permissions on any platform. This RFC proposes adding the following APIs:

mod os::unix::fs {
    pub trait PermissionsExt {
        fn from_mode(mode: raw::mode_t) -> Self;
    }
    impl PermissionsExt for Permissions { ... }
}

This RFC does not propose yet adding a cross-platform way to construct a Permissions structure due to the radical differences between how unix and windows handle permissions.
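A usage sketch combining the proposed constants with from_mode (again, this is the proposed API; the constants do not exist in std today):

use std::fs;
use std::os::unix::fs::{self as unix_fs, PermissionsExt};

fn main() {
    // rwxr--r--: user has all bits, group and others read only.
    let mode = unix_fs::USER_RWX | unix_fs::GROUP_READ | unix_fs::OTHER_READ;
    let perms = fs::Permissions::from_mode(mode);
    fs::set_permissions("script.sh", perms).expect("set_permissions failed");
}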

Creating directories with permissions

Currently the standard library does not expose an API which allows setting the permission bits on unix or security attributes on Windows when creating a directory. This RFC proposes adding the following API to std::fs:

pub struct DirBuilder { ... }

impl DirBuilder {
    /// Creates a new set of options with default mode/security settings for all
    /// platforms and also non-recursive.
    pub fn new() -> Self;

    /// Indicate that directories should be created recursively, creating all
    /// parent directories that do not exist, with the same security and
    /// permissions settings.
    pub fn recursive(&mut self, recursive: bool) -> &mut Self;

    /// Create the specified directory with the options configured in this
    /// builder.
    pub fn create<P: AsRef<Path>>(&self, path: P) -> io::Result<()>;
}

mod os::unix::fs {
    pub trait DirBuilderExt {
        fn mode(&mut self, mode: raw::mode_t) -> &mut Self;
    }
    impl DirBuilderExt for DirBuilder { ... }
}

mod os::windows::fs {
    // once a `SECURITY_ATTRIBUTES` abstraction exists, this will be added
    pub trait DirBuilderExt {
        fn security_attributes(&mut self, ...) -> &mut Self;
    }
    impl DirBuilderExt for DirBuilder { ... }
}
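A usage sketch of the builder combined with the unix extension (creating nested directories, each with mode 0o755):

use std::fs::DirBuilder;
use std::os::unix::fs::DirBuilderExt;

fn main() {
    DirBuilder::new()
        .recursive(true) // create missing parents too
        .mode(0o755)     // unix-only extension method
        .create("a/b/c")
        .expect("create failed");
}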

This sort of builder is also extendable to other flavors of functions in the future, such as C++’s template parameter:

/// Use the specified directory as a "template" for permissions and security
/// settings of the new directories to be created.
///
/// On unix this will issue a `stat` of the specified directory and new
/// directories will be created with the same permission bits. On Windows
/// this will trigger the use of the `CreateDirectoryEx` function.
pub fn template<P: AsRef<Path>>(&mut self, path: P) -> &mut Self;

At this time, however, it is not proposed to add this method to DirBuilder.

Adding FileType

Currently there is no enumeration or newtype representing a list of “file types” on the local filesystem, partly because the need has not been high so far. In some situations, however, it is more efficient to learn the file type once instead of testing for each individual file type separately.

For example some platforms’ DirEntry type can know the FileType without an extra syscall. If code were to test a DirEntry separately for whether it’s a file or a directory, it may issue more syscalls than necessary, compared to learning the file type once and then testing it.

The full set of file types, however, is not always known nor portable across platforms, so this RFC proposes the following hierarchy:

#[derive(Copy, Clone, PartialEq, Eq, Hash)]
pub struct FileType(..);

impl FileType {
    pub fn is_dir(&self) -> bool;
    pub fn is_file(&self) -> bool;
    pub fn is_symlink(&self) -> bool;
}

Extension traits can be added in the future for testing for other more flavorful kinds of files on various platforms (such as unix sockets on unix platforms).
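A sketch of the intended usage pattern: fetch the file type once per entry, then branch on it:

use std::fs;

fn main() {
    for entry in fs::read_dir(".").expect("read_dir failed") {
        let entry = entry.expect("failed to read entry");
        // One file_type query instead of separate is_file/is_dir tests,
        // each of which may cost a syscall on some platforms.
        let ft = entry.file_type().expect("failed to read file type");
        if ft.is_dir() {
            println!("dir:  {:?}", entry.file_name());
        } else if ft.is_file() {
            println!("file: {:?}", entry.file_name());
        }
    }
}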

Dealing with is_{file,dir} and file_type methods

Currently the fs::Metadata structure exposes stable is_file and is_dir accessors. The struct will also grow a file_type accessor returning the FileType struct being added. It is proposed that Metadata will retain the is_{file,dir} convenience methods, but no other “file type testers” will be added.

Symlinks

Currently the std::fs module provides soft_link and read_link functions, but there is no method of performing other symlink-related tasks, such as:

  • Testing whether a file is a symlink
  • Reading the metadata of a symlink, not what it points to

The following APIs will be added to std::fs:

/// Returns the metadata of the file pointed to by `p`; unlike `metadata`,
/// this function will **not** follow symlinks.
pub fn symlink_metadata<P: AsRef<Path>>(p: P) -> io::Result<Metadata>;
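
For example, combined with the FileType addition above, a caller can test whether a path is itself a symlink (the helper function is hypothetical):

use std::fs;
use std::io;
use std::path::Path;

/// Returns true if `path` is itself a symbolic link.
fn is_symlink(path: &Path) -> io::Result<bool> {
    Ok(fs::symlink_metadata(path)?.file_type().is_symlink())
}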

Binding realpath

There’s a long-standing issue that the unix function realpath is not bound, and this RFC proposes adding the following API to the fs module:

/// Canonicalizes the given file name to an absolute path with all `..`, `.`,
/// and symlink components resolved.
///
/// On unix this function corresponds to the return value of the `realpath`
/// function, and on Windows this corresponds to the `GetFullPathName` function.
///
/// Note that relative paths given to this function will use the current working
/// directory as a base, and the current working directory is not managed in a
/// thread-local fashion, so this function may need to be synchronized with
/// other calls to `env::change_dir`.
pub fn canonicalize<P: AsRef<Path>>(p: P) -> io::Result<PathBuf>;
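
A brief usage sketch (the relative path is purely illustrative):

use std::fs;

fn main() -> std::io::Result<()> {
    // `.` and `..` components and symlinks are resolved against the
    // current working directory.
    let absolute = fs::canonicalize("../some/./path")?;
    println!("{}", absolute.display());
    Ok(())
}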

Tweaking PathExt

Currently the PathExt trait is unstable, yet it is quite convenient! The main motivation for its #[unstable] tag is that it is unclear how much functionality should be on PathExt versus the std::fs module itself. Currently a small subset of functionality is offered, but it is unclear what the guiding principles for the contents of this trait are.

This RFC proposes a few guiding principles for this trait:

  • Only read-only operations in std::fs will be exposed on PathExt. All operations which require modifications to the filesystem will require calling methods through std::fs itself.

  • Some inspection methods on Metadata will be exposed on PathExt, but only those where it logically makes sense for Path to be the self receiver. For example, PathExt::len (the size of the file) will not exist, but PathExt::is_dir will.

Concretely, the PathExt trait will be expanded to:

pub trait PathExt {
    fn exists(&self) -> bool;
    fn is_dir(&self) -> bool;
    fn is_file(&self) -> bool;
    fn metadata(&self) -> io::Result<Metadata>;
    fn symlink_metadata(&self) -> io::Result<Metadata>;
    fn canonicalize(&self) -> io::Result<PathBuf>;
    fn read_link(&self) -> io::Result<PathBuf>;
    fn read_dir(&self) -> io::Result<ReadDir>;
}

impl PathExt for Path { ... }
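
A sketch of the expanded trait in use, assuming it is imported from std::fs where it currently lives (the describe function is hypothetical):

use std::fs::PathExt;
use std::path::Path;

fn describe(path: &Path) {
    // Read-only queries only, per the guiding principles above.
    if path.is_dir() {
        println!("{} is a directory", path.display());
    } else if path.is_file() {
        println!("{} is a file", path.display());
    } else if !path.exists() {
        println!("{} does not exist", path.display());
    }
}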

Expanding DirEntry

Currently the DirEntry API is quite minimalistic, exposing very few of the underlying attributes. Platforms like Windows actually contain an entire Metadata inside of a DirEntry, enabling much more efficient walking of directories in some situations.

The following APIs will be added to DirEntry:

impl DirEntry {
    /// This function will return the filesystem metadata for this directory
    /// entry. This is equivalent to calling `fs::symlink_metadata` on the
    /// path returned.
    ///
    /// On Windows this function will always return `Ok` and will not issue a
    /// system call, but on unix this will always issue a call to `stat` to
    /// return metadata.
    pub fn metadata(&self) -> io::Result<Metadata>;

    /// Return what file type this `DirEntry` contains.
    ///
    /// On some platforms this may not require reading the metadata of the
    /// underlying file from the filesystem, but on other platforms it may be
    /// required to do so.
    pub fn file_type(&self) -> io::Result<FileType>;

    /// Returns the file name for this directory entry.
    pub fn file_name(&self) -> OsString;
}

mod os::unix::fs {
    pub trait DirEntryExt {
        fn ino(&self) -> raw::ino_t; // read the d_ino field
    }
    impl DirEntryExt for fs::DirEntry { ... }
}
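
To illustrate the efficiency point from the FileType section, a directory walk can consult DirEntry::file_type instead of re-querying the filesystem for every entry (the function is hypothetical):

use std::fs;
use std::io;
use std::path::Path;

/// Count files and subdirectories, avoiding extra `stat` calls on
/// platforms where the directory entry already knows its file type.
fn count_entries(dir: &Path) -> io::Result<(usize, usize)> {
    let (mut files, mut dirs) = (0, 0);
    for entry in fs::read_dir(dir)? {
        let file_type = entry?.file_type()?;
        if file_type.is_file() {
            files += 1;
        } else if file_type.is_dir() {
            dirs += 1;
        }
    }
    Ok((files, dirs))
}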

Drawbacks

  • This is quite a bit of surface area being added to the std::fs API, and it may perhaps be best to scale it back and add it in a more incremental fashion instead of all at once. Most of it, however, is fairly straightforward, so it seems prudent to schedule many of these features for the 1.1 release.

  • Exposing raw information such as libc::stat or WIN32_FILE_ATTRIBUTE_DATA may hamstring our ability to alter the implementation in the future. At this point, however, it seems unlikely that the exposed pieces of information will be changing much.

Alternatives

  • Instead of exposing accessor methods in MetadataExt on Windows, the raw WIN32_FILE_ATTRIBUTE_DATA could be returned. However, we may one day switch to BY_HANDLE_FILE_INFORMATION, which would make the return value of this function more difficult to implement.

  • A std::os::MetadataExt trait could be added to access truly common information such as modification/access times across all platforms. The return value would likely be a u64 “something” and would be clearly documented as being a lossy abstraction and also only having a platform-specific meaning.

  • The PathExt trait could perhaps be implemented on DirEntry, but it doesn’t necessarily seem appropriate for all the methods and using inherent methods also seems more logical.

Unresolved questions

  • What is the ultimate role of crates like liblibc, and how do we draw the line between them and std::os definitions?

Summary

Add sockopt-style timeouts to std::net types.

Motivation

Currently, operations on various socket types in std::net block indefinitely (i.e., until the connection is closed or data is transferred). But there are many contexts in which timing out a blocking call is important.

The goal of the current IO system is to gradually expose cross-platform, blocking APIs for IO, especially APIs that directly correspond to the underlying system APIs. Sockets are widely available with nearly identical system APIs across the platforms Rust targets, and this includes support for timeouts via sockopts.

So timeouts are well-motivated and well-suited to std::net.

Detailed design

The proposal is to directly expose the timeout functionality provided by setsockopt, in much the same way we currently expose functionality like set_nodelay:

impl TcpStream {
    pub fn set_read_timeout(&self, dur: Option<Duration>) -> io::Result<()> { ... }
    pub fn read_timeout(&self) -> io::Result<Option<Duration>>;

    pub fn set_write_timeout(&self, dur: Option<Duration>) -> io::Result<()> { ... }
    pub fn write_timeout(&self) -> io::Result<Option<Duration>>;
}

impl UdpSocket {
    pub fn set_read_timeout(&self, dur: Option<Duration>) -> io::Result<()> { ... }
    pub fn read_timeout(&self) -> io::Result<Option<Duration>>;

    pub fn set_write_timeout(&self, dur: Option<Duration>) -> io::Result<()> { ... }
    pub fn write_timeout(&self) -> io::Result<Option<Duration>>;
}

The setter methods take an amount of time in the form of a Duration, which is undergoing stabilization. They are implemented via straightforward calls to setsockopt. The Option is used to signify no timeout (for both setting and getting). Consequently, Some(Duration::new(0, 0)) is a possible argument; the setter methods will return an IO error of kind InvalidInput in this case. (See Alternatives for other approaches.)

The corresponding socket options are SO_RCVTIMEO and SO_SNDTIMEO.
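
A usage sketch (the address, buffer size, and duration are illustrative; the exact error kind returned on a timeout is platform-specific):

use std::io::Read;
use std::net::TcpStream;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    let mut stream = TcpStream::connect("127.0.0.1:8080")?;
    // Block for at most five seconds per read; `None` would restore
    // the default behavior of blocking indefinitely.
    stream.set_read_timeout(Some(Duration::from_secs(5)))?;

    let mut buf = [0u8; 1024];
    match stream.read(&mut buf) {
        Ok(n) => println!("read {} bytes", n),
        Err(e) => println!("read failed or timed out: {}", e),
    }
    Ok(())
}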

Drawbacks

One potential downside to this design is that the timeouts are set through direct mutation of the socket state, which can lead to composition problems. For example, a socket could be passed to another function which needs to use it with a timeout, but setting the timeout clobbers any previous values. This lack of composability leads to defensive programming in the form of “callee save” resets of timeouts, for example. An alternative design is given below.

The advantage of binding the mutating APIs directly is that we keep a close correspondence between the std::net types and their underlying system types, and a close correspondence between Rust APIs and system APIs. It’s not clear that this kind of composability is important enough in practice to justify a departure from the traditional API.

Alternatives

Taking Duration directly

Using an Option<Duration> introduces a certain amount of complexity – it raises the issue of Some(Duration::new(0, 0)), and it’s slightly more verbose to set a timeout.

An alternative would be to take a Duration directly, and interpret a zero length duration as “no timeout” (which is somewhat traditional in C APIs). That would make the API somewhat more familiar, but less Rustic, and it becomes somewhat easier to pass in a zero value by accident (without thinking about this possibility).

Note that both styles of API require code that does arithmetic on durations to check for zero in advance.

Aside from fitting Rust idioms better, the main proposal also gives a somewhat stronger indication of a bug when things go wrong (rather than simply failing to time out, for example).

Combining with nonblocking support

Another possibility would be to provide a single method that can choose between blocking indefinitely, blocking with a timeout, and nonblocking mode:

enum BlockingMode {
    Nonblocking,
    Blocking,
    Timeout(Duration)
}

This enum makes clear that it doesn’t make sense to have both a timeout and put the socket in nonblocking mode. On the other hand, it would relinquish the one-to-one correspondence between Rust configuration APIs and underlying socket options.

Wrapping for compositionality

A different approach would be to wrap socket types with a “timeout modifier”, which would be responsible for setting and resetting the timeouts:

struct WithTimeout<T> {
    timeout: Duration,
    inner: T
}

impl<T> WithTimeout<T> {
    /// Returns the wrapped object, resetting the timeout
    pub fn into_inner(self) -> T { ... }
}

impl TcpStream {
    /// Wraps the stream with a timeout
    pub fn with_timeout(self, timeout: Duration) -> WithTimeout<TcpStream> { ... }
}

impl<T: Read> Read for WithTimeout<T> { ... }
impl<T: Write> Write for WithTimeout<T> { ... }

A previous RFC spelled this out in more detail.

Unfortunately, such a “wrapping” API has problems of its own. It creates unfortunate type incompatibilities, since you cannot store a timeout-wrapped socket where a “normal” socket is expected. It is difficult to be “polymorphic” over timeouts.

Ultimately, it’s not clear that the extra complexities of the type distinction here are worth the better theoretical composability.

Unresolved questions

Should we consider a preliminary version of this RFC that introduces methods like set_read_timeout_ms, similar to wait_timeout_ms on Condvar? These methods have been introduced elsewhere to provide a stable way to use timeouts prior to Duration being stabilized.

Summary

Deprecate std::fs::soft_link in favor of platform-specific versions: std::os::unix::fs::symlink, std::os::windows::fs::symlink_file, and std::os::windows::fs::symlink_dir.

Motivation

Windows Vista introduced the ability to create symbolic links, in order to provide compatibility with applications ported from Unix:

Symbolic links are designed to aid in migration and application compatibility with UNIX operating systems. Microsoft has implemented its symbolic links to function just like UNIX links.

However, symbolic links on Windows behave differently enough from symbolic links on Unix family operating systems that you can’t, in general, assume that code that works on one will work on the other. On Unix family operating systems, a symbolic link may refer to either a directory or a file, and which one is determined when it is resolved to an actual file. On Windows, you must specify at the time of creation whether a symbolic link refers to a file or a directory.

In addition, an arbitrary process on Windows is not allowed to create a symlink; particular privileges are required to do so. On Unix, by contrast, ordinary users can create symlinks, and any additional security policy (such as Grsecurity) generally restricts whether applications follow symlinks, not whether a user can create them.

Thus, there needs to be a way to distinguish between the two operations on Windows, but that distinction is meaningless on Unix, and any code that deals with symlinks on Windows will need to depend on having appropriate privilege or have some way of obtaining appropriate privilege, which is all quite platform specific.

These two facts mean that it is unlikely that arbitrary code dealing with symbolic links will be portable between Windows and Unix. Rather than trying to support both under one API, it would be better to provide platform specific APIs, making it much more clear upon inspection where portability issues may arise.

In addition, the current name soft_link is fairly non-standard. At some point in the split up version of rust-lang/rfcs#517, std::fs::symlink was renamed to sym_link and then to soft_link.

The new name is somewhat surprising and can be difficult to find. After a poll of a number of different platforms and languages, every one appears to contain symlink, symbolic_link, or some camel case variant of those for their equivalent API. Every piece of formal documentation found, for both Windows and various Unix like platforms, used “symbolic link” exclusively in prose.

Here are the names I found for this functionality on various platforms, libraries, and languages; every one was a variant of “symlink”, “symbolic_link”, or a camel case form of those.

The term “soft link”, probably as a contrast with “hard link”, is found frequently in informal descriptions, but almost always in the form of a parenthetical of an alternate phrase, such as “a symbolic link (or soft link)”. I could not find it used in any formal documentation or APIs outside of Rust.

The name soft_link was chosen to be shorter than symbolic_link, but without using Unix specific jargon like symlink, to not give undue weight to one platform over the other. However, based on the evidence above it doesn’t have any precedent as a formal name for the concept or API.

Furthermore, even on Windows, the name for the reparse point tag used to represent symbolic links is IO_REPARSE_TAG_SYMLINK.

If you do a Google search for “windows symbolic link” or “windows soft link”, many of the documents you find start using “symlink” after introducing the concept, so it seems to be a fairly common abbreviation for the full name even among Windows developers and users.

Detailed design

Move std::fs::soft_link to std::os::unix::fs::symlink, and create std::os::windows::fs::symlink_file and std::os::windows::fs::symlink_dir that call CreateSymbolicLink with the appropriate arguments.

Keep a deprecated compatibility wrapper std::fs::soft_link which wraps std::os::unix::fs::symlink or std::os::windows::fs::symlink_file, depending on the platform (as that is the current behavior of std::fs::soft_link, to create a file symbolic link).
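
Under this design, portable callers opt in to the platform distinction explicitly; a minimal sketch (file names are illustrative):

use std::io;

fn make_link() -> io::Result<()> {
    #[cfg(unix)]
    std::os::unix::fs::symlink("target.txt", "link.txt")?;
    #[cfg(windows)]
    std::os::windows::fs::symlink_file("target.txt", "link.txt")?;
    Ok(())
}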

Drawbacks

This deprecates a stable API during the 1.0.0 beta, leaving an extra wrapper around.

Alternatives

  • Have a cross platform symlink and symlink_dir, that do the same thing on Unix but differ on Windows. This has the drawback of invisible compatibility hazards; code that works on Unix using symlink may fail silently on Windows, as creating the wrong type of symlink may succeed but it may not be interpreted properly once a destination file of the other type is created.
  • Have a cross platform symlink that detects the type of the destination on Windows. This is not always possible as it’s valid to create dangling symbolic links.
  • Have symlink, symlink_dir, and symlink_file all cross-platform, where the first dispatches based on the destination file type, and the latter two panic if called with the wrong destination file type. Again, this is not always possible as it’s valid to create dangling symbolic links.
  • Rather than having two separate functions on Windows, you could have a separate parameter on Windows to specify the type of link to create; symlink("a", "b", FILE_SYMLINK) vs symlink("a", "b", DIR_SYMLINK). However, having a symlink that had different arity on Unix and Windows would likely be confusing, and since there are only the two possible choices, simply having two functions seems like a much simpler solution.

Other choices for the naming convention would be:

  • The status quo, soft_link
  • The original proposal from rust-lang/rfcs#517, sym_link
  • The full name, symbolic_link

The first choice is non-obvious for people coming from either Windows or Unix. It is a classic compromise that makes everyone unhappy.

sym_link is slightly more consistent with the complementary hard_link function, and treating “sym link” as two separate words has some precedent in two of the Windows-targeted APIs observed (Delphi and some of the PowerShell cmdlets). However, I have not found any other snake case API that uses it, and only a couple of Windows-specific APIs use it in camel case; most usage prefers the single word “symlink” to the two-word “sym link” as the abbreviation.

The full name, symbolic_link, is a bit long and cumbersome compared to most of the rest of the API, but it is explicit and is the term used in prose to describe the concept everywhere, so it shouldn’t emphasize any one platform over the other. However, unlike all other operations for creating a file or directory (open, create, create_dir, etc), it is a noun, not a verb. The verb form would be “symbolically link”, which sounds quite odd in the context of an API: symbolically_link("a", "b"). “symlink”, on the other hand, can act as either a noun or a verb.

It would be possible to prefix any of the forms above that read as a noun with create_, such as create_symlink, create_sym_link, or create_symbolic_link. This adds further to the verbosity, though it is consistent with create_dir; you would probably also need to rename hard_link to create_hard_link for consistency, and this seems like a lot of churn and extra verbosity for not much benefit, as symlink and hard_link already act as verbs on their own. If you picked this, then the Windows versions would need to be named create_file_symlink and create_dir_symlink (or the variations with sym_link or symbolic_link).

Unresolved questions

If we deprecate soft_link now, early in the beta cycle, would it be acceptable to remove it rather than deprecate it before 1.0.0, thus avoiding a permanently stable but deprecated API right out the gate?

Summary

Rename or replace str::words to side-step the ambiguity of “a word”.

Motivation

The str::words method is currently marked #[unstable(reason = "the precise algorithm to use is unclear")]. Indeed, the concept of “a word” is not easy to define in presence of punctuation or languages with various conventions, including not using spaces at all to separate words.

Issue #15628 suggests changing the algorithm to be based on the Word Boundaries section of Unicode Standard Annex #29: Unicode Text Segmentation.

While a Rust implementation of UAX#29 would be useful, it belongs on crates.io more than in std:

  • It carries significant complexity that may be surprising from something that looks as simple as a parameter-less “words” method in the standard library. Users may not be aware of how subtle defining “a word” can be.

  • It is not a definitive answer. The standard itself notes:

    It is not possible to provide a uniform set of rules that resolves all issues across languages or that handles all ambiguous situations within a given language. The goal for the specification presented in this annex is to provide a workable default; tailored implementations can be more sophisticated.

    and gives many examples of such ambiguous situations.

Therefore, std would be better off avoiding the question of defining word boundaries entirely.

Detailed design

Rename the words method to split_whitespace, and keep the current behavior unchanged. (That is, return an iterator equivalent to s.split(char::is_whitespace).filter(|s| !s.is_empty()).)
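
For illustration, the renamed method on a sample string, alongside the definition given above:

fn main() {
    let s = "  alpha\tbeta  gamma ";
    let words: Vec<&str> = s.split_whitespace().collect();
    assert_eq!(words, ["alpha", "beta", "gamma"]);

    // Equivalent, per the definition above:
    let same: Vec<&str> = s
        .split(char::is_whitespace)
        .filter(|s| !s.is_empty())
        .collect();
    assert_eq!(words, same);
}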

Rename the return type std::str::Words to std::str::SplitWhitespace.

Optionally, keep a words wrapper method for a while, both #[deprecated] and #[unstable], with an error message that suggests split_whitespace or the chosen alternative.

Drawbacks

split_whitespace is very similar to the existing str::split<P: Pattern>(&self, P) method, and having a separate method seems like weak API design. (But see below.)

Alternatives

  • Replace str::words with a unit struct Whitespace that has a custom Pattern implementation and can be used with str::split. However, this requires the Whitespace symbol to be imported separately.
  • Remove str::words entirely and tell users to use s.split(char::is_whitespace).filter(|s| !s.is_empty()) instead.

Unresolved questions

Is there a better alternative?

Summary

Add the Sync bound to io::Error by requiring that any wrapped custom errors also conform to Sync in addition to error::Error + Send.

Motivation

Adding the Sync bound to io::Error has three primary benefits:

  • Values that contain io::Errors will be able to be Sync
  • Perhaps more importantly, io::Error will be able to be stored in an Arc
  • By using the above, a cloneable wrapper can be created that shares an io::Error using an Arc in order to simulate the old behavior of being able to clone an io::Error.

Detailed design

The only thing keeping io::Error from being Sync today is the wrapped custom error type Box<error::Error+Send>. Changing this to Box<error::Error+Send+Sync> and adding the Sync bound to io::Error::new() is sufficient to make io::Error be Sync. In addition, the relevant convert::From impls that convert to Box<error::Error+Send> will be updated to convert to Box<error::Error+Send+Sync> instead.
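
As a sketch of the cloneable wrapper mentioned in the motivation (SharedError is a hypothetical name, not a proposed API):

use std::io;
use std::sync::Arc;

// Clones share one underlying `io::Error`; once the `Sync` bound is
// added, the wrapper itself is `Send + Sync`.
#[derive(Debug, Clone)]
struct SharedError(Arc<io::Error>);

impl SharedError {
    fn new(err: io::Error) -> SharedError {
        SharedError(Arc::new(err))
    }

    fn kind(&self) -> io::ErrorKind {
        self.0.kind()
    }
}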

Drawbacks

The only downside to this change is that any type which implements error::Error and is Send but not Sync will no longer be able to be wrapped in an io::Error. It’s unclear whether there are any types in the standard library that will be impacted by this. Looking through the list of implementors of error::Error, here are all of the types that may be affected:

  • io::IntoInnerError: This type is only Sync if the underlying buffered writer instance is Sync. I can’t be sure, but I don’t believe we have any writers that are Send but not Sync. In addition, this type has a From impl that converts it to io::Error even if the writer is not Send.
  • sync::mpsc::SendError: This type is only Sync if the wrapped value T is Sync. This is of course also true for Send. I’m not sure if anyone is relying on the ability to wrap a SendError in an io::Error.
  • sync::mpsc::TrySendError: Same situation as SendError.
  • sync::PoisonError: This type is already not compatible with io::Error because it wraps mutex guards (such as sync::MutexGuard) which are not Send.
  • sync::TryLockError: Same situation as PoisonError.

So the only real question is about sync::mpsc::SendError. If anyone is relying on the ability to convert that into an io::Error a From impl could be added that returns an io::Error that is indistinguishable from a wrapped SendError.

Alternatives

Don’t do this. Not adding the Sync bound to io::Error means io::Errors cannot be stored in an Arc and types that contain an io::Error cannot be Sync.

We should also consider whether we should go a step further and change io::Error to use Arc instead of Box internally. This would let us restore the Clone impl for io::Error.

Unresolved questions

Should we add the From impl for SendError? There is no code in the rust project that relies on SendError being converted to io::Error, and I’m inclined to think it’s unlikely for anyone to be relying on that, but I don’t know if there are any third-party crates that will be affected.

Summary

Replace slice.tail(), slice.init() with new methods slice.split_first(), slice.split_last().

Motivation

The slice.tail() and slice.init() methods are relics from an older version of the slice APIs that included a head() method. slice no longer has head(), instead it has first() which returns an Option, and last() also returns an Option. While it’s generally accepted that indexing / slicing should panic on out-of-bounds access, tail()/init() are the only remaining methods that panic without taking an explicit index.

A conservative change here would be to simply change tail()/init() to return Option, but I believe we can do better. These operations are actually specializations of split_at() and should be replaced with methods that return Option<(&T, &[T])>. This makes the common operation of processing the first/last element and the remainder of the list more ergonomic, with very low impact on code that only wants the remainder (such code only has to add .1 to the expression). It has an even more significant effect on code that uses the mutable variants.

Detailed design

The methods tail(), init(), tail_mut(), and init_mut() will be removed, and new methods will be added:

fn split_first(&self) -> Option<(&T, &[T])>;
fn split_last(&self) -> Option<(&T, &[T])>;
fn split_first_mut(&mut self) -> Option<(&mut T, &mut [T])>;
fn split_last_mut(&mut self) -> Option<(&mut T, &mut [T])>;

Existing code using tail() or init() could be translated as follows:

  • slice.tail() becomes &slice[1..]
  • slice.init() becomes &slice[..slice.len()-1] or slice.split_last().unwrap().1

It is expected that a lot of code using tail() or init() is already either testing len() explicitly or using first() / last() and could be refactored to use split_first() / split_last() in a more ergonomic fashion. As an example, the following code from typeck:

if variant.fields.len() > 0 {
    for field in variant.fields.init() {

can be rewritten as:

if let Some((_, init_fields)) = variant.fields.split_last() {
    for field in init_fields {

And the following code from compiletest:

let argv0 = args[0].clone();
let args_ = args.tail();

can be rewritten as:

let (argv0, args_) = args.split_first().unwrap();

(the clone() ended up being unnecessary).

Drawbacks

The expression slice.split_last().unwrap().1 is more cumbersome than slice.init(). However, this is primarily due to the need for .unwrap() rather than the need for .1, and would affect the more conservative solution (of making the return type Option<&[T]>) as well. Furthermore, the more idiomatic translation is &slice[..slice.len()-1], which can be used any time the slice is already stored in a local variable.

Alternatives

Only change the return type to Option without adding the tuple. This is the more conservative change mentioned above. It still has the same drawback of requiring .unwrap() when translating existing code. And it’s unclear what the function names should be (the current names are considered suboptimal).

Just deprecate the current methods without adding replacements. This gets rid of the odd methods today, but it doesn’t do anything to make it easier to safely perform these operations.

Summary

Alter the signature of the std::mem::forget function to remove unsafe. Explicitly state that it is not considered unsafe behavior to not run destructors.

Motivation

It was recently discovered by @arielb1 that the thread::scoped API was unsound. To recap, this API previously allowed spawning a child thread sharing the parent’s stack, returning an RAII guard which join’d the child thread when it fell out of scope. The join-on-drop behavior here is critical to the safety of the API to ensure that the parent does not pop the stack frames the child is referencing. Put another way, the safety of thread::scoped relied on the fact that the Drop implementation for JoinGuard was always run.

The underlying issue for this safety hole was that it is possible to write a version of mem::forget without using unsafe code (which drops a value without running its destructor). This is done by creating a cycle of Rc pointers, leaking the actual contents. It has been pointed out that Rc is not the only vector of leaking contents today as there are known bugs where panic! may fail to run destructors. Furthermore, it has also been pointed out that not running destructors can affect the safety of APIs like Vec::drain_range in addition to thread::scoped.
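
For concreteness, a minimal sketch of the Rc-cycle leak described above (the type and names are illustrative):

use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn forget_without_unsafe() {
    let node = Rc::new(Node { next: RefCell::new(None) });
    // Point the node at itself: its strong count can never reach zero,
    // so its destructor never runs -- all without any `unsafe` code.
    *node.next.borrow_mut() = Some(node.clone());
}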

It has never been a guarantee of Rust that destructors for a type will run, and this aspect was overlooked with the thread::scoped API, which requires that its destructor be run! Reconciling these two desires has led to a good deal of discussion of possible mitigation strategies for various aspects of this problem. The strategy proposed in this RFC aims to fit uninvasively into the standard library to avoid large overhauls or destabilizations of APIs.

Detailed design

Primarily, the unsafe annotation on the mem::forget function will be removed, allowing it to be called from safe Rust. This transition will be made possible by stating that destructors may not run in all circumstances (from both the language and library level). The standard library and the primitives it provides will always attempt to run destructors, but will not provide a guarantee that destructors will be run.

It is still likely to be a footgun to call mem::forget as memory leaks are almost always undesirable, but the purpose of the unsafe keyword in Rust is to indicate memory unsafety instead of being a general deterrent for “should be avoided” APIs. Given the premise that types must be written assuming that their destructor may not run, it is the fault of the type in question if mem::forget would trigger memory unsafety, hence allowing mem::forget to be a safe function.

Note that this modification to mem::forget is a breaking change due to the signature of the function being altered, but it is expected that most code will not break in practice and this would be an acceptable change to cherry-pick into the 1.0 release.

Drawbacks

It is clearly a very nice feature of Rust to be able to rely on the fact that a destructor for a type is always run (e.g. the thread::scoped API). Admitting that destructors may not be run can lead to difficult API decisions later on and even accidental unsafety. This route, however, is the least invasive for the standard library and does not require radically changing types like Rc or fast-tracking bug fixes to panicking destructors.

Alternatives

The main alternative to this proposal is to provide the guarantee that a destructor for a type is always run and that it is memory unsafe to not do so. This would require a number of pieces to work together:

  • The bug where a panicking destructor prevents other locals’ destructors from running would need to be fixed.
  • Panics in the elements of containers would need to be fixed to continue running other elements’ destructors.
  • The Rc and Arc types would need to be reevaluated somehow. One option would be to statically prevent cycles, and another option would be to disallow types that are unsafe to leak from being placed in Rc and Arc (more details below).
  • An audit would need to be performed to ensure that there are no other known locations of leaks for types. There are likely more than one location than those listed here which would need to be addressed, and it’s also likely that there would continue to be locations where destructors were not run.

There has been quite a bit of discussion specifically on the topic of Rc and Arc as they may be tricky cases to fix. Specifically, the compiler could perform some form of analysis to forbid all cycles, or just those that would cause memory unsafety. Unfortunately, forbidding all cycles is likely to be too limiting for Rc to be useful. Forbidding only “bad” cycles, however, is a more plausible option.

Another alternative, as proposed by @arielb1, would be a Leak marker trait to indicate that a type is “safe to leak”. Types like Rc would require that their contents are Leak, and the JoinGuard type would opt out of it. This marker trait could work similarly to Send, where all types are considered leakable by default but can opt out of Leak. This approach, however, requires Rc and Arc to have a Leak bound on their type parameter, which can unfortunately leak into many generic contexts (e.g. trait objects). Another option would be to treat Leak more like Sized, where all type parameters have a Leak bound by default. This change may also cause confusion, however, by being unnecessarily restrictive (e.g. all collections may want to take T: ?Leak).

Overall the changes necessary for this strategy are more invasive than admitting destructors may not run, so this alternative is not proposed in this RFC.

Unresolved questions

Are there remaining APIs in the standard library which rely on destructors being run for memory safety?

  • Feature Name: not applicable
  • Start Date: 2015-02-27
  • RFC PR: rust-lang/rfcs#1068
  • Rust Issue: N/A

Summary

This RFC proposes to expand, and make more explicit, Rust’s governance structure. It seeks to supplement today’s core team with several subteams that are more narrowly focused on specific areas of interest.

Thanks to Nick Cameron, Manish Goregaokar, Yehuda Katz, Niko Matsakis and Dave Herman for many suggestions and discussions along the way.

Motivation

Rust’s governance has evolved over time, perhaps most dramatically with the introduction of the RFC system – which has itself been tweaked many times. RFCs have been a major boon for improving design quality and fostering deep, productive discussion. It’s something we all take pride in.

That said, as Rust has matured, a few growing pains have emerged.

We’ll start with a brief review of today’s governance and process, then discuss what needs to be improved.

Background: today’s governance structure

Rust is governed by a core team, which is ultimately responsible for all decision-making in the project. Specifically, the core team:

  • Sets the overall direction and vision for the project;
  • Sets the priorities and release schedule;
  • Makes final decisions on RFCs.

The core team currently has 8 members, including some people working full-time on Rust, some volunteers, and some production users.

Most technical decisions are decided through the RFC process. RFCs are submitted for essentially all changes to the language, most changes to the standard library, and a few other topics. RFCs are either closed immediately (if they are clearly not viable), or else assigned a shepherd who is responsible for keeping the discussion moving and ensuring all concerns are responded to.

The final decision to accept or reject an RFC is made by the core team. In many cases this decision follows after many rounds of consensus-building among all stakeholders for the RFC. In the end, though, most decisions are about weighting various tradeoffs, and the job of the core team is to make the final decision about such weightings in light of the overall direction of the language.

What needs improvement

At a high level, we need to improve:

  • Process scalability.
  • Stakeholder involvement.
  • Clarity/transparency.
  • Moderation processes.

Below, each of these bullets is expanded into a more detailed analysis of the problems. These are the problems this RFC is trying to solve. The “Detailed Design” section then gives the actual proposal.

Scalability: RFC process

In some ways, the RFC process is a victim of its own success: as the volume and depth of RFCs has increased, it’s harder for the entire core team to stay educated and involved in every RFC. The shepherding process has helped make sure that RFCs don’t fall through the cracks, but even there it’s been hard for the relatively small number of shepherds to keep up (on top of the other work that they do).

Part of the problem, of course, is due to the current push toward 1.0, which has both increased RFC volume and takes up a great deal of attention from the core team. But after 1.0 is released, the community is likely to grow significantly, and feature requests will only increase.

Growing the core team over time has helped, but there’s a practical limit to the number of people who are jointly making decisions and setting direction.

A distinct problem in the other direction has also emerged recently: we’ve slowly been requiring RFCs for increasingly minor changes. While it’s important that user-facing changes and commitments be vetted, the process has started to feel heavyweight (especially for newcomers), so a recalibration may be in order.

We need a way to scale up the RFC process that:

  • Ensures each RFC is thoroughly reviewed by several people with interest and expertise in the area, but with different perspectives and concerns.

  • Ensures each RFC continues moving through the pipeline at a reasonable pace.

  • Ensures that accepted RFCs are well-aligned with the values, goals, and direction of the project, and with other RFCs (past, present, and future).

  • Ensures that simple, uncontentious changes can be made quickly, without undue process burden.

Scalability: areas of focus

In addition, there is an increasing number of important areas of work that are only loosely connected with decisions in the core language or APIs: tooling, documentation, and infrastructure, for example. These areas all need leadership, but it’s not clear that they require the same degree of global coordination that more “core” areas do.

These areas are only going to increase in number and importance, so we should remove obstacles holding them back.

Stakeholder involvement

RFC shepherds are intended to reach out to “stakeholders” in an RFC, to solicit their feedback. But that is different from the stakeholders having a direct role in decision making.

To the extent practical, we should include a diverse range of perspectives in both design and decision-making, and especially include people who are most directly affected by decisions: users.

We have taken some steps in this direction by diversifying the core team itself, but (1) members of the core team by definition need to take a balanced, global view of things and (2) the core team should not grow too large. So some other way of including more stakeholders in decisions would be preferable.

Clarity and transparency

Despite many steps toward increasing the clarity and openness of Rust’s processes, there is still room for improvement:

  • The priorities and values set by the core team are not always clearly communicated today. This in turn can make the RFC process seem opaque, since RFCs move along at different speeds (or are even closed as postponed) according to these priorities.

    At a large scale, there should be more systematic communication about high-level priorities. It should be clear whether a given RFC topic would be considered in the near term, long term, or never. Recent blog posts about the 1.0 release and stabilization have made a big step in this direction. After 1.0, as part of the regular release process, we’ll want to find some regular cadence for setting and communicating priorities.

    At a smaller scale, it is still the case that RFCs fall through the cracks or have unclear statuses (see Scalability problems above). Clearer, public tracking of the RFC pipeline would be a significant improvement.

  • The decision-making process can still be opaque: it’s not always clear to an RFC author exactly when and how a decision on the RFC will be made, and how best to work with the team for a favorable decision. We strive to make core team meetings as uninteresting as possible (that is, all interesting debate should happen in public online communication), but there is still room for being more explicit and public.

Community norms and the Code of Conduct

Rust’s design process and community norms are closely intertwined. The RFC process is a joint exploration of design space and tradeoffs, and requires consensus-building. The process – and the Rust community – is at its best when all participants recognize that

… people have differences of opinion and that every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer.

This and other important values and norms are recorded in the project code of conduct (CoC), which also includes language about harassment and marginalized groups.

Rust’s community has long upheld a high standard of conduct, and has earned a reputation for doing so.

However, as the community grows, as people come and go, we must continually work to maintain this standard. Usually, it suffices to lead by example, or to gently explain the kind of mutual respect that Rust’s community practices. Sometimes, though, that’s not enough, and explicit moderation is needed.

One problem that has emerged with the CoC is the lack of clarity about the mechanics of moderation:

  • Who is responsible for moderation?
  • What about conflicts of interest? Are decision-makers also moderators?
  • How are moderation decisions reached? When are they unilateral?
  • When does moderation begin, and how quickly should it occur?
  • Does moderation take into account past history?
  • What venues does moderation apply to?

Answering these questions, and generally clarifying how the CoC is viewed and enforced, is an important step toward scaling up the Rust community.

Detailed design

The basic idea is to supplement the core team with several “subteams”. Each subteam is focused on a specific area, e.g., language design or libraries. Most of the RFC review process will take place within the relevant subteam, scaling up our ability to make decisions while involving a larger group of people in that process.

To ensure global coordination and a strong, coherent vision for the project as a whole, each subteam is led by a member of the core team.

Subteams

The primary roles of each subteam are:

  • Shepherding RFCs for the subteam area. As always, that means (1) ensuring that stakeholders are aware of the RFC, (2) working to tease out various design tradeoffs and alternatives, and (3) helping build consensus.

  • Accepting or rejecting RFCs in the subteam area.

  • Setting policy on what changes in the subteam area require RFCs, and reviewing direct PRs for changes that do not require an RFC.

  • Delegating reviewer rights for the subteam area. The ability to r+ is not limited to team members, and in fact earning r+ rights is a good stepping stone toward team membership. Each team should set reviewing policy, manage reviewing rights, and ensure that reviews take place in a timely manner. (Thanks to Nick Cameron for this suggestion.)

Subteams make it possible to involve a larger, more diverse group in the decision-making process. In particular, they should involve a mix of:

  • Rust project leadership, in the form of at least one core team member (the leader of the subteam).

  • Area experts: people who have a lot of interest and expertise in the subteam area, but who may be far less engaged with other areas of the project.

  • Stakeholders: people who are strongly affected by decisions in the subteam area, but who may not be experts in the design or implementation of that area. It is crucial that some people heavily using Rust for applications/libraries have a seat at the table, to make sure we are actually addressing real-world needs.

Members should have demonstrated a good sense for design and dealing with tradeoffs, an ability to work within a framework of consensus, and of course sufficient knowledge about or experience with the subteam area. Leaders should in addition have demonstrated exceptional communication, design, and people skills. They must be able to work with a diverse group of people and help lead it toward consensus and execution.

Each subteam is led by a member of the core team. The leader is responsible for:

  • Setting up the subteam:

    • Deciding on the initial membership of the subteam (in consultation with the core team). Once the subteam is up and running, changes in membership are decided by the subteam itself, per the policies described in the next item.

    • Working with subteam members to determine and publish subteam policies and mechanics, including the way that subteam members join or leave the team (which should be based on subteam consensus).

  • Communicating core team vision downward to the subteam.

  • Alerting the core team to subteam RFCs that need global, cross-cutting attention, and to RFCs that have entered the “final comment period” (see below).

  • Ensuring that RFCs and PRs are progressing at a reasonable rate, re-assigning shepherds/reviewers as needed.

  • Making final decisions in cases of contentious RFCs that are unable to reach consensus otherwise (should be rare).

The way that subteams communicate internally and externally is left to each subteam to decide, but:

  • Technical discussion should take place as much as possible on public forums, ideally on RFC/PR threads and tagged discuss posts.

  • Each subteam will have a dedicated discuss forum tag.

  • Subteams should actively seek out discussion and input from stakeholders who are not members of the team.

  • Subteams should have some kind of regular meeting or other way of making decisions. The content of this meeting should be summarized with the rationale for each decision – and, as explained below, decisions should generally be about weighting a set of already-known tradeoffs, not discussing or discovering new rationale.

  • Subteams should regularly publish the status of RFCs, PRs, and other news related to their area. Ideally, this would be done in part via a dashboard like the Homu queue.

Core team

The core team serves as leadership for the Rust project as a whole. In particular, it:

  • Sets the overall direction and vision for the project. That means setting the core values that are used when making decisions about technical tradeoffs. It means steering the project toward specific use cases where Rust can have a major impact. It means leading the discussion, and writing RFCs for, major initiatives in the project.

  • Sets the priorities and release schedule. Design bandwidth is limited, and it’s dangerous to try to grow the language too quickly; the core team makes some difficult decisions about which areas to prioritize for new design, based on the core values and target use cases.

  • Focuses on broad, cross-cutting concerns. The core team is specifically designed to take a global view of the project, to make sure the pieces are fitting together in a coherent way.

  • Spins up or shuts down subteams. Over time, we may want to expand the set of subteams, and it may make sense to have temporary “strike teams” that focus on a particular, limited task.

  • Decides whether/when to ungate a feature. While the subteams make decisions on RFCs, the core team is responsible for pulling the trigger that moves a feature from nightly to stable. This provides an extra check that features have adequately addressed cross-cutting concerns, that the implementation quality is high enough, and that language/library commitments are reasonable.

The core team should include both the subteam leaders, and, over time, a diverse set of other stakeholders that are both actively involved in the Rust community, and can speak to the needs of major Rust constituencies, to ensure that the project is addressing real-world needs.

Decision-making

Consensus

Rust has long used a form of consensus decision-making. In a nutshell the premise is that a successful outcome is not where one side of a debate has “won”, but rather where concerns from all sides have been addressed in some way. This emphatically does not entail design by committee, nor compromised design. Rather, it’s a recognition that

… every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer.

Breakthrough designs sometimes end up changing the playing field by eliminating tradeoffs altogether, but more often difficult decisions have to be made. The key is to have a clear vision and set of values and priorities, which is the core team’s responsibility to set and communicate, and the subteam’s responsibility to act upon.

Whenever possible, we seek to reach consensus through discussion and design revision. Concretely, the steps are:

  • Initial RFC proposed, with initial analysis of tradeoffs.
  • Comments reveal additional drawbacks, problems, or tradeoffs.
  • RFC revised to address comments, often by improving the design.
  • Repeat above until “major objections” are fully addressed, or it’s clear that there is a fundamental choice to be made.

Consensus is reached when most people are left with only “minor” objections, i.e., while they might choose the tradeoffs slightly differently they do not feel a strong need to actively block the RFC from progressing.

One important question is: consensus among which people, exactly? Of course, the broader the consensus, the better. But at the very least, consensus within the members of the subteam should be the norm for most decisions. If the core team has done its job of communicating the values and priorities, it should be possible to fit the debate about the RFC into that framework and reach a fairly clear outcome.

Lack of consensus

In some cases, though, consensus cannot be reached. These cases tend to split into two very different camps:

  • “Trivial” reasons, e.g., there is not widespread agreement about naming, but there is consensus about the substance.

  • “Deep” reasons, e.g., the design fundamentally improves one set of concerns at the expense of another, and people on both sides feel strongly about it.

In either case, an alternative form of decision-making is needed.

  • For the “trivial” case, usually either the RFC shepherd or subteam leader will make an executive decision.

  • For the “deep” case, the subteam leader is empowered to make a final decision, but should consult with the rest of the core team before doing so.

How and when RFC decisions are made, and the “final comment period”

Each RFC has a shepherd drawn from the relevant subteam. The shepherd is responsible for driving the consensus process – working with both the RFC author and the broader community to dig out problems, alternatives, and improved design, always working to reach broader consensus.

At some point, the RFC comments will reach a kind of “steady state”, where no new tradeoffs are being discovered, and either objections have been addressed, or it’s clear that the design has fundamental downsides that need to be weighed.

At that point, the shepherd will announce that the RFC is in a “final comment period” (which lasts for one week). This is a kind of “last call” for strong objections to the RFC. The announcement of the final comment period for an RFC should be very visible; it should be included in the subteam’s periodic communications.

Note that the final comment period is in part intended to help keep RFCs moving. Historically, RFCs sometimes stall out at a point where discussion has died down but a decision isn’t needed urgently. In this proposed model, the RFC author could ask the shepherd to move to the final comment period (and hence toward a decision).

After the final comment period, the subteam can make a decision on the RFC. The role of the subteam at that point is not to reveal any new technical issues or arguments; if these come up during discussion, they should be added as comments to the RFC, and it should undergo another final comment period.

Instead, the subteam decision is based on weighing the already-revealed tradeoffs against the project’s priorities and values (which the core team is responsible for setting, globally). In the end, these decisions are about how to weight tradeoffs. The decision should be communicated in these terms, pointing out the tradeoffs that were raised and explaining how they were weighted, and never introducing new arguments.

Keeping things lightweight

In addition to the “final comment period” proposed above, this RFC proposes some further adjustments to the RFC process to keep it lightweight.

A key observation is that, thanks to the stability system and nightly/stable distinction, it’s easy to experiment with features without commitment.

Clarifying what needs an RFC

Over time, we’ve been drifting toward requiring an RFC for essentially any user-facing change, which sometimes means that very minor changes get stuck awaiting an RFC decision. While subteams + final comment period should help keep the pipeline flowing a bit better, it would also be good to allow “minor” changes to go through without an RFC, provided there is sufficient review in some other way. (And in the end, the core team ungates features, which ensures at least a final review.)

This RFC does not attempt to answer the question “What needs an RFC”, because that question will vary for each subteam. However, this RFC stipulates that each subteam should set an explicit policy about:

  1. What requires an RFC for the subteam’s area, and
  2. What the non-RFC review process is.

These guidelines should try to keep the process lightweight for minor changes.

Clarifying the “finality” of RFCs

While RFCs are very important, they do not represent the final state of a design. Often new issues or improvements arise during implementation, or after gaining some experience with a feature. The nightly/stable distinction exists in part to allow for such design iteration.

Thus RFCs do not need to be “perfect” before acceptance. If consensus is reached on major points, the minor details can be left to implementation and revision.

Later, if an implementation differs from the RFC in substantial ways, the subteam should be alerted, and may ask for an explicit amendment RFC. Otherwise, the changes should just be explained in the commit/PR.

The teams

With all of that out of the way, what subteams should we start with? This RFC proposes the following initial set:

  • Language design
  • Libraries
  • Compiler
  • Tooling and infrastructure
  • Moderation

In the long run, we will likely also want teams for documentation and for community events, but these can be spun up once there is a more clear need (and available resources).

Language design team

Focuses on the design of language-level features; not all team members need to have extensive implementation experience.

Library team

Oversees both std and, ultimately, other crates in the rust-lang github organization. The focus up to this point has been the standard library, but we will want “official” libraries that aren’t quite std territory but are still vital for Rust. (The precise plan here, as well as the long-term plan for std, is one of the first important areas of debate for the subteam.) Also includes API conventions.

Compiler team

Focuses on compiler internals, including implementation of language features. This broad category includes work in codegen, factoring of compiler data structures, type inference, borrowck, and so on.

There is a more limited set of example RFCs for this subteam, in part because we haven’t generally required RFCs for this kind of internals work.

Tooling and infrastructure team

Even more broad is the “tooling” subteam, which at inception is planned to encompass every “official” (rust-lang managed) non-rustc tool:

  • rustdoc
  • rustfmt
  • Cargo
  • crates.io
  • CI infrastructure
  • Debugging tools
  • Profiling tools
  • Editor/IDE integration
  • Refactoring tools

It’s not presently clear exactly what tools will end up under this umbrella, nor which should be prioritized.

Moderation team

Finally, the moderation team is responsible for dealing with CoC violations.

One key difference from the other subteams is that the moderation team does not have a leader. Its members are chosen directly by the core team, and should be community members who have demonstrated the highest standard of discourse and maturity. To limit conflicts of interest, the moderation subteam should not include any core team members. However, the subteam is free to consult with the core team as it deems appropriate.

The moderation team will have a public email address that can be used to raise complaints about CoC violations (forwards to all active moderators).

Initial plan for moderation

What follows is an initial proposal for the mechanics of moderation. The moderation subteam may choose to revise this proposal by drafting an RFC, which will be approved by the core team.

Moderation begins whenever a moderator becomes aware of a CoC problem, either through a complaint or by observing it directly. In general, the enforcement steps are as follows:

These steps are adapted from text written by Manish Goregaokar, who helped articulate them from experience as a Stack Exchange moderator.

  • Except for extreme cases (see below), try first to address the problem with a light public comment on thread, aimed to de-escalate the situation. These comments should strive for as much empathy as possible. Moderators should emphasize that dissenting opinions are valued, and strive to ensure that the technical points are heard even as they work to cool things down.

    When a discussion has just gotten a bit heated, the comment can just be a reminder to be respectful and that there is rarely a clear “right” answer. In cases that are more clearly over the line into personal attacks, it can directly call out a problematic comment.

  • If the problem persists on thread, or if a particular person repeatedly comes close to or steps over the line of a CoC violation, moderators then email the offender privately. The message should include relevant portions of the CoC together with the offending comments. Again, the goal is to de-escalate, and the email should be written in a dispassionate and empathetic way. However, the message should also make clear that continued violations may result in a ban.

  • If problems still persist, the moderators can ban the offender. Banning should occur for progressively longer periods, for example starting at 1 day, then 1 week, then permanent. The moderation subteam will determine the precise guidelines here.

In general, moderators can and should unilaterally take the first step, but steps beyond that (particularly banning) should be done via consensus with the other moderators. Permanent bans require core team approval.

Some situations call for more immediate, drastic measures: deeply inappropriate comments, harassment, or comments that make people feel unsafe. (See the code of conduct for some more details about this kind of comment). In these cases, an individual moderator is free to take immediate, unilateral steps including redacting or removing comments, or instituting a short-term ban until the subteam can convene to deal with the situation.

The moderation team is responsible for interpreting the CoC. Drastic measures like bans should only be used in cases of clear, repeated violations.

Moderators themselves are held to a very high standard of behavior, and should strive for professional and impersonal interactions when dealing with a CoC violation. They should always push to de-escalate. And they should recuse themselves from moderation in threads where they are actively participating in the technical debate or otherwise have a conflict of interest. Moderators who fail to keep up this standard, or who abuse the moderation process, may be removed by the core team.

Subteam members, and especially core team members, are also held to a high standard of behavior. Part of the reason to separate the moderation subteam is to ensure that CoC violations by Rust’s leadership are addressed through the same independent body of moderators.

Moderation covers all rust-lang venues, which currently include github repos, IRC channels (#rust, #rust-internals, #rustc, #rust-libs), and the two discourse forums. (The subreddit already has its own moderation structure, and isn’t directly associated with the rust-lang organization.)

Drawbacks

One possibility is that decentralized decisions may lead to a lack of coherence in the overall design of Rust. However, the existence of the core team – and the fact that subteam leaders will thus remain in close communication on cross-cutting concerns in particular – serves to greatly mitigate that risk.

As with any change to governance, there is risk that this RFC would harm processes that are working well. In particular, bringing on a large number of new people into official decision-making roles carries a risk of culture clash or problems with consensus-building.

By setting up this change as a relatively slow build-out from the current core team, some of this risk is mitigated: it’s not a radical restructuring, but rather a refinement of the current process. In particular, today core team members routinely seek input directly from other community members who would be likely subteam members; in some ways, this RFC just makes that process more official.

For the moderation subteam, there is a significant shift toward strong enforcement of the CoC, and with that a risk of over-application: the goal is to make discourse safe and productive, not to introduce fear of violating the CoC. The moderation guidelines, careful selection of moderators, and ability to withdraw moderators mitigate this risk.

Alternatives

There are numerous other forms of open-source governance out there, far more than we can list or detail here. And in any case, this RFC is intended as an expansion of Rust’s existing governance to address a few scaling problems, rather than a complete rethink.

Mozilla’s module system was a partial inspiration for this RFC. The proposal here can be seen as an evolution of the module system where the subteam leaders (module owners) are integrated into an explicit core team, providing for tighter intercommunication and a more unified sense of vision and purpose. Alternatively, the proposal is an evolution of the current core team structure to include subteams.

One seemingly minor, but actually important aspect is naming:

  • The name “subteam” (from jQuery) felt like a better fit than “module” both to avoid confusion (having two different kinds of modules associated with Mozilla seems problematic) and because it emphasizes the more unified nature of this setup.

  • The term “leader” was chosen to reflect that there is a vision for each subteam (as part of the larger vision for Rust), which the leader is responsible for moving the subteam toward. Notably, this is how “module owner” is actually defined in Mozilla’s module system:

    A “module owner” is the person to whom leadership of a module’s work has been delegated.

  • The term “team member” is just following standard parlance. It could be replaced by something like “peer” (following the module system tradition), or some other term that is less bland than “member”. Ideally, the term would highlight the significant stature of team membership: being part of the decision-making group for a substantial area of the Rust project.

Unresolved questions

Subteams

This RFC purposefully leaves several subteam-level questions open:

  • What is the exact venue and cadence for subteam decision-making?
  • Do subteams have dedicated IRC channels or other forums? (This RFC stipulates only dedicated discourse tags.)
  • How large is each subteam?
  • What are the policies for when RFCs are required, or when PRs may be reviewed directly?

These questions are left to be addressed by subteams after their formation, in part because good answers will likely require some iteration to discover.

Broader questions

There are many other questions that this RFC doesn’t seek to address, and this is largely intentional. For one, it avoids trying to set out too much structure in advance, making it easier to iterate on the mechanics of subteams. In addition, there is a danger of too much policy and process, especially given that this RFC is aimed to improve the scalability of decision-making. It should be clear that this RFC is not the last word on governance, and over time we will probably want to grow more explicit policies in other areas – but a lightweight, iterative approach seems the best way to get there.

  • Feature Name: remove-static-assert
  • Start Date: 2015-04-28
  • RFC PR: rust-lang/rfcs#1096
  • Rust Issue: https://github.com/rust-lang/rust/pull/24910

Summary

Remove the static_assert feature.

Motivation

To recap, static_assert looks like this:

#![feature(static_assert)]
#[static_assert]
static assertion: bool = true;

If assertion is false instead, this fails to compile:

error: static assertion failed
static assertion: bool = false;
                         ^~~~~

If you don’t have the feature flag, you get another interesting error:

error: `#[static_assert]` is an experimental feature, and has a poor API

Throughout its life, static_assert has been… weird. Graydon suggested it in May of 2013, and it was implemented shortly after. Another issue was created to give it a ‘better interface’. Here’s why:

The biggest problem with it is you need a static variable with a name, that goes through trans and ends up in the object file.

In other words, assertion above ends up as a symbol in the final output. Not something you’d usually expect from some kind of static assertion.

So why not improve static_assert? With compile-time function evaluation, a ‘static assertion’ doesn’t need dedicated language semantics: both const functions and full-blown CTFE are useful features in their own right that we’ve said we want in Rust. In light of their eventual addition, static_assert no longer makes sense.
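For flavor, here is roughly what the CTFE-based replacement looks like, written with const evaluation as it exists in much later Rust (nothing like this was available when this RFC was written):

const ASSERTION: bool = true;

// Evaluated entirely at compile time; if the condition is false,
// compilation fails, and no symbol ends up in the object file.
const _: () = assert!(ASSERTION, "static assertion failed");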

static_assert isn’t used by the compiler at all.

Detailed design

Remove static_assert. Implementation submitted here.

Drawbacks

Why should we not do this?

Alternatives

This feature is pretty binary: we either remove it, or we don’t. We could keep the feature, but build out some sort of alternate version that’s not as weird.

Unresolved questions

None with the design, only “should we do this?”

Summary

Rename .connect() to .join() in SliceConcatExt.

Motivation

Rust has a string concatenation method named .connect() in SliceConcatExt. However, this does not align with the precedents in other languages. Most languages use .join() for that purpose, as seen later.

This is probably because, in ancient Rust, join was a keyword used to join a task. However, join retired as a keyword in 2011 with the commit rust-lang/rust@d1857d3. While .connect() is technically correct, the name may not be directly inferred by users coming from mainstream languages. There was a question about this on reddit.

The languages that use the name of join are:

The languages not using join are as follows. Interestingly, they are all functional-ish languages.

Note that Rust also has .concat() in SliceConcatExt, which is a specialized version of .connect() that uses an empty string as a separator.
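To illustrate the relationship (using the proposed name for the separator-taking method):

fn main() {
    let parts = ["foo", "bar", "baz"];
    // .join() inserts the separator between elements:
    assert_eq!(parts.join(", "), "foo, bar, baz");
    // .concat() is the specialized form: joining with an empty separator.
    assert_eq!(parts.concat(), "foobarbaz");
}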

Another reason is that the term “join” already has similar usage in the standard library. There are std::path::Path::join and std::env::join_paths which are used to join the paths.

Detailed design

While the SliceConcatExt trait is unstable, the .connect() method itself is marked as stable. So we need to:

  1. Deprecate the .connect() method.
  2. Add the .join() method.

Or, taking advantage of the trait’s instability, we may remove the old method entirely, as it’s still pre-1.0. However, the author considers that this may require even more consensus.

Drawbacks

Having a deprecated method in a newborn language is not pretty.

If we do remove the .connect() method, the language becomes pretty again, but it breaks the stability guarantee at the same time.

Alternatives

Keep the status quo. Improving searchability in the docs will help newcomers find the appropriate method.

Unresolved questions

Are there even more clever names for the method? How about .homura(), or .madoka()?

  • Feature Name: not applicable
  • Start Date: 2015-05-04
  • RFC PR: rust-lang/rfcs#1105
  • Rust Issue: N/A

Summary

This RFC proposes a comprehensive set of guidelines for which changes to stable APIs are considered breaking from a semver perspective, and which are not. These guidelines are intended for both the standard library and for the crates.io ecosystem.

This does not mean that the standard library should be completely free to make non-semver-breaking changes; there are sometimes still risks of ecosystem pain that need to be taken into account. Rather, this RFC makes explicit an initial set of changes that absolutely cannot be made without a major version bump.

Along the way, it also discusses some interactions with potential language features that can help mitigate pain for non-breaking changes.

The RFC covers only API issues; other issues related to language features, lints, type inference, command line arguments, Cargo, and so on are considered out of scope.

The stability promise specifically does not apply to unstable features, even if they are accidentally usable on the stable release channel under certain conditions, such as due to bugs in the compiler.

Motivation

Both Rust and its library ecosystem have adopted semver, a technique for versioning platforms/libraries partly in terms of the effect on the code that uses them. In a nutshell, the versioning scheme has three components:

  1. Major: must be incremented for changes that break client code.
  2. Minor: incremented for backwards-compatible feature additions.
  3. Patch: incremented for backwards-compatible bug fixes.

Rust 1.0.0 will mark the beginning of our commitment to stability, and from that point onward it will be important to be clear about what constitutes a breaking change, in order for semver to play a meaningful role. As we will see, this question is more subtle than one might think at first – and the simplest approach would make it effectively impossible to grow the standard library.

The goal of this RFC is to lay out a comprehensive policy for what must be considered a breaking API change from the perspective of semver, along with some guidance about non-semver-breaking changes.

Detailed design

For clarity, in the rest of the RFC, we will use the following terms:

  • Major change: a change that requires a major semver bump.
  • Minor change: a change that requires only a minor semver bump.
  • Breaking change: a change that, strictly speaking, can cause downstream code to fail to compile.

What we will see is that in Rust today, almost any change is technically a breaking change. For example, given the way that globs currently work, adding any public item to a library can break its clients (more on that later). But not all breaking changes are equal.

So, this RFC proposes that all major changes are breaking, but not all breaking changes are major.

Overview

Principles of the policy

The basic design of the policy is that the same code should be able to run against different minor revisions. Furthermore, minor changes should require at most a few local annotations to the code you are developing, and in principle no changes to your dependencies.

In more detail:

  • Minor changes should require at most minor amounts of work upon upgrade. For example, changes that may require occasional type annotations or use of UFCS to disambiguate are not automatically “major” changes. (But in such cases, one must evaluate how widespread these “minor” changes are).

  • In principle, it should be possible to produce a version of dependency code that will not break when upgrading other dependencies, or Rust itself, to a new minor revision. This goes hand-in-hand with the above bullet; as we will see, it’s possible to save a fully “elaborated” version of upstream code that does not require any disambiguation. The “in principle” refers to the fact that getting there may require some additional tooling or language support, which this RFC outlines.

That means that any breakage in a minor release must be very “shallow”: it must always be possible to locally fix the problem through some kind of disambiguation that could have been done in advance (by using more explicit forms) or other annotation (like disabling a lint). It means that minor changes can never leave you in a state that requires breaking changes to your own code.

Although this general policy allows some (very limited) breakage in minor releases, it is not a license to make these changes blindly. The breakage that this RFC permits, aside from being very simple to fix, is also unlikely to occur often in practice. The RFC will discuss measures that should be employed in the standard library to ensure that even these minor forms of breakage do not cause widespread pain in the ecosystem.

Scope of the policy

The policy laid out by this RFC applies to stable, public APIs in the standard library. Eventually, stability attributes will be usable in external libraries as well (this will require some design work), but for now public APIs in external crates should be understood as de facto stable after the library reaches 1.0.0 (per semver).

Policy by language feature

Most of the policy is simplest to lay out with reference to specific language features and the way that APIs using them can, and cannot, evolve in a minor release.

Breaking changes are assumed to be major changes unless otherwise stated. The RFC covers many, but not all breaking changes that are major; it covers all breaking changes that are considered minor.

Crates

Major change: going from stable to nightly

Changing a crate from working on stable Rust to requiring a nightly is considered a breaking change. That includes using #![feature] directly, or using a dependency that does so. Crate authors should consider using Cargo “features” for their crate to make such use opt-in.
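A sketch of that opt-in pattern, with a hypothetical Cargo feature named “unstable” gating all nightly-only code:

// In lib.rs: the nightly feature gate is active only when the crate is
// built with the (hypothetical) "unstable" Cargo feature enabled, so
// stable users are unaffected.
#![cfg_attr(feature = "unstable", feature(test))]

#[cfg(feature = "unstable")]
pub mod nightly_only {
    // Nightly-only APIs live here, opt-in for downstream crates.
}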

Minor change: altering the use of Cargo features

Cargo packages can provide opt-in features, which enable #[cfg] options. When a common dependency is compiled, it is done so with the union of all features opted into by any packages using the dependency. That means that adding or removing a feature could technically break other, unrelated code.

However, such breakage always represents a bug: packages are supposed to support any combination of features, and if another client of the package depends on a given feature, that client should specify the opt-in themselves.

Modules

Major change: renaming/moving/removing any public items.

Although renaming an item might seem like a minor change, according to the general policy design this is not a permitted form of breakage: it’s not possible to annotate code in advance to avoid the breakage, nor is it possible to prevent the breakage from affecting dependencies.

Of course, much of the effect of renaming/moving/removing can be achieved by instead using deprecation and pub use, and the standard library should not be afraid to do so! In the long run, we should consider hiding at least some old deprecated items from the docs, and could even consider putting out a major version solely as a kind of “garbage collection” for long-deprecated APIs.
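A sketch of the pub use approach (module and type names hypothetical):

// The item has moved to a new home...
pub mod new_home {
    pub struct Widget;
}

// ...but the old path is preserved as a re-export, so existing imports of
// `crate_name::Widget` keep compiling; a deprecation notice can then point
// users at the new location.
pub use new_home::Widget;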

Minor change: adding new public items.

Note that adding new public items is currently a breaking change, due to glob imports. For example, the following snippet of code will break if the foo module introduces a public item called bar:

use foo::*;
fn bar() { ... }

The problem here is that glob imports currently do not allow any of their imports to be shadowed by an explicitly-defined item.

This is considered a minor change because, under the principles of this RFC, the glob imports could have been written as more explicit (expanded) use statements. It is also plausible to do this expansion automatically for a crate’s dependencies, to prevent breakage in the first place.

(This RFC also suggests permitting shadowing of a glob import by any explicit item. This has been the intended semantics of globs, but has not been implemented. The details are left to a future RFC, however.)

Structs

See “Signatures in type definitions” for some general remarks about changes to the actual types in a struct definition.

Major change: adding a private field when all current fields are public.

This change has the effect of making external struct literals impossible to write, which can break code irreparably.

Major change: adding a public field when no private field exists.

This change retains the ability to use struct literals, but it breaks existing uses of such literals; it likewise breaks exhaustive matches against the struct.

Minor change: adding or removing private fields when at least one already exists (before and after the change).

No existing code could be relying on struct literals for the struct, nor on exhaustively matching its contents, and client code will likewise be oblivious to the addition of further private fields.

For tuple structs, this is only a minor change if furthermore all fields are currently private. (Tuple structs with mixtures of public and private fields are bad practice in any case.)
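A sketch of the safe configuration (names hypothetical):

pub struct Config {
    pub verbose: bool,
    limit: usize, // at least one private field, before and after the change
}

// Downstream code can neither construct `Config` with a struct literal nor
// match it exhaustively, so further private fields can come and go freely.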

Minor change: going from a tuple struct with all private fields (with at least one field) to a normal struct, or vice versa.

This is technically a breaking change:

// in some other module:
pub struct Foo(SomeType);

// in downstream code
let Foo(_) = foo;

Changing Foo to a normal struct can break code that matches on it – but there is never any real reason to match on it in that circumstance, since you cannot extract any fields or learn anything of interest about the struct.

Enums

See “Signatures in type definitions” for some general remarks about changes to the actual types in an enum definition.

Major change: adding new variants.

Exhaustiveness checking means that a match that explicitly checks all the variants for an enum will break if a new variant is added. It is not currently possible to defend against this breakage in advance.

A postponed RFC discusses a language feature that allows an enum to be marked as “extensible”, which modifies the way that exhaustiveness checking is done and would make it possible to extend the enum without breakage.
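For flavor, this is the shape the idea eventually took in today’s Rust as the #[non_exhaustive] attribute (a sketch; the attribute did not exist at the time of this RFC):

// Crate A
    #[non_exhaustive]
    pub enum Error {
        NotFound,
        PermissionDenied,
    }

// Crate B
    fn describe(e: &crateA::Error) -> &'static str {
        match e {
            crateA::Error::NotFound => "not found",
            crateA::Error::PermissionDenied => "permission denied",
            _ => "unknown", // required arm: new variants cannot break this match
        }
    }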

Major change: adding new fields to a variant.

If the enum is public, so is the full contents of all of its variants. As per the rules for structs, this means it is not allowed to add any new fields (which will automatically be public).

If you wish to allow for this kind of extensibility, consider introducing a new, explicit struct for the variant up front.
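A sketch of that defensive pattern (names hypothetical):

pub struct TimeoutData {
    pub after_ms: u64,
    _private: (), // private field up front, so more fields can be added later
}

pub enum Event {
    Timeout(TimeoutData),
}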

Traits

Major change: adding a non-defaulted item.

Adding any item without a default will immediately break all trait implementations.

It’s possible that in the future we will allow some kind of “sealing” to say that a trait can only be used as a bound, not to provide new implementations; such a trait would allow arbitrary items to be added.

Major change: any non-trivial change to item signatures.

Because traits have both implementors and consumers, any change to the signature of e.g. a method will affect at least one of the two parties. So, for example, abstracting a concrete method to use generics instead might work fine for clients of the trait, but would break existing implementors. (Note, as above, the potential for “sealed” traits to alter this dynamic.)

Minor change: adding a defaulted item.

Adding a defaulted item is technically a breaking change:

trait Trait1 {}
trait Trait2 {
    fn foo(&self);
}

fn use_both<T: Trait1 + Trait2>(t: &T) {
    t.foo()
}

If a foo method is added to Trait1, even with a default, it would cause a dispatch ambiguity in use_both, since the call to foo could be referring to either trait.

(Note, however, that existing implementations of the trait are fine.)

According to the basic principles of this RFC, such a change is minor: it is always possible to annotate the call t.foo() to be more explicit in advance using UFCS: Trait2::foo(t). This kind of annotation could be done automatically for code in dependencies (see Elaborated source). And it would also be possible to mitigate this problem by allowing method renaming on trait import.
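Applied to the example above, the future-proofed form of use_both is:

fn use_both<T: Trait1 + Trait2>(t: &T) {
    // UFCS names the trait explicitly, so a later (even defaulted) `foo`
    // on Trait1 cannot make this call ambiguous:
    Trait2::foo(t)
}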

While the scenario of adding a defaulted method to a trait may seem somewhat obscure, the exact same hazards arise with implementing existing traits (see below), which is clearly vital to allow; we apply a similar policy to both.

All that said, it is incumbent on library authors to ensure that such “minor” changes are in fact minor in practice: if a conflict like t.foo() is likely to arise at all often in downstream code, it would be advisable to explore a different choice of names. More guidelines for the standard library are given later on.

There are a few circumstances when adding a defaulted item is still a major change:

  • The new item would change the trait from object safe to non-object safe.
  • The trait has a defaulted associated type and the item being added is a defaulted function/method. In this case, existing impls that override the associated type will break, since the function/method default will not apply. (See the associated item RFC).
  • Adding a default to an existing associated type is likewise a major change if the trait has defaulted methods, since it will invalidate use of those defaults for the methods in existing trait impls.

Minor change: adding a defaulted type parameter.

As with “Signatures in type definitions”, traits are permitted to add new type parameters as long as defaults are provided (which is backwards compatible).

Trait implementations

Major change: implementing any “fundamental” trait.

A recent RFC introduced the idea of “fundamental” traits which are so basic that not implementing such a trait right off the bat is considered a promise that you will never implement the trait. The Sized and Fn traits are examples.

The coherence rules take advantage of fundamental traits in such a way that adding a new implementation of a fundamental trait to an existing type can cause downstream breakage. Thus, such impls are considered major changes.

Minor change: implementing any non-fundamental trait.

Unfortunately, implementing any existing trait can cause breakage:

// Crate A
    pub trait Trait1 {
        fn foo(&self);
    }

    pub struct Foo; // does not implement Trait1

// Crate B
    use crateA::Trait1;

    trait Trait2 {
        fn foo(&self);
    }

    impl Trait2 for crateA::Foo { .. }

    fn use_foo(f: &crateA::Foo) {
        f.foo()
    }

If crate A adds an implementation of Trait1 for Foo, the call to f.foo() in crate B will yield a dispatch ambiguity (much like the one we saw for defaulted items). Thus technically implementing any existing trait is a breaking change! Completely prohibiting such a change is clearly a non-starter.

However, as before, this kind of breakage is considered “minor” by the principles of this RFC (see “Adding a defaulted item” above).

Inherent implementations

Minor change: adding any inherent items.

Adding an inherent item cannot lead to dispatch ambiguity, because inherent items trump any trait items with the same name.

However, introducing an inherent item can lead to breakage if the signature of the item does not match that of an in-scope, implemented trait:

// Crate A
    pub struct Foo;

// Crate B
    trait Trait {
        fn foo(&self);
    }

    impl Trait for crateA::Foo { .. }

    fn use_foo(f: &crateA::Foo) {
        f.foo()
    }

If crate A adds a method:

impl Foo {
    fn foo(&self, x: u8) { ... }
}

then crate B would no longer compile, since dispatch would prefer the inherent impl, which has the wrong type.

Once more, this is considered a minor change, since UFCS can disambiguate (see “Adding a defaulted item” above).
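In the example above, the disambiguated form of use_foo would be:

fn use_foo(f: &crateA::Foo) {
    // Explicitly selects the trait method, so an inherent `foo` added to
    // `Foo` later cannot change which function is called:
    Trait::foo(f)
}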

It’s worth noting, however, that if the signatures did happen to match then the change would no longer cause a compilation error, but might silently change runtime behavior. The case where the same method for the same type has meaningfully different behavior is considered unlikely enough that the RFC is willing to permit it to be labeled as a minor change – and otherwise, inherent methods could never be added after the fact.

Other items

Most of the remaining kinds of items do not raise any unique concerns:

Cross-cutting concerns

Behavioral changes

This RFC is largely focused on API changes which may, in particular, cause downstream code to stop compiling. But in some sense it is even more pernicious to make a change that allows downstream code to continue compiling, but causes its runtime behavior to break.

This RFC does not attempt to provide a comprehensive policy on behavioral changes, which would be extremely difficult. In general, APIs are expected to provide explicit contracts for their behavior via documentation, and behavior that is not part of this contract is permitted to change in minor revisions. (Remember: this RFC is about setting a minimum bar for when major version bumps are required.)

This policy will likely require some revision over time, to become more explicit and perhaps lay out some best practices.

Signatures in type definitions

Major change: tightening bounds.

Adding new constraints on existing type parameters is a breaking change, since existing uses of the type definition can break. So the following is a major change:

// MAJOR CHANGE

// Before
struct Foo<A> { .. }

// After
struct Foo<A: Clone> { .. }

Minor change: loosening bounds.

Loosening bounds, on the other hand, cannot break code because when you reference Foo<A>, you do not learn anything about the bounds on A. (This is why you have to repeat any relevant bounds in impl blocks for Foo, for example.) So the following is a minor change:

// MINOR CHANGE

// Before
struct Foo<A: Clone> { .. }

// After
struct Foo<A> { .. }

Minor change: adding defaulted type parameters.

All existing references to a type/trait definition continue to compile and work correctly after a new defaulted type parameter is added. So the following is a minor change:

// MINOR CHANGE

// Before
struct Foo { .. }

// After
struct Foo<A = u8> { .. }

Minor change: generalizing to generics.

A struct or enum field can change from a concrete type to a generic type parameter, provided that the change results in an identical type for all existing use cases. For example, the following change is permitted:

// MINOR CHANGE

// Before
struct Foo(pub u8);

// After
struct Foo<T = u8>(pub T);

because existing uses of Foo are shorthand for Foo<u8> which yields the identical field type. (Note: this is not actually true today, since default type parameters are not fully implemented. But this is the intended semantics.)

On the other hand, the following is not permitted:

// MAJOR CHANGE

// Before
struct Foo<T = u8>(pub T, pub u8);

// After
struct Foo<T = u8>(pub T, pub T);

since there may be existing uses of Foo with a non-default type parameter which would break as a result of the change.

It’s also permitted to change from a generic type to a more-generic one in a minor revision:

// MINOR CHANGE

// Before
struct Foo<T>(pub T, pub T);

// After
struct Foo<T, U = T>(pub T, pub U);

since, again, all existing uses of the type Foo<T> will yield the same field types as before.

Signatures in functions

All of the changes mentioned below are considered major changes in the context of trait methods, since they can break implementors.

Major change: adding/removing arguments.

At the moment, Rust does not provide defaulted arguments, so any change in arity is a breaking change.

Minor change: introducing a new type parameter.

Technically, adding a (non-defaulted) type parameter can break code:

// MINOR CHANGE (but causes breakage)

// Before
fn foo<T>(...) { ... }

// After
fn foo<T, U>(...) { ... }

will break any calls like foo::<u8>. However, such explicit calls are rare enough (and can usually be written in other ways) that this breakage is considered minor. (However, one should take into account how likely it is that the function in question is being called with explicit type arguments). This RFC also suggests adding a ... notation to explicit parameter lists to keep them open-ended (see suggested language changes).

Such changes are an important ingredient of abstracting to use generics, as described next.

Minor change: generalizing to generics.

The type of an argument to a function, or its return value, can be generalized to use generics, including by introducing a new type parameter (as long as it can be instantiated to the original type). For example, the following change is allowed:

// MINOR CHANGE

// Before
fn foo(x: u8) -> u8;
fn bar<T: Iterator<Item = u8>>(t: T);

// After
fn foo<T: Add>(x: T) -> T;
fn bar<T: IntoIterator<Item = u8>>(t: T);

because all existing uses are instantiations of the new signature. On the other hand, the following isn’t allowed in a minor revision:

// MAJOR CHANGE

// Before
fn foo(x: Vec<u8>);

// After
fn foo<T: Copy + IntoIterator<Item = u8>>(x: T);

because the generics include a constraint not satisfied by the original type.

Introducing generics in this way can potentially create type inference failures, but these are considered acceptable per the principles of the RFC: they only require local annotations that could have been inserted in advance.

Perhaps somewhat surprisingly, generalization applies to trait objects as well, given that every trait implements itself:

// MINOR CHANGE

// Before
fn foo(t: &Trait);

// After
fn foo<T: Trait + ?Sized>(t: &T);

(The use of ?Sized is essential; otherwise you couldn’t recover the original signature).

Lints

Minor change: introducing new lint warnings/errors

Lints are considered advisory, and changes that cause downstream code to receive additional lint warnings/errors are still considered “minor” changes.

Making this work well in practice will likely require some infrastructure work along the lines of this RFC issue.

Mitigation for minor changes

The Crater tool

@brson has been hard at work on a tool called “Crater” which can be used to exercise changes on the entire crates.io ecosystem, looking for regressions. This tool will be indispensable when weighing the costs of a minor change that might cause some breakage – we can actually gauge what the breakage would look like in practice.

While this would, of course, miss code not available publicly, the hope is that code on crates.io is a broadly representative sample, good enough to turn up problems.

Any breaking-but-minor change to the standard library must be evaluated through Crater before being committed.

Nightlies

One line of defense against a “minor” change causing significant breakage is the nightly release channel: we can get feedback about breakage long before it even makes it into a beta release. And of course the beta cycle itself provides another line of defense.

Elaborated source

When compiling upstream dependencies, it is possible to generate an “elaborated” version of the source code where all dispatch is resolved to explicit UFCS form, all types are annotated, and all glob imports are replaced by explicit imports.

This fully-elaborated form is almost entirely immune to breakage due to any of the “minor changes” listed above.

You could imagine Cargo storing this elaborated form for dependencies upon compilation. That would in turn make it easy to update Rust, or some subset of dependencies, without breaking any upstream code (even in minor ways). You would be left only with very small, local changes to make to the code you own.

While this RFC does not propose any such tooling change right now, the point is mainly that there are a lot of options if minor changes turn out to cause breakage more often than anticipated.

Trait item renaming

One very useful mechanism would be the ability to import a trait while renaming some of its items, e.g. use some_mod::SomeTrait with {foo_method as bar}. In particular, when methods happen to conflict across traits defined in separate crates, a user of the two traits could rename one of the methods out of the way.

Thoughts on possible language changes (unofficial)

The following is just a quick sketch of some focused language changes that would help our API evolution story.

Glob semantics

As already mentioned, the fact that glob imports currently allow no shadowing is deeply problematic: in a technical sense, it means that the addition of any public item can break downstream code arbitrarily.

It would be much better for API evolution (and for ergonomics and intuition) if explicitly-defined items trump glob imports. But this is left to a future RFC.

Globs with fine-grained control

Another useful tool for working with globs would be the ability to exclude certain items from a glob import, e.g. something like:

use some_module::{* without Foo};

This is especially useful for the case where multiple modules being glob imported happen to export items with the same name.

Another possibility would be to not make it an error for two glob imports to bring the same name into scope, but to generate the error only at the point that the imported name was actually used. Then collisions could be resolved simply by adding a single explicit, shadowing import.

Default type parameters

Some of the minor changes for moving to more generic code depend on an interplay between defaulted type parameters and type inference, which has been accepted as an RFC but not yet implemented.

“Extensible” enums

There is already an RFC for an enum annotation that would make it possible to add variants without ever breaking downstream code.

Sealed traits

The ability to annotate a trait with some “sealed” marker, saying that no external implementations are allowed, would be useful in certain cases where a crate wishes to define a closed set of types that implements a particular interface. Such an attribute would make it possible to evolve the interface without a major version bump (since no downstream implementors can exist).
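No such marker exists yet, but the effect can already be approximated with module privacy; a sketch of that workaround (trait and type names hypothetical):

mod private {
    pub trait Sealed {} // not nameable outside this crate
}

pub trait Backend: private::Sealed {
    fn run(&self);
}

pub struct Local;
impl private::Sealed for Local {}
impl Backend for Local {
    fn run(&self) { /* ... */ }
}

// Downstream crates can use `Backend` as a bound, but cannot implement it:
// doing so would require implementing the unexported `Sealed` supertrait.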

Defaulted parameters

Also known as “optional arguments” – an oft-requested feature. Allowing arguments to a function to be optional makes it possible to add new arguments after the fact without a major version bump.

Open-ended explicit type parameters

One hazard is that with today’s explicit type parameter syntax, you must always specify all type parameters: foo::<T, U>(x, y). That means that adding a new type parameter to foo can break code, even if a default is provided.

This could be easily addressed by adding a notation like ... to leave additional parameters unspecified: foo::<T, ...>(x, y).

[Amendment] Misuse of accessible(..)

RFC 2523 introduces #[cfg(accessible($path))], which allows conditional compilation based on whether the external $path is accessible from the current crate. When combined with #[cfg(feature = "unstable")], this has certain breakage risks. Such breakage due to misuse, as outlined in the RFC, is considered acceptable and not covered by our stability promises. Please see the RFC for more details.

Drawbacks and Alternatives

The main drawback to the approach laid out here is that it makes the stability and semver guarantees a bit fuzzier: the promise is not that code will never break, full stop, but rather that minor release breakage is of an extremely limited form, for which there are a variety of mitigation strategies. This approach tries to strike a middle ground between a very hard line for stability (which, for Rust, would rule out many forms of extension) and willy-nilly breakage: it’s an explicit, but pragmatic policy.

An alternative would be to take a harder line and find some other way to allow API evolution. Supposing that we resolved the issues around glob imports, the main problems with breakage have to do with adding new inherent methods or trait implementations – both of which are vital forms of evolution. It might be possible, in the standard library case, to provide some kind of version-based opt in to this evolution: a crate could opt in to breaking changes for a particular version of Rust, which might in turn be provided only through some cfg-like mechanism.

Note that these strategies are not mutually exclusive. Rust’s development process has involved a very steady, strong stream of breakage, and while we need to be very serious about stabilization, it is possible to take an iterative approach. The changes considered “major” by this RFC already move the bar very significantly from what was permitted pre-1.0. It may turn out that even the minor forms of breakage permitted here are, in the long run, too much to tolerate; at that point we could revise the policies here and explore some opt-in scheme, for example.

Unresolved questions

Behavioral issues

  • Is it permitted to change a contract from “abort” to “panic”? What about from “panic” to “return an Err”?

  • Should we try to lay out more specific guidance for behavioral changes at this point?

Summary

Add an expect method to the Result type, bounded to E: Debug

Motivation

While Result::unwrap exists, it does not allow annotating the panic message with the operation attempted (e.g. which file was being opened). This is at odds with Option, which includes both unwrap and expect (with the latter taking an arbitrary failure message).

Detailed design

Add a new method to the same impl block as Result::unwrap that takes a &str message and returns T if the Result was Ok. If the Result was Err, it panics with both the provided message and the error value.

The format of the error message is left undefined in the documentation, but will most likely be the following

panic!("{}: {:?}", msg, e)

Drawbacks

  • It involves adding a new method to a core rust type.
  • The panic message format is less obvious than it is with Option::expect (where the panic message is the message passed)

Alternatives

  • We are perfectly free to not do this.
  • A macro could be introduced to fill the same role (which would allow arbitrary formatting of the panic message).

Unresolved questions

Are there any issues with the proposed format of the panic string?

Summary

This RFC has the goal of defining what sorts of breaking changes we will permit for the Rust language itself, and giving guidelines for how to go about making such changes.

Motivation

With the release of 1.0, we need to establish clear policy on what precisely constitutes a “minor” vs “major” change to the Rust language itself (as opposed to libraries, which are covered by RFC 1105). This RFC proposes that minor releases may only contain breaking changes that fix compiler bugs or other type-system issues. Primarily, this means soundness issues where “innocent” code can cause undefined behavior (in the technical sense), but it also covers cases like compiler bugs and tightening up the semantics of “underspecified” parts of the language (more details below).

However, simply landing all breaking changes immediately could be very disruptive to the ecosystem. Therefore, the RFC also proposes specific measures to mitigate the impact of breaking changes, and some criteria when those measures might be appropriate.

In rare cases, it may be deemed a good idea to make a breaking change that is not a soundness problem or compiler bug, but rather correcting a defect in design. Such cases should be rare. But if such a change is deemed worthwhile, then the guidelines given here can still be used to mitigate its impact.

Detailed design

The detailed design is broken into two major sections: how to address soundness changes, and how to address other, opt-in style changes. We do not discuss non-breaking changes here, since obviously those are safe.

Soundness changes

When compiler or type-system bugs are encountered in the language itself (as opposed to in a library), clearly they ought to be fixed. However, it is important to fix them in such a way as to minimize the impact on the ecosystem.

The first step then is to evaluate the impact of the fix on the crates found in the crates.io website (using e.g. the crater tool). If impact is found to be “small” (which this RFC does not attempt to precisely define), then the fix can simply be landed. As today, the commit message of any breaking change should include the term [breaking-change] along with a description of how to resolve the problem, which helps those people who are affected to migrate their code. A description of the problem should also appear in the relevant subteam report.

In cases where the impact seems larger, any effort to ease the transition is sure to be welcome. The following are suggestions for possible steps we could take (not all of which will be applicable to all scenarios):

  1. Identify important crates (such as those with many dependents) and work with the crate author to correct the code as quickly as possible, ideally before the fix even lands.
  2. Work hard to ensure that the error message identifies the problem clearly and suggests the appropriate solution.
    • If we develop a rustfix tool, in some cases we may be able to extend that tool to perform the fix automatically.
  3. Provide an annotation that allows for a scoped “opt out” of the newer rules, as described below. While the change is still breaking, this at least makes it easy for crates to update and get back to compiling status quickly.
  4. Begin with a deprecation or other warning before issuing a hard error. In extreme cases, it might be nice to begin by issuing a deprecation warning for the unsound behavior, and only make the behavior a hard error after the deprecation has had time to circulate. This gives people more time to update their crates. However, this option may frequently not be available, because the source of a compilation error is often hard to pin down with precision.

Some of the factors that should be taken into consideration when deciding whether and how to minimize the impact of a fix:

  • How important is the change?
    • Soundness holes that can be easily exploited or which impact running code are obviously much more concerning than minor corner cases. This is somewhat in tension with the other factors: if there is, for example, a widely deployed vulnerability, fixing that vulnerability is important, but it will also cause a larger disruption.
  • How many crates on crates.io are affected?
    • This is a general proxy for the overall impact (since of course there will always be private crates that are not part of crates.io).
  • Were particularly vital or widely used crates affected?
    • This could indicate that the impact will be wider than the raw number would suggest.
  • Does the change silently change the result of running the program, or simply cause additional compilation failures?
    • The latter, while frustrating, are easier to diagnose.
  • What changes are needed to get code compiling again? Are those changes obvious from the error message?
    • The more cryptic the error, the more frustrating it is when compilation fails.

What is a “compiler bug” or “soundness change”?

In the absence of a formal spec, it is hard to define precisely what constitutes a “compiler bug” or “soundness change” (see also the section below on underspecified parts of the language). The obvious cases are soundness violations in a rather strict sense:

  • Cases where the user is able to produce Undefined Behavior (UB) purely from safe code.
  • Cases where the user is able to produce UB using standard library APIs or other unsafe code that “should work”.

However, there are other kinds of type-system inconsistencies that might be worth fixing, even if they cannot lead directly to UB. Bugs in the coherence system that permit uncontrolled overlap between impls are one example. Another example might be inference failures that cause code to compile which should not (because ambiguities exist). Finally, there is a list below of areas of the language which are generally considered underspecified.

We expect that there will be cases that fall on a grey line between bug and expected behavior, and discussion will be needed to determine where it falls. The recent conflict between Rc and scoped threads is an example of such a discussion: it was clear that both APIs could not be legal, but not clear which one was at fault. The results of these discussions will feed into the Rust spec as it is developed.

Opting out

In some cases, it may be useful to permit users to opt out of new type rules. The intention is that this “opt out” is used as a temporary crutch to make it easy to get the code up and running. Typically this opt out will thus be removed in a later release. But in some cases, particularly those cases where the severity of the problem is relatively small, it could be an option to leave the “opt out” mechanism in place permanently. In either case, use of the “opt out” API would trigger the deprecation lint.

Note that we should make every effort to ensure that crates which employ this opt out can be used compatibly with crates that do not.

Changes that alter dynamic semantics versus typing rules

In some cases, fixing a bug may not cause crates to stop compiling, but rather will cause them to silently start doing something different than they were doing before. In cases like these, the same principle of using mitigation measures to lessen the impact (and ease the transition) applies, but the precise strategy to be used will have to be worked out on a more case-by-case basis. This is particularly relevant to the underspecified areas of the language described in the next section.

Our approach to handling dynamic drop is a good example. Because we expect that moving to the complete non-zeroing dynamic drop semantics will break code, we’ve made an intermediate change that altered the compiler to fill with a non-zero value, which helps to expose code that was implicitly relying on the current behavior (much of which has since been restructured in a more future-proof way).

Underspecified language semantics

There are a number of areas where the precise language semantics are currently somewhat underspecified. Over time, we expect to be fully defining the semantics of all of these areas. This may cause some existing code – and in particular existing unsafe code – to break or become invalid. Changes of this nature should be treated as soundness changes, meaning that we should attempt to mitigate the impact and ease the transition wherever possible.

Known areas where change is expected include the following:

  • Destructor semantics:
    • We plan to stop zeroing data and instead use marker flags on the stack, as specified in RFC 320. This may affect destructors that rely on overwriting memory or using the unsafe_no_drop_flag attribute.
    • Currently, panicking in a destructor can cause unintentional memory leaks and other poor behavior (see #14875, #16135). We are likely to make panic in a destructor simply abort, but the precise mechanism is not yet decided.
    • Order of dtor execution within a data structure is somewhat inconsistent (see #744).
  • The legal aliasing rules between unsafe pointers are not fully settled (see #19733).
  • The interplay of assoc types and lifetimes is not fully settled and can lead to unsoundness in some cases (see #23442).
  • The trait selection algorithm is expected to be improved and made more complete over time. It is possible that this will affect existing code.
  • Overflow semantics: in particular, we may have missed some cases.
  • Memory allocation in unsafe code is currently unstable. We expect to be defining safe interfaces as part of the work on supporting tracing garbage collectors (see #415).
  • The treatment of hygiene in macros is uneven (see #22462, #24278). In some cases, changes here may be backwards compatible, or may be more appropriate only with explicit opt-in (or perhaps an alternate macro system altogether, such as this proposal).
  • Lints will evolve over time (both the lints that are enabled and the precise cases that lints catch). We expect to introduce a means to limit the effect of these changes on dependencies.
  • Stack overflow is currently detected via a segmented stack check prologue and results in an abort. We expect to experiment with a system based on guard pages in the future.
  • We currently abort the process on OOM conditions (exceeding the heap space, overflowing the stack). We may attempt to panic in such cases instead if possible.
  • Some details of type inference may change. For example, we expect to implement the fallback mechanism described in RFC 213, and we may wish to make minor changes to accommodate overloaded integer literals. In some cases, type inference changes may be better handled via explicit opt-in.

There are other kinds of changes that can be made in a minor version that may break unsafe code but which are not considered breaking changes, because the unsafe code is relying on things known to be intentionally unspecified. One obvious example is the layout of data structures, which is considered undefined unless they have a #[repr(C)] attribute.

Although it is not directly covered by this RFC, it’s worth noting in passing that some of the CLI flags to the compiler may change in the future as well. The -Z flags are of course explicitly unstable, but some of the -C, rustdoc, and linker-specific flags are expected to evolve over time (see e.g. #24451).

[Amendment] Misuse of accessible(..)

RFC 2523 introduces #[cfg(accessible($path))], which allows conditional compilation based on whether the external $path is accessible from the current crate. When combined with #[cfg(feature = "unstable")], this has certain breakage risks. Such breakage due to misuse, as outlined in the RFC, is considered acceptable and not covered by our stability promises. Please see the RFC for more details.

Drawbacks

The primary drawback is that making breaking changes is disruptive, even when it is done with the best of intentions. The alternatives below list some ways that we could avoid breaking changes altogether, along with the downsides of each.

Notes on phasing

Alternatives

Rather than simply fixing soundness bugs, we could issue new major releases, or use some sort of opt-in mechanism to fix them conditionally. This was initially considered as an option, but eventually rejected for the following reasons:

  • Opting in to type system changes would cause deep splits between minor versions; it would also create a high maintenance burden in the compiler, since both older and newer versions would have to be supported.
  • It seems likely that all users of Rust will want to know that their code is sound and would not want to be working with unsafe constructs or bugs.
  • We already have several mitigation measures, such as opt-out or temporary deprecation, that can be used to ease the transition around a soundness fix. Moreover, separating out new type rules so that they can be “opted into” can be very difficult and would complicate the compiler internally; it would also make it harder to reason about the type system as a whole.

Unresolved questions

What precisely constitutes “small” impact? This RFC does not attempt to define when the impact of a patch is “small” or “not small”. We will have to develop guidelines over time based on precedent. One of the big unknowns is how indicative the breakage we observe on crates.io will be of the total breakage that will occur: it is certainly possible that all crates on crates.io work fine, but the change still breaks a large body of code we do not have access to.

What attribute should we use to “opt out” of soundness changes? The section on breaking changes indicated that it may sometimes be appropriate to include an “opt out” that people can use to temporarily revert to older, unsound type rules, but did not specify precisely what that opt-out should look like. Ideally, we would identify a specific attribute in advance that will be used for such purposes. In the past, we have simply created ad-hoc attributes (e.g., #[old_orphan_check]), but because custom attributes are forbidden by stable Rust, this has the unfortunate side-effect of meaning that code which opts out of the newer rules cannot be compiled on older compilers (even though it’s using the older type system rules). If we introduce an attribute in advance we will not have this problem.

Are there any other circumstances in which we might perform a breaking change? In particular, it may happen from time to time that we wish to alter some detail of a stable component. If we believe that this change will not affect anyone, such a change may be worth doing, but we’ll have to work out more precise guidelines. RFC 1156 is an example.

Summary

Introduce the method split_at(&self, mid: usize) -> (&str, &str) on str, to divide a slice into two, just like we can with [T].

Motivation

Adding split_at is a measure to provide a method from [T] in a version that makes sense for str.

Once used to [T], users might even expect that split_at is present on str.

It is a simple method with an obvious implementation, but it provides convenience while working with string segmentation manually, which we already have ample tools for (for example the method find that returns the first matching byte offset).

Using split_at can lead to less repeated bounds checks, since it is easy to use cumulatively, splitting off a piece at a time.

This feature is requested in rust-lang/rust#18063

Detailed design

Introduce the method split_at(&self, mid: usize) -> (&str, &str) on str, to divide a slice into two.

mid will be a byte offset from the start of the string, and it must be on a character boundary. Both 0 and self.len() are valid splitting points.

split_at will be an inherent method on str where possible, and will be available from libcore and the layers above it.

The following is a working implementation, implemented as a trait just for illustration and to be testable as a custom extension:

trait SplitAt {
    fn split_at(&self, mid: usize) -> (&Self, &Self);
}

impl SplitAt for str {
    /// Divide one string slice into two at an index.
    ///
    /// The index `mid` is a byte offset from the start of the string
    /// that must be on a character boundary.
    ///
    /// Return slices `&self[..mid]` and `&self[mid..]`.
    ///
    /// # Panics
    ///
    /// Panics if `mid` is beyond the last character of the string,
    /// or if it is not on a character boundary.
    ///
    /// # Examples
    /// ```
    /// let s = "Löwe 老虎 Léopard";
    /// let first_space = s.find(' ').unwrap_or(s.len());
    /// let (a, b) = s.split_at(first_space);
    ///
    /// assert_eq!(a, "Löwe");
    /// assert_eq!(b, " 老虎 Léopard");
    /// ```
    fn split_at(&self, mid: usize) -> (&str, &str) {
        (&self[..mid], &self[mid..])
    }
}

split_at will use a byte offset (a.k.a. byte index) to be consistent with slicing and with the offsets used by interrogator methods such as find or iterators such as char_indices. Byte offsets are our standard lightweight position indicators that we use to support efficient operations on string slices.
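To illustrate the cumulative use mentioned in the motivation, here is a brief sketch (not part of the RFC text; the helper split_twice is hypothetical):

fn split_twice(s: &str) -> (&str, &str, &str) {
    // Peel off one piece at a time: each `find` only scans the remaining
    // tail, and each `split_at` performs a single boundary check. Adding 1
    // is safe on a match because ' ' is a one-byte character.
    let i = s.find(' ').map(|i| i + 1).unwrap_or(s.len());
    let (first, rest) = s.split_at(i);
    let j = rest.find(' ').map(|j| j + 1).unwrap_or(rest.len());
    let (second, tail) = rest.split_at(j);
    (first, second, tail)
}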

Implementing split_at_mut is not relevant for str at this time.

Drawbacks

  • split_at panics on (1) an index out of bounds and (2) an index not on a character boundary.
  • Possible name confusion with other str methods like .split().
  • According to our developing API evolution and semver guidelines this is a breaking change, but a (very) minor one. Adding methods is something we expect to be able to do. (See RFC PR #1105.)

Alternatives

  • Recommend other splitting methods, like the split iterators.
  • Stick to writing (&foo[..mid], &foo[mid..])

Unresolved questions

  • None

Summary

Provide a pair of intrinsic functions for hinting the likelihood of branches being taken.

Motivation

Branch prediction can have significant effects on the running time of some code, especially in tight inner loops which may be run millions of times. While in general programmers aren’t able to effectively provide hints to the compiler, there are cases where the likelihood of some branch being taken can be known.

For example: in arbitrary-precision arithmetic, operations are often performed in a base that is equal to 2^word_size. The most basic division algorithm, “Schoolbook Division”, has a step that will be taken in 2/B cases (where B is the base the numbers are in), given random input. On a 32-bit processor that is approximately one in two billion cases; for 64-bit it’s approximately one in nine quintillion cases.

Detailed design

Implement a pair of intrinsics likely and unlikely, both with signature fn(bool) -> bool which hint at the probability of the passed value being true or false. Specifically, likely hints to the compiler that the passed value is likely to be true, and unlikely hints that it is likely to be false. Both functions simply return the value they are passed.

The primary reason for this design is that it reflects common usage of this general feature in many C and C++ projects, most of which define simple LIKELY and UNLIKELY macros around the gcc __builtin_expect intrinsic. It also provides the most flexibility, allowing branches on any condition to be hinted at, even if the process that produced the branched-upon value is complex. For why an equivalent to __builtin_expect is not being exposed, see the Alternatives section.

There are no observable changes in behaviour from use of these intrinsics. It is valid to implement these intrinsics simply as the identity function. Though it is expected that the intrinsics provide information to the optimiser, that information is not guaranteed to change the decisions the optimiser makes.
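For illustration, a minimal sketch of how these intrinsics might be used, assuming they are exposed (as on today’s nightly) via std::intrinsics behind the core_intrinsics feature gate:

#![feature(core_intrinsics)]
use std::intrinsics::unlikely;

fn checked_div(a: u64, b: u64) -> Option<u64> {
    // Hint that division by zero is the rare case. Behaviour is identical
    // either way; only the optimiser's block layout may change.
    if unsafe { unlikely(b == 0) } {
        None
    } else {
        Some(a / b)
    }
}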

Drawbacks

The intrinsics cannot be used to hint at arms in match expressions. However, given that such hints would need to be attached to the arms’ patterns, a simple boolean intrinsic would not be sufficient for those purposes.

Alternatives

Expose an expect intrinsic. This is what gcc/clang does with __builtin_expect. However there is a restriction that the second argument be a constant value, a requirement that is not easily expressible in Rust code. The split into likely and unlikely intrinsics reflects the strategy we have used for similar restrictions like the ordering constraint of the atomic intrinsics.

Unresolved questions

None.

Summary

Allow equality, but not order, comparisons between fat raw pointers of the same type.

Motivation

Currently, fat raw pointers can’t be compared via either PartialEq or PartialOrd (attempting to do so causes an ICE). It seems to me that a primitive type like a fat raw pointer should implement equality in some way.

However, there doesn’t seem to be a sensible way to order raw fat pointers unless we take vtable addresses into account, which is relatively weird.

Detailed design

Implement PartialEq/Eq for fat raw pointers, defined as comparing both the unsize-info and the address. This means that these are true:

    &s as &fmt::Debug as *const _ == &s as &fmt::Debug as *const _ // of course
    &s.first_field as &fmt::Debug as *const _
        != &s as &fmt::Debug as *const _ // these are *different* (one
                                         // prints only the first field,
                                         // the other prints all fields).

But

    &s.first_field as &fmt::Debug as *const _ as *const () ==
        &s as &fmt::Debug as *const _ as *const () // addresses are equal

Drawbacks

Order comparisons may be useful for putting fat raw pointers into ordering-based data structures (e.g., binary search trees).

Alternatives

@nrc suggested to implement heterogeneous comparisons between all thin raw pointers and all fat raw pointers. I don’t like this because equality between fat raw pointers of different traits is false most of the time (unless one of the traits is a supertrait of the other and/or the only difference is in free lifetimes), and anyway you can always compare by casting both pointers to a common type.

It is also possible to implement ordering too, either in unsize -> addr lexicographic order or addr -> unsize lexicographic order.

Unresolved questions

What form of ordering should be adopted, if any?

Summary

Add some methods that already exist on slices to strings. Specifically, the following methods should be added:

  • str::into_string
  • String::into_boxed_str

Motivation

Conceptually, strings and slices are similar types. Many methods are already shared between the two types due to their similarity. However, not all methods are shared between the types, even though many could be. This is a little unexpected and inconsistent. Because of that, this RFC proposes to remedy this by adding a few methods to strings to even out these two types’ available methods.

Specifically, it is currently very difficult to construct a Box<str>, while it is fairly simple to make a Box<[T]> by using Vec::into_boxed_slice. This RFC proposes a means of creating a Box<str> by converting a String.

Detailed design

Add the following method to str, presumably as an inherent method:

  • into_string(self: Box<str>) -> String: Returns self as a String. This is equivalent to [T]’s into_vec.

Add the following method to String as an inherent method:

  • into_boxed_str(self) -> Box<str>: Returns self as a Box<str>, reallocating to cut off any excess capacity if needed. This is required to provide a safe means of creating Box<str>. This is equivalent to Vec<T>’s into_boxed_slice.
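A brief sketch of the resulting round trip between the two types:

fn demo() {
    let s = String::from("hello");
    // Shed any excess capacity and take ownership as a boxed str.
    let boxed: Box<str> = s.into_boxed_str();
    // Recover a growable `String`; the buffer is reused, not copied.
    let back: String = boxed.into_string();
    assert_eq!(back, "hello");
}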

Drawbacks

None, yet.

Alternatives

  • The original version of this RFC had a few extra methods:
    • str::chunks(&self, n: usize) -> Chunks: Returns an iterator that yields the characters (not bytes) of the string in groups of n at a time. Iterator element type: &str.

    • str::windows(&self, n: usize) -> Windows: Returns an iterator over all contiguous windows of character length n. Iterator element type: &str.

      This and str::chunks aren’t really useful without proper treatment of graphemes, so they were removed from the RFC.

    • <[T]>::subslice_offset(&self, inner: &[T]) -> usize: Returns the offset (in elements) of an inner slice relative to an outer slice. Panics if inner is not contained within self.

      str::subslice_offset isn’t yet stable and its usefulness is dubious, so this method was removed from the RFC.

Unresolved questions

None.

Summary

Adjust the object default bound algorithm for cases like &'x Box<Trait> and &'x Arc<Trait>. The existing algorithm would default to &'x Box<Trait+'x>. The proposed change is to default to &'x Box<Trait+'static>.

Note: This is a BREAKING CHANGE. The change has been implemented and its impact has been evaluated. It was found to cause no root regressions on crates.io. Nonetheless, to minimize impact, this RFC proposes phasing in the change as follows:

  • In Rust 1.2, a warning will be issued for code which will break when the defaults are changed. This warning can be disabled by using explicit bounds. The warning will only be issued when explicit bounds would be required in the future anyway.
  • In Rust 1.3, the change will be made permanent. Any code that has not been updated by that time will break.

Motivation

When we instituted default object bounds, RFC 599 specified that &'x Box<Trait> (and &'x mut Box<Trait>) should expand to &'x Box<Trait+'x> (and &'x mut Box<Trait+'x>). This is in contrast to a Box type that appears outside of a reference (e.g., Box<Trait>), which defaults to using 'static (Box<Trait+'static>). This decision was made because it meant that a function written like so would accept the broadest set of possible objects:

fn foo(x: &Box<Trait>) {
}

In particular, under the current defaults, foo can be supplied an object which references borrowed data. Given that foo is taking the argument by reference, it seemed like a good rule. Experience has shown otherwise (see below for some of the problems encountered).

This RFC proposes changing the default object bound rules so that the default is drawn from the innermost type that encloses the trait object. If there is no such type, the default is 'static. If the type is a reference (e.g., &'r Trait), then the default is the lifetime 'r of that reference. Otherwise, the type must in practice be some user-declared type, and the default is derived from the declaration: if the type declares a lifetime bound, then this lifetime bound is used; otherwise 'static is used. This means that (e.g.) &'r Box<Trait> would default to &'r Box<Trait+'static>, and &'r Ref<'q, Trait> (from RefCell) would default to &'r Ref<'q, Trait+'q>.
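Spelled out as code, the new defaults would expand like this (an illustrative sketch; each comment gives the shorthand that the explicit signature below it corresponds to):

trait Trait { }

// What `&Box<Trait>` will mean under the new rules:
fn f1<'r>(x: &'r Box<Trait + 'static>) { }

// What `&Trait` means (unchanged by this RFC):
fn f2<'r>(x: &'r (Trait + 'r)) { }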

Problems with the current default.

Same types, different expansions. One problem is fairly predictable: the current default means that identical types differ in their interpretation based on where they appear. This is something we have striven to avoid in general. So, as an example, this code will not type-check:

use std::mem;

trait Trait { }

struct Foo {
    field: Box<Trait>
}

fn do_something(f: &mut Foo, x: &mut Box<Trait>) {
    mem::swap(&mut f.field, &mut *x);
}

Even though x is a reference to a Box<Trait> and the type of field is a Box<Trait>, the expansions differ. x expands to &'x mut Box<Trait+'x> and the field expands to Box<Trait+'static>. In general, we have tried to ensure that if the type is typed precisely the same in a type definition and a fn definition, then those two types are equal (note that fn definitions allow you to omit things that cannot be omitted in types, so some types that you can enter in a fn definition, like &i32, cannot appear in a type definition).

Now, the same is of course true for the type Trait itself, which appears identically in different contexts and is expanded in different ways. This is not a problem here because the type Trait is unsized, which means that it cannot be swapped or moved, and hence the main sources of type mismatches are avoided.

Mental model. In general the mental model of the newer rules seems simpler: once you move a trait object into the heap (via Box, or Arc), you must explicitly indicate whether it can contain borrowed data or not. So long as you manipulate by reference, you don’t have to. In contrast, the current rules are more subtle, since objects in the heap may still accept borrowed data, if you have a reference to the box.

Poor interaction with the dropck rules. When implementing the newer dropck rules specified by RFC 769, we found a rather subtle problem that would arise with the current defaults. The precise problem is spelled out in appendix below, but the TL;DR is that if you wish to pass an array of boxed objects, the current defaults can be actively harmful, and hence force you to specify explicit lifetimes, whereas the newer defaults do something reasonable.

Detailed design

The rules for user-defined types from RFC 599 are altered as follows (text that is not changed is italicized):

  • If SomeType contains a single where-clause like T:'a, where T is some type parameter on SomeType and 'a is some lifetime, then the type provided as value of T will have a default object bound of 'a. An example of this is std::cell::Ref: a usage like Ref<'x, X> would change the default for object types appearing in X to be 'x.
  • If SomeType contains no where-clauses of the form T:'a, then the “base default” is used. The base default depends on the overall context:
    • in a fn body, the base default is a fresh inference variable.
    • outside of a fn body, such as in a fn signature, the base default is 'static. Hence Box<X> would typically have a default of 'static for X, regardless of whether it appears underneath an & or not. (Note that in a fn body, the inference is strong enough to adopt 'static if that is the necessary bound, or a looser bound if that would be helpful.)
  • If SomeType contains multiple where-clauses of the form T:'a, then the default is cleared and explicit lifetime bounds are required. There are no known examples of this in the standard library as this situation arises rarely in practice.

Timing and breaking change implications

This is a breaking change, and hence it behooves us to evaluate the impact and describe a procedure for making the change as painless as possible. One nice property of this change is that it only affects defaults, which means that it is always possible to write code that compiles both before and after the change by avoiding defaults in those cases where the new and old compiler disagree.

The estimated impact of this change is very low, for two reasons:

  • A recent test of crates.io found no regressions caused by this change (however, a previous run (from before Rust 1.0) found 8 regressions).
  • This feature was only recently stabilized as part of Rust 1.0 (and was only added towards the end of the release cycle), so there hasn’t been time for a large body of dependent code to arise outside of crates.io.

Nonetheless, to minimize impact, this RFC proposes phasing in the change as follows:

  • In Rust 1.2, a warning will be issued for code which will break when the defaults are changed. This warning can be disabled by using explicit bounds. The warning will only be issued when explicit bounds would be required in the future anyway.
    • Specifically, types that were written &Box<Trait> where the (boxed) trait object may contain references should now be written &Box<Trait+'a> to disable the warning.
  • In Rust 1.3, the change will be made permanent. Any code that has not been updated by that time will break.

Drawbacks

The primary drawback is that this is a breaking change, as discussed in the previous section.

Alternatives

Keep the current design, with its known drawbacks.

Unresolved questions

None.

Appendix: Details of the dropck problem

This appendix goes into detail about the sticky interaction with dropck that was uncovered. The problem arises if you have a function that wishes to take a mutable slice of objects, like so:

fn do_it(x: &mut [Box<FnMut()>]) { ... }

Here, &mut [..] is used because the objects are FnMut objects, and hence require &mut self to call. This function in turn is expanded to:

fn do_it<'x>(x: &'x mut [Box<FnMut()+'x>]) { ... }

Now callers might try to invoke the function as so:

do_it(&mut [Box::new(val1), Box::new(val2)])

Unfortunately, this code fails to compile – in fact, it cannot be made to compile without changing the definition of do_it, due to a sticky interaction between dropck and variance. The problem is that dropck requires that all data in the box strictly outlives the lifetime of the box’s owner. This is to prevent cyclic content. Therefore, the type of the objects must be Box<FnMut()+'R> where 'R is some region that strictly outlives the array itself (as the array is the owner of the objects). However, the signature of do_it demands that the reference to the array has the same lifetime as the trait objects within (and because this is an &mut reference and hence invariant, no approximation is permitted). This implies that the array must live for at least the region 'R. But we defined the region 'R to be some region that outlives the array, so we have a quandary.

The solution is to change the definition of do_it in one of two ways:

// Use explicit lifetimes to make it clear that the reference is not
// required to have the same lifetime as the objects themselves:
fn do_it1<'a,'b>(x: &'a mut [Box<FnMut()+'b>]) { ... }

// Specifying 'static is easier, but then the closures cannot
// capture the stack:
fn do_it2(x: &mut [Box<FnMut()+'static>]) { ... }

Under the proposed RFC, do_it2 would be the default. If one wanted to use lifetimes, then one would have to use explicit lifetime overrides as shown in do_it1. This is consistent with the mental model of “once you box up an object, you must add annotations for it to contain borrowed data”.

Summary

Introduce and implement IntoRaw{Fd, Socket, Handle} traits to complement the existing AsRaw{Fd, Socket, Handle} traits already in the standard library.

Motivation

The FromRaw{Fd, Socket, Handle} traits each take ownership of the provided handle; however, the AsRaw{Fd, Socket, Handle} traits do not give up ownership. Thus, converting from one handle wrapper to another (for example converting an open fs::File to a process::Stdio) requires the caller to either manually dup the handle or mem::forget the wrapper, which is unergonomic and prone to mistakes.

Traits such as IntoRaw{Fd, Socket, Handle} will allow for easily transferring ownership of OS handles, and it will allow wrappers to perform any cleanup/setup as they find necessary.

Detailed design

The IntoRaw{Fd, Socket, Handle} traits will behave exactly like their AsRaw{Fd, Socket, Handle} counterparts, except they will consume the wrapper before transferring ownership of the handle.

Note that these traits should not have a blanket implementation over T: AsRaw{Fd, Socket, Handle}: these traits should be opt-in so that implementors can decide if leaking through mem::forget is acceptable or another course of action is required.

// Unix
pub trait IntoRawFd {
    fn into_raw_fd(self) -> RawFd;
}

// Windows
pub trait IntoRawSocket {
    fn into_raw_socket(self) -> RawSocket;
}

// Windows
pub trait IntoRawHandle {
    fn into_raw_handle(self) -> RawHandle;
}
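For example, a wrapper type might implement the trait by delegating to its inner handle (a sketch; the Wrapper type is hypothetical):

use std::fs::File;
use std::os::unix::io::{IntoRawFd, RawFd};

struct Wrapper {
    file: File,
}

impl IntoRawFd for Wrapper {
    fn into_raw_fd(self) -> RawFd {
        // Any wrapper-specific teardown would happen here; ownership of
        // the descriptor is then handed to the caller without closing it.
        self.file.into_raw_fd()
    }
}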

Drawbacks

This adds three new traits and methods which would have to be maintained.

Alternatives

Instead of defining three new traits we could instead use the std::convert::Into<T> trait over the different OS handles. However, this approach will not offer a duality between methods such as as_raw_fd()/into_raw_fd(), but will instead be as_raw_fd()/into().

Another possibility is defining both the newly proposed traits as well as the Into<T> trait over the OS handles letting the caller choose what they prefer.

Unresolved questions

None at the moment.

Summary

Add support to the compiler to override the default allocator, allowing a different allocator to be used by default in Rust programs. Additionally, switch the default allocator for dynamic libraries and static libraries to the system malloc instead of jemalloc.

Note: this RFC has been superseded by RFC 1974.

Motivation

Note that this issue was discussed quite a bit in the past, and the meat of this RFC draws from Niko’s post.

Currently all Rust programs use jemalloc as their allocator by default; it is a fairly reasonable default, as it is commonly much faster than the system allocator. It is not desirable, however, when embedding Rust code into other runtimes. Using jemalloc implies that Rust will be using one allocator while the host application (e.g. Ruby, Firefox, etc) will be using a separate allocator. Having two allocators in one process generally hurts performance and is not recommended, so the Rust toolchain needs to provide a method to configure the allocator.

In addition to using an entirely separate allocator altogether, some Rust programs may want to simply instrument allocations or shim in additional functionality (such as memory tracking statistics). This is currently quite difficult to do, and would be accommodated with a custom allocation scheme.

Detailed design

The high level design can be found in this gist, but this RFC intends to expound on the idea to make it more concrete in terms of what the compiler implementation will look like. A sample implementation of this design is available.

High level design

The previously outlined design of this RFC from 10,000 feet (referred to below) looks like:

  1. Define a set of symbols which correspond to the APIs specified in alloc::heap. The liballoc library will call these symbols directly. Note that this means that each of the symbols take information like the size of allocations and such.
  2. Create two shim libraries which implement these allocation-related functions. Each shim is shipped with the compiler in the form of a static library. One shim will redirect to the system allocator, the other shim will bundle a jemalloc build along with Rust shims to redirect to jemalloc.
  3. Intermediate artifacts (rlibs) do not resolve this dependency; the symbols are just left dangling.
  4. When producing a “final artifact”, rustc by default links in one of two shims:
    • If we’re producing a staticlib or a dylib, link the system shim.
    • If we’re producing an exe and all dependencies are rlibs link the jemalloc shim.

The final link step will be optional, and one could link in any compliant allocator at that time if so desired.

New Attributes

Two new unstable attributes will be added to the compiler:

  • #![needs_allocator] indicates that a library requires the “allocation symbols” to link successfully. This attribute will be attached to liballoc and no other library should need to be tagged as such. Additionally, most crates don’t need to worry about this attribute as they’ll transitively link to liballoc.
  • #![allocator] indicates that a crate is an allocator crate. This attribute is also currently used for tagging FFI functions as “allocation functions” in order to leverage more LLVM optimizations.

All crates implementing the Rust allocation API must be tagged with #![allocator] to get properly recognized and handled.

New Crates

Two new unstable crates will be added to the standard distribution:

  • alloc_system is a crate that will be tagged with #![allocator] and will redirect allocation requests to the system allocator.
  • alloc_jemalloc is another allocator crate that will bundle a static copy of jemalloc to redirect allocations to.

Both crates will be available to link to manually, but they will not be available in stable Rust to start out.

Allocation functions

Each crate tagged #![allocator] is expected to provide the full suite of allocation functions used by Rust, defined as:

extern {
    fn __rust_allocate(size: usize, align: usize) -> *mut u8;
    fn __rust_deallocate(ptr: *mut u8, old_size: usize, align: usize);
    fn __rust_reallocate(ptr: *mut u8, old_size: usize, size: usize,
                         align: usize) -> *mut u8;
    fn __rust_reallocate_inplace(ptr: *mut u8, old_size: usize, size: usize,
                                 align: usize) -> usize;
    fn __rust_usable_size(size: usize, align: usize) -> usize;
}

The exact API of all these symbols is considered unstable (hence the leading __). This otherwise currently maps to what liballoc expects today. The compiler will not currently typecheck #![allocator] crates to ensure these symbols are defined and have the correct signature.

Also note that to define the above API in a Rust crate it would look something like:

#[no_mangle]
pub extern fn __rust_allocate(size: usize, align: usize) -> *mut u8 {
    /* ... */
}
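For instance, a minimal system-allocator shim might forward directly to malloc and free (a sketch only: it assumes the platform malloc’s guaranteed alignment covers the requested alignment, which a real alloc_system must verify, and it omits the reallocation functions):

extern crate libc;

#[no_mangle]
pub extern fn __rust_allocate(size: usize, _align: usize) -> *mut u8 {
    // Assumption: malloc returns memory sufficiently aligned for `_align`.
    unsafe { libc::malloc(size) as *mut u8 }
}

#[no_mangle]
pub extern fn __rust_deallocate(ptr: *mut u8, _old_size: usize, _align: usize) {
    unsafe { libc::free(ptr as *mut libc::c_void) }
}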

Limitations of #![allocator]

Allocator crates (those tagged with #![allocator]) are not allowed to transitively depend on a crate which is tagged with #![needs_allocator]. This would introduce a circular dependency which is difficult to link and is highly likely to otherwise just lead to infinite recursion.

The compiler will also not immediately verify that crates tagged with #![allocator] do indeed define an appropriate allocation API, and vice versa if a crate defines an allocation API the compiler will not verify that it is tagged with #![allocator]. This means that the only meaning #![allocator] has to the compiler is to signal that the default allocator should not be linked.

Default allocator specifications

Target specifications will be extended with two keys: lib_allocation_crate and exe_allocation_crate, describing the default allocator crate for these two kinds of artifacts for each target. The compiler will by default have all targets redirect to alloc_system for both scenarios, but alloc_jemalloc will be used for binaries on OSX, Bitrig, DragonFly, FreeBSD, Linux, OpenBSD, and GNU Windows. MSVC will notably not use jemalloc by default for binaries (we don’t currently build jemalloc on MSVC).

Injecting an allocator

As described above, the compiler will inject an allocator if necessary into the current compilation. The compiler, however, cannot blindly do so as it can easily lead to link errors (or worse, two allocators), so it will have some heuristics for only injecting an allocator when necessary. The steps taken by the compiler for any particular compilation will be:

  • If no crate in the dependency graph is tagged with #![needs_allocator], then the compiler does not inject an allocator.
  • If only an rlib is being produced, no allocator is injected.
  • If any crate tagged with #![allocator] has been explicitly linked to (e.g. via an extern crate statement directly or transitively) then no allocator is injected.
  • If two allocators have been linked to explicitly an error is generated.
  • If only a binary is being produced, then the target’s exe_allocation_crate value is injected, otherwise the lib_allocation_crate is injected.

The compiler will also record that the injected crate is injected, so later compilations know that rlibs don’t actually require the injected crate at runtime (allowing it to be overridden).

Allocators in practice

Most libraries written in Rust wouldn’t interact with the scheme proposed in this RFC at all as they wouldn’t explicitly link with an allocator and generally are compiled as rlibs. If a Rust dynamic library is used as a dependency, then its original choice of allocator is propagated throughout the crate graph, but this rarely happens (except for the compiler itself, which will continue to use jemalloc).

Authors of crates which are embedded into other runtimes will start using the system allocator by default with no extra annotation needed. If they wish to funnel Rust allocations to the same source as the host application’s allocations then a crate can be written and linked in.

Finally, providers of allocators will simply provide a crate to do so, and then applications and/or libraries can make explicit use of the allocator by depending on it as usual.

Drawbacks

A significant amount of API surface area is being added to the compiler and standard distribution as part of this RFC, but it is possible for it to all enter as #[unstable], so we can take our time stabilizing it and perhaps only stabilize a subset over time.

The limitation of an allocator crate not being able to link to the standard library (or libcollections) may be a somewhat significant hit to the ergonomics of defining an allocator, but allocators are traditionally a very niche class of library and end up defining their own data structures regardless.

Libraries on crates.io may accidentally link to an allocator and not actually use any specific API from it (other than the standard allocation symbols), forcing transitive dependants to silently use that allocator.

This RFC does not specify the ability to swap out the allocator via the command line, which is certainly possible and sometimes more convenient than modifying the source itself.

It’s possible to define an allocator API (e.g. define the symbols) but then forget the #![allocator] annotation, causing the compiler to wind up linking two allocators, which may cause link errors that are difficult to debug.

Alternatives

The compiler’s knowledge about allocators could be simplified quite a bit to the point where a compiler flag is used to just turn injection on/off, and then it’s the responsibility of the application to define the necessary symbols if the flag is turned off. The current implementation of this RFC, however, is not seen as overly invasive, and the benefits of “everything’s just a crate” seem worth the mild amount of complexity in the compiler.

Many of the names (such as alloc_system) have a number of alternatives, and the naming of attributes and functions could perhaps follow a stronger convention.

Unresolved questions

Does this enable jemalloc to be built without a prefix on Linux? This would enable us to direct LLVM allocations to jemalloc, which would be quite nice!

Should BSD-like systems use Rust’s jemalloc by default? Many of them have jemalloc as the system allocator and even the special APIs we use from jemalloc.

Updates since being accepted

Note: this RFC has been superseded by RFC 1974.

Summary

Tweak the #![no_std] attribute, add a new #![no_core] attribute, and pave the way for stabilizing the libcore library.

Motivation

Currently all stable Rust programs must link to the standard library (libstd), and it is impossible to opt out of this. The standard library is not appropriate for use cases such as kernels, embedded development, or various niche cases in userspace. For these applications Rust itself is appropriate, but the compiler does not provide a stable interface for compiling in this mode.

The standard distribution provides a library, libcore, which is “the essence of Rust” as it provides many language features such as iterators, slice methods, string methods, etc. The defining feature of libcore is that it has 0 dependencies, unlike the standard library which depends on many I/O APIs, for example. The purpose of this RFC is to provide a stable method to access libcore.

Applications which do not want to use libstd still want to use libcore 99% of the time, but unfortunately the current #![no_std] attribute does not do a great job in facilitating this. When moving into the realm of not using the standard library, the compiler should make the use case as ergonomic as possible, so this RFC proposes different behavior than today’s #![no_std].

Finally, the standard library defines a number of language items which must be defined when libstd is not used. These language items are:

  • panic_fmt
  • eh_personality
  • stack_exhausted

To be able to usefully leverage #![no_std] in stable Rust these lang items must be available in a stable fashion.

Detailed Design

This RFC proposes a number of changes:

  • Tweak the #![no_std] attribute slightly.
  • Introduce a #![no_core] attribute.
  • Pave the way to stabilize the core module.

no_std

The #![no_std] attribute currently provides two pieces of functionality:

  • The compiler no longer injects extern crate std at the top of a crate.
  • The prelude (use std::prelude::v1::*) is no longer injected at the top of every module.

This RFC proposes adding the following behavior to the #![no_std] attribute:

  • The compiler will inject extern crate core at the top of a crate.
  • The libcore prelude will be injected at the top of every module.

Most uses of #![no_std] already want behavior along these lines as they want to use libcore, just not the standard library.
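Under the proposed behavior, a minimal library crate needs only the attribute (a brief sketch):

#![no_std]

// `extern crate core` and the core prelude are injected automatically,
// so core items such as `Option` are already in scope.
pub fn first_byte(bytes: &[u8]) -> Option<u8> {
    bytes.first().cloned()
}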

no_core

A new attribute will be added to the compiler, #![no_core], which serves two purposes:

  • This attribute implies the #![no_std] attribute (no std prelude/crate injection).
  • This attribute will prevent core prelude/crate injection.

Users of #![no_std] today who do not use libcore would migrate to using this attribute instead of #![no_std].

Stabilization of libcore

This RFC does not yet propose a stabilization path for the contents of libcore, but it proposes preparing to stabilize the name core for libcore, paving the way for the rest of the library to be stabilized. The exact method of stabilizing its contents will be determined with a future RFC or pull requests.

Stabilizing lang items

As mentioned above, there are three separate lang items which are required by the libcore library to link correctly. These items are:

  • panic_fmt
  • stack_exhausted
  • eh_personality

This RFC does not attempt to stabilize these lang items for a number of reasons:

  • The exact set of these lang items is somewhat nebulous and may change over time.
  • The signatures of each of these lang items are either platform-specific or simply “too weird” to stabilize.
  • These items are pretty obscure and it’s not very widely known what they do or how they should be implemented.

Stabilization of these lang items (in any form) will be considered in a future RFC.

Drawbacks

The current distribution provides precisely one library, the standard library, for general consumption of Rust programs. Adding a new one (libcore) is adding more surface area to the distribution (in addition to adding a new #![no_core] attribute). This surface area is greatly desired, however.

When using #![no_std] the experience of Rust programs isn’t always the best, as there are some pitfalls that are easy to run into. For example, macros and plugins sometimes hardcode ::std paths, though most of those in the standard distribution have been updated to use ::core when #![no_std] is present. Another example is that common utilities like vectors, pointers, and owned strings are not available without liballoc, which will remain an unstable library. This means that users of #![no_std] will have to reimplement all of this functionality themselves.

This RFC does not yet pave a way forward for using #![no_std] and producing an executable because the #[start] item is required, but remains feature gated. This RFC just enables creation of Rust static or dynamic libraries which don’t depend on the standard library in addition to Rust libraries (rlibs) which do not depend on the standard library.

In stabilizing the #![no_std] attribute it’s likely that a whole ecosystem of crates will arise which work with #![no_std], but in theory all of these crates should also interoperate with the rest of the ecosystem using std. Unfortunately, however, there are known cases where this is not possible. For example if a macro is exported from a #![no_std] crate which references items from core it won’t work by default with a std library.

Alternatives

Most of the strategies taken in this RFC have some minor variations on what can happen:

  • The #![no_std] attribute could be stabilized as-is without adding a #![no_core] attribute, requiring users to write extern crate core and import the core prelude manually. The burden of adding #![no_core] to the compiler, however, is seen as not-too-bad compared to the increase in ergonomics of using #![no_std].
  • Another stable crate could be provided by the distribution which provides definitions of these lang items which are all wired to abort. This has the downside of selecting a name for this crate, however, and also inflating the crates in our distribution again.

Unresolved Questions

  • How important/common are #![no_std] executables? Should this RFC attempt to stabilize that as well?
  • When a staticlib is emitted, should the compiler guarantee that a #![no_std] one will link by default? This precludes us from ever adding future required language items for features like unwinding or stack exhaustion by default. For example, if a new security feature is added to LLVM and we’d like to enable it by default, it may require that a symbol or two is defined somewhere in the compilation.

Summary

Add a high-level intermediate representation (HIR) to the compiler. This is basically a new (and additional) AST more suited for use by the compiler.

This is purely an implementation detail of the compiler. It has no effect on the language.

Note that adding a HIR does not preclude adding a MIR or LIR in the future.

Motivation

Currently the AST is used by libsyntax for syntactic operations, by the compiler for pretty much everything, and in syntax extensions. I propose splitting the AST into a libsyntax version that is specialised for syntactic operations and will eventually be stabilised for use by syntax extensions and tools, and the HIR, which is entirely internal to the compiler.

The benefit of this split is that each AST can be specialised to its task and we can separate the interface to the compiler (the AST) from its implementation (the HIR). Specific changes I see that could happen are more ids and spans in the AST, the AST adhering more closely to the surface syntax, the HIR becoming more abstract (e.g., combining structs and enums), and using resolved names in the HIR (i.e., performing name resolution as part of the AST->HIR lowering).

Not using the AST in the compiler means we can work to stabilise it for syntax extensions and tools: it will become part of the interface to the compiler.

I also envisage all syntactic expansion of language constructs (e.g., for loops, if let) moving to the lowering step from AST to HIR, rather than being AST manipulations. That should make both error messages and tool support better for such constructs. It would be nice to move lifetime elision to the lowering step too, in order to make the HIR as explicit as possible.

Detailed design

Initially, the HIR will be an (almost) identical copy of the AST and the lowering step will simply be a copy operation. Since some constructs (macros, for loops, etc.) are expanded away in libsyntax, these will not be part of the HIR. Tools such as the AST visitor will need to be duplicated.

The compiler will be changed to use the HIR throughout (this should mostly be a matter of changing the imports). Incrementally, I expect to move expansion of language constructs to the lowering step. Further in the future, the HIR should get more abstract and compact, and the AST should get closer to the surface syntax.

Drawbacks

Potentially slower compilations and higher memory use. However, this should be offset in the long run by making improvements to the compiler easier by having a more appropriate data structure.

Alternatives

Leave things as they are.

Skip the HIR and lower straight to a MIR later in compilation. This has advantages which adding a HIR does not have, however, it is a far more complex refactoring and also misses some benefits of the HIR, notably being able to stabilise the AST for tools and syntax extensions without locking in the compiler.

Unresolved questions

How to deal with spans and source code. We could keep the AST around and reference back to it from the HIR. Or we could copy span information to the HIR (I plan on doing this initially). Possibly some other solution like keeping the span info in a side table (note that we need less span info in the compiler than we do in libsyntax, which is in turn less than tools want).

Summary

Allow a x...y expression to create an inclusive range.

Motivation

There are several use-cases for inclusive ranges that semantically include both end-points. For example, iterating from 0_u8 up to and including some number n can be done via for _ in 0..n + 1 at the moment, but this will fail if n is 255. Furthermore, some iterable things only have a successor operation that is sometimes sensible, e.g., 'a'..'{' is equivalent to the inclusive range 'a'...'z': there’s absolutely no reason that { is after z other than a quirk of the representation.

The ... syntax mirrors the current .. used for exclusive ranges: more dots means more elements.

Detailed design

std::ops defines

pub struct RangeInclusive<T> {
    pub start: T,
    pub end: T,
}

pub struct RangeToInclusive<T> {
    pub end: T,
}

Writing a...b in an expression desugars to std::ops::RangeInclusive { start: a, end: b }. Writing ...b in an expression desugars to std::ops::RangeToInclusive { end: b }.

RangeInclusive implements the standard traits (Clone, Debug etc.), and implements Iterator.
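For example, an inclusive range can iterate the full u8 domain, which no exclusive range over u8 can express (shown here with the ..= spelling mentioned under Alternatives, which stable Rust ultimately adopted; the RFC as written uses ...):

fn sum_all_bytes() -> u32 {
    let mut total = 0u32;
    // `0u8..256` would not type-check, since 256 overflows u8; the
    // inclusive range reaches 255 without widening the element type.
    for b in 0u8..=255 {
        total += b as u32;
    }
    total
}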

The use of ... in a pattern remains as testing for inclusion within that range, not a struct match.

The author cannot foresee problems with breaking backward compatibility. In particular, one tokenisation of syntax like 1... now would be 1. .., i.e., a floating point number on the left; however, fortunately, it is actually tokenised like 1 ..., and is hence an error with the current compiler.

This struct definition is maximally consistent with the existing Range. a..b and a...b are the same size and have the same fields, just with the expected difference in semantics.

The range a...b contains all x where a <= x && x <= b. As such, an inclusive range is non-empty iff a <= b. When the range is iterable, a non-empty range will produce at least one item when iterated. Because T::MAX...T::MAX is a non-empty range, the iteration needs extra handling compared to a half-open Range. As such, .next() on an empty range y...y will produce the value y and adjust the range such that !(start <= end). Providing such a range is not a burden on the T type as any such range is acceptable, and only PartialOrd is required so it can be satisfied with an incomparable value n with !(n <= n). A caller must not, in general, expect any particular start or end after iterating, and is encouraged to detect empty ranges with ExactSizeIterator::is_empty instead of by observing fields directly.

Note that because ranges are not required to be well-formed, they have a much stronger bound than just needing a successor function: they require a “b is reachable from a” predicate (as a <= b). Providing that efficiently for a DAG walk, or even a simpler forward list walk, is a substantially harder thing to do than providing a pair (x, y) such that !(x <= y).

Implementation note: For currently-iterable types, the initial implementation of this will have the range become 1...0 after yielding the final value, as that can be done using the replace_one and replace_zero methods on the existing (but unstable) Step trait. It’s expected, however, that the trait will change to allow more type-appropriate impls. For example, a num::BigInt may rather become empty by incrementing start, as Range does, since it doesn’t need to worry about overflow. Even for primitives, it could be advantageous to choose a different implementation, perhaps using .overflowing_add(1) and swapping on overflow, or a...a could become (a+1)...a where possible and a...(a-1) otherwise.

Drawbacks

There’s a mismatch between pattern-... and expression-..., in that the former doesn’t undergo the same desugaring as the latter. (Although they represent essentially the same thing semantically.)

The ... vs. .. distinction is the exact inversion of Ruby’s syntax.

This proposal makes the post-iteration values of the start and end fields constant, and thus useless. Some of the alternatives would expose the last value returned from the iteration, through a more complex interface.

Alternatives

An alternate syntax could be used, like ..=. There has been discussion, but there wasn’t a clear winner.

This RFC proposes single-ended syntax with only an end, ...b, but not with only a start (a...) or unconstrained .... This balance could be reevaluated for usefulness and conflicts with other proposed syntax.

  • RangeInclusive could be a struct including a finished field. This makes it easier for the struct to always be iterable, as the extra field is set once the ends match. But having the extra field in a language-level desugaring, catering to one library use-case, is a little non-“hygienic”. It is especially strange that the field isn’t consistent across the different ... desugarings. And the presence of the public field encourages checking it, which can be misleading as r.finished == false does not guarantee that r.count() > 0.
  • RangeInclusive could be an enum with Empty and NonEmpty variants. This is cleaner than the finished field, but still has the problem that there’s no invariant maintained: while an Empty range is definitely empty, a NonEmpty range might actually be empty. And requiring matching on every use of the type is less ergonomic. For example, the clamp RFC would naturally use a RangeInclusive parameter, but because it still needs to assert!(start <= end) in the NonEmpty arm, the noise of the Empty vs NonEmpty match provides it no value.
  • a...b only implements IntoIterator, not Iterator, by converting to a different type that does have the field. However, this means that a...b behaves differently to a..b, so (a...b).map(|x| ...) doesn’t work (the .. version of that is used reasonably often, in the author’s experience).
  • The name of the end field could be different, perhaps last, to reflect its different (inclusive) semantics from the end (exclusive) field on the other ranges.

Unresolved questions

None so far.

Amendments

  • In rust-lang/rfcs#1320, this RFC was amended to change the RangeInclusive type from a struct with a finished field to an enum.
  • In rust-lang/rfcs#1980, this RFC was amended to change the RangeInclusive type from an enum to a struct with just start and end fields.

Summary

Add a new flag to the compiler, --cap-lints, which sets the maximum possible lint level for the entire crate (and cannot be overridden). Cargo will then pass --cap-lints allow to all upstream dependencies when compiling code.

Motivation

Note: this RFC represents issue #1029

Currently any modification to a lint in the compiler is strictly speaking a breaking change. All crates are free to place #![deny(warnings)] at the top of their crate, turning any new warnings into compilation errors. This means that if a future version of Rust starts to emit new warnings it may fail to compile some previously written code (a breaking change).

We would very much like to be able to modify lints, however. For example rust-lang/rust#26473 updated the missing_docs lint to also look for missing documentation on const items. This ended up breaking some crates in the ecosystem due to their usage of #![deny(missing_docs)].

The mechanism proposed in this RFC is aimed at providing a method to compile upstream dependencies in a way such that they are resilient to changes in the behavior of the standard lints in the compiler. A new lint warning or error will never represent a memory safety issue (otherwise it’d be a real error) so it should be safe to ignore any new instances of a warning that didn’t show up before.

Detailed design

There are two primary changes proposed by this RFC, the first of which is a new flag to the compiler:

    --cap-lints LEVEL   Set the maximum lint level for this compilation, cannot
                        be overridden by other flags or attributes.

For example, when --cap-lints allow is passed, all instances of #[warn], #[deny], and #[forbid] are ignored. If, however, --cap-lints warn is passed, only deny and forbid directives are ignored.

The acceptable values for LEVEL will be allow, warn, deny, or forbid.

The second change proposed is to have Cargo pass --cap-lints allow to all upstream dependencies. Cargo currently passes -A warnings to all upstream dependencies (allow all warnings by default), so this would just be guaranteeing that no lints could be fired for upstream dependencies.

With these two pieces combined together it is now possible to modify lints in the compiler in a backwards compatible fashion. Modifications to existing lints to emit new warnings will not get triggered, and new lints will also be entirely suppressed only for upstream dependencies.
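To illustrate the effect, consider a dependency compiled by Cargo with --cap-lints allow (a brief sketch):

#![deny(warnings)] // ignored under --cap-lints allow

fn main() {
    // Normally the crate-level deny turns this unused variable into a hard
    // error. With the cap in place the lint cannot fire at all, so a new
    // or modified lint in a future compiler cannot break this build.
    let unused = 3;
}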

Cargo Backwards Compatibility

This flag would be the first non-1.0 flag that Cargo passes to the compiler. This means that Cargo could no longer drive a 1.0 compiler, but only a 1.N+ compiler which has the --cap-lints flag. To handle this discrepancy Cargo will detect whether --cap-lints is a valid flag to the compiler.

Cargo already runs rustc -vV to learn about the compiler (e.g. a “unique string” that’s opaque to Cargo), and it will start passing rustc -vV --cap-lints allow instead. This allows Cargo to simultaneously detect whether the flag is valid and learn the version string. If this command fails and rustc -vV succeeds, then Cargo will fall back to the old behavior of passing -A warnings.

Drawbacks

This RFC adds surface area to the command line of the compiler with a relatively obscure option --cap-lints. The option will almost never be passed by anything other than Cargo, so having it show up here is a little unfortunate.

Some crates may inadvertently rely on memory safety through lints, or otherwise very much not want lints to be turned off. For example if modifications to a new lint to generate more warnings caused an upstream dependency to fail to compile, it could represent a serious bug indicating the dependency needs to be updated. This system would paper over this issue by forcing compilation to succeed. This use case seems relatively rare, however, and lints are also perhaps not the best method to ensure the safety of a crate.

Cargo may one day grow configuration to not pass this flag by default (e.g. go back to passing -Awarnings by default), which is yet again more expansion of API surface area.

Alternatives

  • Modifications to lints or additions to lints could be considered backwards-incompatible changes.
  • The meaning of the -A flag could be reinterpreted as “this cannot be overridden”
  • A new “meta lint” could be introduced to represent the maximum cap, for example -A everything. This is semantically different enough from -A foo that it seems worth having a new flag.

Unresolved questions

None yet.

Summary

Add element-recovery methods to the set types in std.

Motivation

Sets are sometimes used as a cache keyed on a certain property of a type, but programs may need to access the type’s other properties for efficiency or functionality. The sets in std do not expose their elements (by reference or by value), making this use-case impossible.

Consider the following example:

use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// The `Widget` type has two fields that are inseparable.
#[derive(PartialEq, Eq, Hash)]
struct Widget {
    foo: Foo,
    bar: Bar,
}

#[derive(PartialEq, Eq, Hash)]
struct Foo(&'static str);

#[derive(PartialEq, Eq, Hash)]
struct Bar(u32);

// Widgets are normally considered equal if all their corresponding fields are equal, but we would
// also like to maintain a set of widgets keyed only on their `bar` field. To this end, we create a
// new type with custom `{PartialEq, Hash}` impls.
struct MyWidget(Widget);

impl PartialEq for MyWidget {
    fn eq(&self, other: &Self) -> bool { self.0.bar == other.0.bar }
}

impl Eq for MyWidget {}

impl Hash for MyWidget {
    fn hash<H: Hasher>(&self, h: &mut H) { self.0.bar.hash(h); }
}

fn main() {
    // In our program, users are allowed to interactively query the set of widgets according to
    // their `bar` field, as well as insert, replace, and remove widgets.

    let mut widgets = HashSet::new();

    // Add some default widgets.
    widgets.insert(MyWidget(Widget { foo: Foo("iron"), bar: Bar(1) }));
    widgets.insert(MyWidget(Widget { foo: Foo("nickel"), bar: Bar(2) }));
    widgets.insert(MyWidget(Widget { foo: Foo("copper"), bar: Bar(3) }));

    // At this point, the user enters commands and receives output like:
    //
    // ```
    // > get 1
    // Some(iron)
    // > get 4
    // None
    // > remove 2
    // removed nickel
    // > add 2 cobalt
    // added cobalt
    // > add 3 zinc
    // replaced copper with zinc
    // ```
    //
    // However, `HashSet` does not expose its elements via its `{contains, insert, remove}`
    // methods, instead providing only a boolean indicator of the element's presence in the set,
    // preventing us from implementing the desired functionality.
}

Detailed design

Add the following element-recovery methods to std::collections::{BTreeSet, HashSet}:

impl<T> Set<T> {
    // Like `contains`, but returns a reference to the element if the set contains it.
    fn get<Q: ?Sized>(&self, element: &Q) -> Option<&T>;

    // Like `remove`, but returns the element if the set contained it.
    fn take<Q: ?Sized>(&mut self, element: &Q) -> Option<T>;

    // Like `insert`, but replaces the element with the given one and returns the previous element
    // if the set contained it.
    fn replace(&mut self, element: T) -> Option<T>;
}
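With these methods, the motivating example becomes expressible. A brief sketch building on the types above (the probe value is a stand-in that compares equal on bar alone, which works because MyWidget’s PartialEq and Hash impls ignore foo):

fn get_foo_by_bar(widgets: &HashSet<MyWidget>, bar: u32) -> Option<&Foo> {
    // Any `MyWidget` with the right `bar` serves as a lookup key.
    let probe = MyWidget(Widget { foo: Foo(""), bar: Bar(bar) });
    widgets.get(&probe).map(|w| &w.0.foo)
}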

Drawbacks

This complicates the collection APIs.

Alternatives

Do nothing.

Summary

Lay the ground work for building powerful SIMD functionality.

Motivation

SIMD (Single-Instruction Multiple-Data) is an important part of performant modern applications. Most CPUs used for that sort of task provide dedicated hardware and instructions for operating on multiple values in a single instruction, and exposing this is an important part of being a low-level language.

This RFC lays the ground-work for building nice SIMD functionality, but doesn’t fill everything out. The goal here is to provide the raw types and access to the raw instructions on each platform.

(An earlier variant of this RFC was discussed as a pre-RFC.)

Where does this code go? Aka. why not in std?

This RFC is focused on building stable, powerful SIMD functionality in external crates, not std.

Putting this functionality in external crates makes it much easier to support functionality that is only “occasionally” available, via Rust’s preexisting cfg system. There’s no way for std to conditionally provide an API based on the target features used for the final artifact. Building std in every configuration is certainly untenable. Hence, if it were to be in std, there would need to be some highly delayed cfg system to support that sort of conditional API exposure.

With an external crate, we can leverage cargo’s existing build infrastructure: compiling with some target features will rebuild with those features enabled.

Detailed design

The design comes in three parts, all on the path to stabilisation:

  • types (feature(repr_simd))
  • operations (feature(platform_intrinsics))
  • platform detection (feature(cfg_target_feature))

The general idea is to avoid bad performance cliffs, so that an intrinsic call in Rust maps to preferably one CPU instruction, or, if not, the “optimal” sequence required to do the given operation anyway. This means exposing a lot of platform specific details, since platforms behave very differently: both across architecture families (x86, x86-64, ARM, MIPS, …), and even within a family (x86-64’s Skylake, Haswell, Nehalem, …).

There is definitely a common core of SIMD functionality shared across many platforms, but this RFC doesn’t try to extract that, it is just building tools that can be wrapped into a more uniform API later.

Types

There is a new attribute: repr(simd).

#[repr(simd)]
struct f32x4(f32, f32, f32, f32);

#[repr(simd)]
struct Simd2<T>(T, T);

The simd repr can be attached to a struct and will cause such a struct to be compiled to a SIMD vector. It can be generic, but it is required that any fully monomorphised instance of the type consist of only a single “primitive” type, repeated some number of times.

repr(simd) may not enforce that trait bounds exist or do the right thing at the type-checking level for generic repr(simd) types. As such, it will be possible to get the code-generator to error out (à la the old transmute size errors); however, this shouldn’t cause problems in practice: libraries wrapping this functionality would layer type-safety on top (i.e. generic repr(simd) types would use some unsafe trait as a bound that is designed to only be implemented by types that will work).

Adding repr(simd) to a type may increase its minimum/preferred alignment, based on platform behaviour. (E.g. x86 wants its 128-bit SSE vectors to be 128-bit aligned.)
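The type-safety layering just described might look like the following (a sketch under the proposed feature gate; SimdElement is a hypothetical library trait, not part of this RFC):

#![feature(repr_simd)]

// An unsafe marker trait that the wrapping library implements only for
// primitives that are valid SIMD lane types.
unsafe trait SimdElement: Copy {}
unsafe impl SimdElement for f32 {}
unsafe impl SimdElement for i32 {}

// Monomorphised instances are then guaranteed to consist of one primitive
// type repeated four times, as `repr(simd)` requires.
#[repr(simd)]
struct Simd4<T: SimdElement>(T, T, T, T);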

Operations

CPU vendors usually offer “standard” C headers for their CPU specific operations, such as arm_neon.h and the ...mmintrin.h headers for x86(-64).

All of these would be exposed as compiler intrinsics with names very similar to those that the vendor suggests (the only difference being some form of manual namespacing, e.g. prefixing with the CPU target), loadable via an extern block with an appropriate ABI. This subset of intrinsics would be on the path to stabilisation (that is, one can “import” them with extern in stable code), and would not be exported by std.

Example:

extern "platform-intrinsic" {
    fn x86_mm_abs_epi16(a: Simd8<i16>) -> Simd8<i16>;
    // ...
}

These all use entirely concrete types, and this is the core interface to these intrinsics: essentially it is just allowing code to exactly specify a CPU instruction to use. These intrinsics only actually work on a subset of the CPUs that Rust targets, and will result in compile time errors if they are called on platforms that do not support them. The signatures are typechecked, but in a “duck-typed” manner: it will just ensure that the types are SIMD vectors with the appropriate length and element type, it will not enforce a specific nominal type.

NB. The structural typing is just for the declaration: if a SIMD intrinsic is declared to take a type X, it must always be called with X, even if other types are structurally equal to X. Also, within a signature, SIMD types that must be structurally equal must also be nominally equal. I.e., supposing the add_... declarations below all refer to the same intrinsic for adding a SIMD vector of bytes:

// (same length)
struct A(u8, u8, ..., u8);
struct B(u8, u8, ..., u8);

extern "platform-intrinsic" {
    fn add_aaa(x: A, y: A) -> A; // ok
    fn add_bbb(x: B, y: B) -> B; // ok
    fn add_aab(x: A, y: A) -> B; // error, expected B, found A
    fn add_bab(x: B, y: A) -> B; // error, expected A, found B
}

fn double_a(x: A) -> A {
    add_aaa(x, x)
}
fn double_b(x: B) -> B {
    add_aaa(x, x) // error, expected A, found B
}

There would additionally be a small set of cross-platform operations that are either generally efficiently supported everywhere or are extremely useful. These won’t necessarily map to a single instruction, but will be shimmed as efficiently as possible.

  • shuffles and extracting/inserting elements
  • comparisons
  • arithmetic
  • conversions

All of these intrinsics are imported via an extern directive similar to the process for pre-existing intrinsics like transmute; however, the SIMD operations are provided under a special ABI: platform-intrinsic. Use of this ABI (and hence the intrinsics) is initially feature-gated under the platform_intrinsics feature name. Why platform-intrinsic rather than, say, simd-intrinsic? There are non-SIMD platform-specific instructions that may be nice to expose (for example, Intel defines an _addcarry_u32 intrinsic corresponding to the ADC instruction).

Shuffles & element operations

One of the most powerful features of SIMD is the ability to rearrange data within vectors, sometimes giving super-linear speed-ups. As such, shuffles are exposed generally, via intrinsics that represent arbitrary shuffles.

This may violate the “one instruction per intrinsic” principle depending on the shuffle, but rearranging SIMD vectors is extremely useful, and providing a direct intrinsic lets the compiler (a) do the programmer’s work of synthesising the optimal (short) sequence of instructions for a given shuffle and (b) track data through shuffles without having to understand all the details of every platform-specific shuffle intrinsic.

extern "platform-intrinsic" {
    fn simd_shuffle2<T, U>(v: T, w: T, idx: [i32; 2]) -> U;
    fn simd_shuffle4<T, U>(v: T, w: T, idx: [i32; 4]) -> U;
    fn simd_shuffle8<T, U>(v: T, w: T, idx: [i32; 8]) -> U;
    fn simd_shuffle16<T, U>(v: T, w: T, idx: [i32; 16]) -> U;
    // ...
}

The raw definitions are only checked for validity at monomorphisation time, ensuring that T and U are SIMD vectors with the same element type, that U has the appropriate length, etc. Libraries can use traits to ensure that these constraints are enforced by the type checker too.

This approach has the same type-“safety”/code-generation error behaviour as the vector types themselves.

These operations are semantically:

// vector of double length
let z = concat(v, w);

return [z[idx[0]], z[idx[1]], z[idx[2]], ...]

The index array idx has to be a compile-time constant. Out-of-bounds indices yield errors.
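
As a concrete illustration (a sketch reusing the f32x4 type from earlier), reversing the lanes of a four-element vector:

extern "platform-intrinsic" {
    fn simd_shuffle4<T, U>(v: T, w: T, idx: [i32; 4]) -> U;
}

fn reverse(v: f32x4) -> f32x4 {
    // Indices 0-3 select lanes of the first argument, 4-7 lanes of the
    // second; [3, 2, 1, 0] therefore reverses `v`.
    unsafe { simd_shuffle4(v, v, [3, 2, 1, 0]) }
}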

Similarly, intrinsics for inserting/extracting elements into/out of vectors are provided, to allow modelling the SIMD vectors as actual CPU registers as much as possible:

extern "platform-intrinsic" {
    fn simd_insert<T, Elem>(v: T, i0: u32, elem: Elem) -> T;
    fn simd_extract<T, Elem>(v: T, i0: u32) -> Elem;
}

The i0 indices do not have to be constant. These are equivalent to v[i0] = elem and v[i0] respectively. They are type checked similarly to the shuffles.
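
For example, using the declarations above (a sketch, again with the f32x4 type from earlier):

fn set_first_lane(v: f32x4, x: f32) -> f32x4 {
    // Semantically `v[0] = x`, returning the updated vector.
    unsafe { simd_insert(v, 0, x) }
}

fn first_lane(v: f32x4) -> f32 {
    // Semantically `v[0]`.
    unsafe { simd_extract(v, 0) }
}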

Comparisons

Comparisons are implemented via intrinsics. The raw signatures would look like:

extern "platform-intrinsic" {
    fn simd_eq<T, U>(v: T, w: T) -> U;
    fn simd_ne<T, U>(v: T, w: T) -> U;
    fn simd_lt<T, U>(v: T, w: T) -> U;
    fn simd_le<T, U>(v: T, w: T) -> U;
    fn simd_gt<T, U>(v: T, w: T) -> U;
    fn simd_ge<T, U>(v: T, w: T) -> U;
}

These are type checked during code-generation similarly to the shuffles: ensuring that T and U have the same length, and that U is appropriately “boolean”-y. Libraries can use traits to ensure that these will be enforced by the type checker too.
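
For example, an element-wise comparison of two f32x4 vectors might produce a vector of integer “booleans” (a sketch; i32x4 is assumed to be defined analogously to f32x4):

fn less_than(a: f32x4, b: f32x4) -> i32x4 {
    // Each lane of the result is all-ones where a < b holds for that
    // lane, and all-zeros otherwise.
    unsafe { simd_lt(a, b) }
}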

Arithmetic

Intrinsics will be provided for arithmetic operations like addition and multiplication.

extern "platform-intrinsic" {
    fn simd_add<T>(x: T, y: T) -> T;
    fn simd_mul<T>(x: T, y: T) -> T;
    // ...
}

These will have codegen-time checks that the element type is correct:

  • add, sub, mul: any float or integer type
  • div: any float type
  • and, or, xor, shl (shift left), shr (shift right): any integer type

(The integer types are i8, …, i64, u8, …, u64 and the float types are f32 and f64.)
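
A wrapping library might then expose these intrinsics through the standard operator traits, using the declaration above, e.g. (a sketch):

use std::ops::Add;

impl Add for f32x4 {
    type Output = f32x4;
    fn add(self, other: f32x4) -> f32x4 {
        // Should compile down to a single vector-add instruction on
        // targets that support one.
        unsafe { simd_add(self, other) }
    }
}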

Why not inline asm?

One alternative to providing intrinsics is to instead just use inline-asm to expose each CPU instruction. However, this approach has essentially only one benefit (avoiding defining the intrinsics), but several downsides, e.g.

  • assembly is generally a black-box to optimisers, inhibiting optimisations, like algebraic simplification/transformation,
  • programmers would have to manually synthesise the right sequence of operations to achieve a given shuffle, while having a generic shuffle intrinsic lets the compiler do it (NB. the intention is that the programmer will still have access to the platform specific operations for when the compiler synthesis isn’t quite right),
  • inline assembly is not currently stable in Rust and there’s not a strong push for it to be so in the immediate future (although this could change).

Benefits of manual assembly writing, like instruction scheduling and register allocation, don’t apply to the (generally) one-instruction asm! blocks that would replace the intrinsics (they would need to be designed so that the compiler has full control over register allocation, or else the result would be strictly worse). Those possible advantages of hand-written assembly over intrinsics only come into play when writing longer blocks of raw assembly, i.e. some inner loop might be faster when written as a single chunk of asm rather than as intrinsics.

Platform Detection

The availability of efficient SIMD functionality is very fine-grained, and our current cfg(target_arch = "...") is not precise enough. This RFC proposes a target_feature cfg, which would be set to the features of the architecture known to be supported by the exact target, e.g.

  • a default x86-64 compilation would essentially only set target_feature = "sse" and target_feature = "sse2"
  • compiling with -C target-feature="+sse4.2" would set target_feature = "sse4.2", target_feature = "sse4.1", …, target_feature = "sse".
  • compiling with -C target-cpu=native on a modern CPU might set target_feature = "avx2", target_feature = "avx", …

The possible values of target_feature will be a selected whitelist, not necessarily just everything LLVM understands. (There are other non-SIMD features that might have target_feature set too, such as popcnt and rdrnd on x86/x86-64.)

With a cfg_if_else! macro that expands to the first cfg that is satisfied (à la @alexcrichton’s cfg-if), code might look like:

cfg_if_else! {
    if #[cfg(target_feature = "avx")] {
        fn foo() { /* use AVX things */ }
    } else if #[cfg(target_feature = "sse4.1")] {
        fn foo() { /* use SSE4.1 things */ }
    } else if #[cfg(target_feature = "sse2")] {
        fn foo() { /* use SSE2 things */ }
    } else if #[cfg(target_feature = "neon")] {
        fn foo() { /* use NEON things */ }
    } else {
        fn foo() { /* universal fallback */ }
    }
}

Extensions

  • scatter/gather operations allow (partially) operating on a SIMD vector of pointers. This would require allowing pointers(/references?) in repr(simd) types.

  • allow (and ignore for everything but type checking) zero-sized types in repr(simd) structs, to allow tagging them with markers

  • the shuffle intrinsics could be made more relaxed in their type checking (i.e. not require that they return their second type parameter), to allow more type safety when combined with generic simd types:

    #[repr(simd)] struct Simd2<T>(T, T);
    extern "platform-intrinsic" {
        fn simd_shuffle2<T, U>(x: T, y: T, idx: [u32; 2]) -> Simd2<U>;
    }
    

    This should be a backwards-compatible generalisation.

Alternatives

  • Intrinsics could instead be namespaced by ABI, extern "x86-intrinsic", extern "arm-intrinsic".

  • There could be more syntactic support for shuffles, either with true syntax, or with a syntax extension. The latter might look like: shuffle![x, y, i0, i1, i2, i3, i4, ...]. However, this requires that shuffles are restricted to a single type only (i.e. Simd4<T> can be shuffled to Simd4<T> but nothing else), or some sort of type synthesis. The compiler has to somehow work out the return value:

    let x: Simd4<u32> = ...;
    let y: Simd4<u32> = ...;
    
    // reverse all the elements.
    let z = shuffle![x, y, 7, 6, 5, 4, 3, 2, 1, 0];

    Presumably z should be Simd8<u32>, but it’s not obvious how the compiler can know this. The repr(simd) approach means there may be more than one SIMD-vector type with the Simd8<u32> shape (or, in fact, there may be zero).

  • With type-level integers, there could be one shuffle intrinsic:

    fn simd_shuffle<T, U, const N: usize>(x: T, y: T, idx: [u32; N]) -> U;

    NB. It is possible to add this as an additional intrinsic (possibly deprecating the simd_shuffleNNN forms) later.

  • Type-level values can be applied more generally: since the shuffle indices have to be compile-time constants, the shuffle could be

    fn simd_shuffle<T, U, const N: usize, const IDX: [u32; N]>(x: T, y: T) -> U;
    
  • Instead of platform detection, there could be feature detection (e.g. “platform supports something equivalent to x86’s DPPS”), but there probably aren’t enough cross-platform commonalities for this to be worth it. (Each “feature” would essentially be a platform specific cfg anyway.)

  • Check vector operators in debug mode just like the scalar versions.

  • Make fixed-length arrays repr(simd)-able (via just flattening), so that, say, #[repr(simd)] struct u32x4([u32; 4]); and #[repr(simd)] struct f64x8([f64; 4], [f64; 4]); etc. work. This will be most useful if/when we allow generic lengths, #[repr(simd)] struct Simd<T, n>([T; n]);

  • have 100% guaranteed type-safety for generic #[repr(simd)] types and the generic intrinsics. This would probably require a relatively complicated set of traits (with compiler integration).

Unresolved questions

  • Should integer vectors get division automatically? Most CPUs don’t support integer division for vectors.
  • How should out-of-bounds shuffle and insert/extract indices be handled?

Summary

Add a new subcommand to Cargo, install, which will install [[bin]]-based packages onto the local system in a Cargo-specific directory.

Motivation

There has almost always been a desire to be able to install Cargo packages locally, but it’s been somewhat unclear over time what the precise meaning of this is. Now that we have crates.io and lots of experience with Cargo, however, the niche that cargo install would fill is much clearer.

Fundamentally, however, Cargo is a ubiquitous tool among the Rust community and implementing cargo install would facilitate sharing Rust code among its developers. Simple tasks like installing a new cargo subcommand, installing an editor plugin, etc, would be just a cargo install away. Cargo can manage dependencies and versions itself to make the process as seamless as possible.

Put another way, enabling easily sharing code is one of Cargo’s fundamental design goals, and expanding into binaries is simply an extension of Cargo’s core functionality.

Detailed design

The following new subcommand will be added to Cargo:

Install a crate onto the local system

Installing new crates:
    cargo install [options]
    cargo install [options] [-p CRATE | --package CRATE] [--vers VERS]
    cargo install [options] --git URL [--branch BRANCH | --tag TAG | --rev SHA]
    cargo install [options] --path PATH

Managing installed crates:
    cargo install [options] --list

Options:
    -h, --help              Print this message
    -j N, --jobs N          The number of jobs to run in parallel
    --features FEATURES     Space-separated list of features to activate
    --no-default-features   Do not build the `default` feature
    --debug                 Build in debug mode instead of release mode
    --bin NAME              Only install the binary NAME
    --example EXAMPLE       Install the example EXAMPLE instead of binaries
    -p, --package CRATE     Install this crate from crates.io or select the
                            package in a repository/path to install.
    -v, --verbose           Use verbose output
    --root                  Directory to install packages into

This command manages Cargo's local set of installed binary crates. Only packages
which have [[bin]] targets can be installed, and all binaries are installed into
`$HOME/.cargo/bin` by default (or `$CARGO_HOME/bin` if you change the home
directory).

There are multiple methods of installing a new crate onto the system. The
`cargo install` command with no arguments will install the current crate (as
specified by the current directory). Otherwise the `-p`, `--package`, `--git`,
and `--path` options all specify the source from which a crate is being
installed. The `-p` and `--package` options will download crates from crates.io.

Crates from crates.io can optionally specify the version they wish to install
via the `--vers` flags, and similarly packages from git repositories can
optionally specify the branch, tag, or revision that should be installed. If a
crate has multiple binaries, the `--bin` argument can selectively install only
one of them, and if you'd rather install examples the `--example` argument can
be used as well.

The `--list` option will list all installed packages (and their versions).

Installing Crates

Cargo attempts to be as flexible as possible in terms of installing crates from various locations and specifying what should be installed. All binaries will be stored in a cargo-local directory, and more details on where exactly this is located can be found below.

Cargo will not attempt to install binaries or crates into system directories (e.g. /usr) as that responsibility is intended for system package managers.

To use installed crates, one just needs to add the binary directory to the PATH environment variable. This will be recommended when cargo install is run if PATH does not already look like it’s configured.

Crate Sources

The cargo install command will be able to install crates from any source that Cargo already understands. For example, it will start off being able to install from crates.io, git repositories, and local paths. As with normal dependencies, downloads from crates.io can specify a version, and git repositories can specify branches, tags, or revisions.

Sources with multiple crates

Sources like git repositories and paths can have multiple crates inside them, and Cargo needs a way to figure out which one is being installed. If there is more than one crate in a repo (or path), then Cargo will apply the following heuristics to select a crate, in order:

  1. If the -p argument is specified, use that crate.
  2. If only one crate has binaries, use that crate.
  3. If only one crate has examples, use that crate.
  4. Print an error suggesting the -p flag.

Multiple binaries in a crate

Once a crate has been selected, Cargo will by default build all binaries and install them. This behavior can be modified with the --bin or --example flags to configure what’s installed on the local system.

Building a Binary

The cargo install command has some standard build options found on cargo build and friends, but a key difference is that --release is the default for installed binaries so a --debug flag is present to switch this back to debug-mode. Otherwise the --features flag can be specified to activate various features of the crate being installed.

The --target option is omitted as cargo install is not intended for creating cross-compiled binaries to ship to other platforms.

Conflicting Crates

Cargo will not namespace the installation directory for crates, so conflicts may arise among binary names. For example, if crates A and B both provide a binary called foo, they cannot both be installed at once. Cargo will reject these situations and recommend that a binary be selected via --bin or that the conflicting crate be uninstalled.

Placing output artifacts

The cargo install command can be customized to place its output artifacts in a custom location. The root directory of the installation will be determined in a hierarchical fashion, choosing the first of the following that is specified:

  1. The --root argument on the command line.
  2. The environment variable CARGO_INSTALL_ROOT.
  3. The install.root configuration option.
  4. The value of $CARGO_HOME (also determined in an independent and hierarchical fashion).

Once the root directory is found, Cargo will place all binaries in the $INSTALL_ROOT/bin folder. Cargo will also reserve the right to retain some metadata in this folder in order to keep track of what’s installed and what binaries belong to which package.
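
For example, each of the following hypothetical invocations would install foo’s binaries under /custom/bin:

    cargo install --root /custom -p foo
    CARGO_INSTALL_ROOT=/custom cargo install -p foo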

Managing Installations

If Cargo gives access to installing packages, it should surely provide the ability to manage what’s installed! The first part of this is just discovering what’s installed, and this is provided via cargo install --list.

Removing Crates

To remove an installed crate, another subcommand will be added to Cargo:

Remove a locally installed crate

Usage:
    cargo uninstall [options] SPEC

Options:
    -h, --help              Print this message
    --bin NAME              Only uninstall the binary NAME
    --example EXAMPLE       Only uninstall the example EXAMPLE
    -v, --verbose           Use verbose output

The argument SPEC is a package id specification (see `cargo help pkgid`) to
specify which crate should be uninstalled. By default all binaries are
uninstalled for a crate but the `--bin` and `--example` flags can be used to
only uninstall particular binaries.

Cargo won’t remove the source for uninstalled crates, just the binaries that were installed by Cargo itself.

Non-binary artifacts

Cargo will not currently attempt to manage anything other than a binary artifact of cargo build. For example, the following items will not be available to installed crates:

  • Dynamic native libraries built as part of cargo build.
  • Native assets such as images not included in the binary itself.
  • The source code is not guaranteed to exist, and the binary doesn’t know where the source code is.

Additionally, Cargo will not immediately provide the ability to configure the installation stage of a package. There is often a desire for a “pre-install script” which runs various house-cleaning tasks. This is left as a future extension to Cargo.

Drawbacks

Beyond the standard “this is more surface area” and “this may want to aggressively include more features initially” concerns there are no known drawbacks at this time.

Alternatives

System Package Managers

The primary alternative to putting effort behind cargo install is to instead put effort behind system-specific package managers. For example the line between a system package manager and cargo install is a little blurry, and the “official” way to distribute a package should in theory be through a system package manager. This also has the upside of benefiting those outside the Rust community as you don’t have to have Cargo installed to manage a program. This approach is not without its downsides, however:

  • There are many system package managers, and it’s unclear how much effort it would be for Cargo to support building packages for all of them.
  • Actually preparing a package for being packaged in a system package manager can be quite onerous and is often associated with a high amount of overhead.
  • Even once a system package is created, it must be added to an online repository in one form or another which is often different for each distribution.

All in all, even if Cargo invested effort in facilitating creation of system packages, the threshold for distributing a Rust program would still be too high. Even if everything went according to plan, distributing packages only through a system package manager is inherently complex because of how diverse the various requirements are. The cargo install command provides a cross-platform, easy-to-use, if Rust-specific, interface for installing binaries.

It is expected that all major Rust projects will still invest effort into distribution through standard package managers, and Cargo will certainly have room to help out with this, but it doesn’t obsolete the need for cargo install.

Installing Libraries

Another possibility for cargo install is to not only be able to install binaries, but also libraries. The meaning of this, however, is pretty nebulous, and it’s not clear that it’s worthwhile. For example, ordinary Cargo builds would not have access to these libraries (as Cargo retains control over dependencies). It may mean that normal invocations of rustc have access to these libraries (e.g. for small one-off scripts), but it’s not clear that this is worthwhile enough to support installing libraries yet.

Another possible interpretation of installing libraries is that a developer is informing Cargo that the library should be available in a pre-compiled form. If any compile ends up using the library, then it can use the precompiled form instead of recompiling it. This job, however, seems best left to cargo build as it will automatically handle when the compiler version changes, for example. It may also be more appropriate to add the caching layer at the cargo build layer instead of cargo install.

Unresolved questions

None yet

This RFC was previously approved, but later withdrawn

In short this RFC was superseded by RFC 2972. For details see the summary comment.

Summary

Add support for generating naked (prologue/epilogue-free) functions via a new function attribute.

Motivation

Some systems programming tasks require that the programmer have complete control over function stack layout and interpretation, generally in cases where the compiler lacks support for a specific use case. While these cases can be addressed by building the requisite code with external tools and linking with Rust, it is advantageous to allow the Rust compiler to drive the entire process, particularly in that code may be generated via monomorphization or macro expansion.

When writing interrupt handlers for example, most systems require additional state be saved beyond the usual ABI requirements. To avoid corrupting program state, the interrupt handler must save the registers which might be modified before handing control to compiler-generated code. Consider a contrived interrupt handler for x86_64:

unsafe fn isr_nop() {
    asm!("push %rax"
         /* Additional pushes elided */ :::: "volatile");
    let n = 0u64;
    asm!("pop %rax"
         /* Additional pops elided */ :::: "volatile");
}

The generated assembly for this function might resemble the following (simplified for readability):

isr_nop:
    sub $8, %rsp
    push %rax
    movq $0, 0(%rsp)
    pop %rax
    add $8, %rsp
    retq

Here the programmer’s need to save machine state conflicts with the compiler’s assumption that it has complete control over stack layout, with the result that the saved value of rax is clobbered by the compiler. Given that details of stack layout for any given function are not predictable (and may change with compiler version or optimization settings), attempting to predict the stack layout to sidestep this issue is infeasible.

When interacting with FFIs that are not natively supported by the compiler, a similar situation arises where the programmer knows the expected calling convention and can implement a translation between the foreign ABI and one supported by the compiler.

Support for naked functions also allows programmers to write functions that would otherwise be unsafe, such as the following snippet, which returns the address of its caller when called with the C ABI on x86.

    mov 4(%ebp), %eax
    ret

Because the compiler depends on a function prologue and epilogue to maintain storage for local variable bindings, it is generally unsafe to write anything but inline assembly inside a naked function. The LLVM language reference describes this feature as having “very system-specific consequences”, which the programmer must be aware of.

Detailed design

Add a new function attribute to the language, #[naked], indicating the function should have prologue/epilogue emission disabled.

Because the calling convention of a naked function is not guaranteed to match any calling convention the compiler is compatible with, calls to naked functions from within Rust code are forbidden unless the function is also declared with a well-defined ABI.

Defining a naked function with the default (Rust) ABI is an error, because the Rust ABI is unspecified and the programmer can never write a function that is guaranteed to be compatible. For example, the declaration of foo in the following code block is an error.

#[naked]
unsafe fn foo() { }

The following variant is not an error because the C calling convention is well-defined and it is thus possible for the programmer to write a conforming function:

#[naked]
extern "C" fn foo() { }

Because the compiler cannot verify the correctness of code written in a naked function (since it may have an unknown calling convention), naked functions must either be declared unsafe or have every statement in the body inside an unsafe block. The function error in the following code block is a compile-time error, whereas the functions correct1 and correct2 are permitted.

#[naked]
extern "C" fn error(x: &mut u8) {
    *x += 1;
}

#[naked]
unsafe extern "C" fn correct1(x: &mut u8) {
    *x += 1;
}

#[naked]
extern "C" fn correct2(x: &mut u8) {
    unsafe {
        *x += 1;
    }
}

Example

The following example illustrates the possible use of a naked function for implementation of an interrupt service routine on 32-bit x86.

use std::intrinsics;
use std::sync::atomic::{AtomicUsize, Ordering, ATOMIC_USIZE_INIT};

#[naked]
#[cfg(target_arch="x86")]
unsafe extern "C" fn isr_3() {
    asm!("pushad
          call increment_breakpoint_count
          popad
          iretd" :::: "volatile");
    intrinsics::unreachable();
}

static bp_count: AtomicUsize = ATOMIC_USIZE_INIT;

#[no_mangle]
pub fn increment_breakpoint_count() {
    bp_count.fetch_add(1, Ordering::Relaxed);
}

fn register_isr(vector: u8, handler: unsafe extern "C" fn() -> ()) { /* ... */ }

fn main() {
    register_isr(3, isr_3);
    // ...
}

Implementation Considerations

The current support for extern functions in rustc generates a minimum of two basic blocks for any function declared in Rust code with a non-default calling convention: a trampoline which translates the declared calling convention to the Rust convention, and a Rust ABI version of the function containing the actual implementation. Calls to the function from Rust code call the Rust ABI version directly.

For naked functions, it is impossible for the compiler to generate a Rust ABI version of the function because the implementation may depend on the calling convention. In cases where calling a naked function from Rust is permitted, the compiler must be able to use the target calling convention directly rather than call the same function with the Rust convention.

Drawbacks

This feature is of extremely limited utility to most users, and it might be misused if the implications of writing a naked function are not carefully considered.

Alternatives

Do nothing. The required functionality for the use case outlined can be implemented outside Rust code and linked in as needed. Support for additional calling conventions could be added to the compiler as needed, or emulated with external libraries such as libffi.

Unresolved questions

It is easy to quietly generate wrong code in naked functions, such as by causing the compiler to allocate stack space for temporaries where none were anticipated. There is currently no restriction on writing Rust statements inside a naked function, while most compilers supporting similar features either require or strongly recommend that authors write only inline assembly inside naked functions to ensure no code is generated that assumes a particular stack layout. It may be desirable to place further restrictions on what statements are permitted in the body of a naked function, such as permitting only asm! statements.

The unsafe requirement on naked functions may not be desirable in all cases. However, relaxing that requirement in the future would not be a breaking change.

Because a naked function may use a calling convention unknown to the compiler, it may be useful to add an “unknown” calling convention to the compiler which is illegal to call directly. Absent this feature, functions implementing an unknown ABI would need to be declared with a calling convention which is known to be incorrect, and depend on the programmer to avoid calling such a function incorrectly, since it cannot be prevented statically.

Summary

This RFC proposes a design for specialization, which permits multiple impl blocks to apply to the same type/trait, so long as one of the blocks is clearly “more specific” than the other. The more specific impl block is used in a case of overlap. The design proposed here also supports refining default trait implementations based on specifics about the types involved.

Altogether, this relatively small extension to the trait system yields benefits for performance and code reuse, and it lays the groundwork for an “efficient inheritance” scheme that is largely based on the trait system (described in a forthcoming companion RFC).

Motivation

Specialization brings benefits along several different axes:

  • Performance: specialization expands the scope of “zero cost abstraction”, because specialized impls can provide custom high-performance code for particular, concrete cases of an abstraction.

  • Reuse: the design proposed here also supports refining default (but incomplete) implementations of a trait, given details about the types involved.

  • Groundwork: the design lays the groundwork for supporting “efficient inheritance” through the trait system.

The following subsections dive into each of these motivations in more detail.

Performance

The simplest and most longstanding motivation for specialization is performance.

To take a very simple example, suppose we add a trait for overloading the += operator:

trait AddAssign<Rhs=Self> {
    fn add_assign(&mut self, rhs: Rhs);
}

It’s tempting to provide an impl for any type that you can both Clone and Add:

impl<R, T: Add<R> + Clone> AddAssign<R> for T {
    fn add_assign(&mut self, rhs: R) {
        let tmp = self.clone() + rhs;
        *self = tmp;
    }
}

This impl is especially nice because it means that you frequently don’t have to bound separately by Add and AddAssign; often Add is enough to give you both operators.

However, in today’s Rust, such an impl would rule out any more specialized implementation that, for example, avoids the call to clone. That means there’s a tension between simple abstractions and code reuse on the one hand, and performance on the other. Specialization resolves this tension by allowing both the blanket impl, and more specific ones, to coexist, using the specialized ones whenever possible (and thereby guaranteeing maximal performance).
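
Concretely, under this RFC the blanket impl would mark its method default, allowing a clone-free impl to coexist with it (a sketch; BigInt is a hypothetical type):

impl<R, T: Add<R> + Clone> AddAssign<R> for T {
    // `default` permits the more specific impl below to override this.
    default fn add_assign(&mut self, rhs: R) {
        let tmp = self.clone() + rhs;
        *self = tmp;
    }
}

// Specialized: add in place, without cloning.
impl AddAssign<BigInt> for BigInt {
    fn add_assign(&mut self, rhs: BigInt) {
        // ... in-place addition ...
    }
}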

More broadly, traits today can provide static dispatch in Rust, but they can still impose an abstraction tax. For example, consider the Extend trait:

pub trait Extend<A> {
    fn extend<T>(&mut self, iterable: T) where T: IntoIterator<Item=A>;
}

Collections that implement the trait are able to insert data from arbitrary iterators. Today, that means that the implementation can assume nothing about the argument iterable that it’s given except that it can be transformed into an iterator. That means the code must work by repeatedly calling next and inserting elements one at a time.

But in specific cases, like extending a vector with a slice, a much more efficient implementation is possible – and the optimizer isn’t always capable of producing it automatically. In such cases, specialization can be used to get the best of both worlds: retaining the abstraction of extend while providing custom code for specific cases.

The design in this RFC relies on multiple, overlapping trait impls, so to take advantage for Extend we need to refactor a bit:

pub trait Extend<A, T: IntoIterator<Item=A>> {
    fn extend(&mut self, iterable: T);
}

// The generic implementation
impl<A, T> Extend<A, T> for Vec<A> where T: IntoIterator<Item=A> {
    // the `default` qualifier allows this method to be specialized below
    default fn extend(&mut self, iterable: T) {
        ... // implementation using push (like today's extend)
    }
}

// A specialized implementation for slices
impl<'a, A> Extend<A, &'a [A]> for Vec<A> {
    fn extend(&mut self, iterable: &'a [A]) {
        ... // implementation using ptr::write (like push_all)
    }
}

Other kinds of specialization are possible, including using marker traits like:

unsafe trait TrustedSizeHint {}

that can allow the optimization to apply to a broader set of types than slices, but are still more specific than T: IntoIterator.
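
Such a marker might be used like the following sketch, which sits between the fully generic impl and the slice impl in specificity:

// More specific than the generic impl (it adds a bound), but applicable
// to more types than the slice impl: any iterator whose size hint can
// be trusted lets the vector reserve capacity exactly once up front.
impl<A, T> Extend<A, T> for Vec<A>
    where T: IntoIterator<Item=A> + TrustedSizeHint
{
    default fn extend(&mut self, iterable: T) {
        // ... reserve capacity from the size hint, then write elements ...
    }
}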

Reuse

Today’s default methods in traits are pretty limited: they can assume only the where clauses provided by the trait itself, and there is no way to provide conditional or refined defaults that rely on more specific type information.

For example, consider a different design for overloading + and +=, such that they are always overloaded together:

trait Add<Rhs=Self> {
    type Output;
    fn add(self, rhs: Rhs) -> Self::Output;
    fn add_assign(&mut self, rhs: Rhs);
}

In this case, there’s no natural way to provide a default implementation of add_assign, since we do not want to restrict the Add trait to Clone data.

The specialization design in this RFC also allows for default impls, which can provide specialized defaults without actually providing a full trait implementation:

// the `default` qualifier here means (1) not all items are implied
// and (2) those that are can be further specialized
default impl<T: Clone, Rhs> Add<Rhs> for T {
    fn add_assign(&mut self, rhs: Rhs) {
        let tmp = self.clone() + rhs;
        *self = tmp;
    }
}

This default impl does not mean that Add is implemented for all Clone data, but just that when you do impl Add and Self: Clone, you can leave off add_assign:

#[derive(Copy, Clone)]
struct Complex {
    // ...
}

impl Add<Complex> for Complex {
    type Output = Complex;
    fn add(self, rhs: Complex) -> Complex {
        // ...
    }
    // no fn add_assign necessary
}

A particularly nice case of refined defaults comes from trait hierarchies: you can sometimes use methods from subtraits to improve default supertrait methods. For example, consider the relationship between size_hint and ExactSizeIterator:

default impl<T> Iterator for T where T: ExactSizeIterator {
    fn size_hint(&self) -> (usize, Option<usize>) {
        (self.len(), Some(self.len()))
    }
}

Supporting efficient inheritance

Finally, specialization can be seen as a form of inheritance, since methods defined within a blanket impl can be overridden in a fine-grained way by a more specialized impl. As we will see, this analogy is a useful guide to the design of specialization. But it is more than that: the specialization design proposed here is specifically tailored to support “efficient inheritance” schemes (like those discussed here) without adding an entirely separate inheritance mechanism.

The key insight supporting this design is that virtual method definitions in languages like C++ and Java actually encompass two distinct mechanisms: virtual dispatch (also known as “late binding”) and implementation inheritance. These two mechanisms can be separated and addressed independently; this RFC encompasses an “implementation inheritance” mechanism distinct from virtual dispatch, and useful in a number of other circumstances. But it can be combined nicely with an orthogonal mechanism for virtual dispatch to give a complete story for the “efficient inheritance” goal that many previous RFCs targeted.

The author is preparing a companion RFC showing how this can be done with a relatively small further extension to the language. But it should be said that the design in this RFC is fully motivated independently of its companion RFC.

Detailed design

There’s a fair amount of material to cover, so we’ll start with a basic overview of the design in intuitive terms, and then look more formally at a specification.

At the simplest level, specialization is about allowing overlap between impl blocks, so long as there is always an unambiguous “winner” for any type falling into the overlap. For example:

impl<T> Debug for T where T: Display {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        <Self as Display>::fmt(self, f)
    }
}

impl Debug for String {
    fn fmt(&self, f: &mut Formatter) -> Result {
        try!(write!(f, "\""));
        for c in self.chars().flat_map(|c| c.escape_default()) {
            try!(write!(f, "{}", c));
        }
        write!(f, "\"")
    }
}

The idea for this pair of impls is that you can rest assured that any type implementing Display will also implement Debug via a reasonable default, but go on to provide more specific Debug implementations when warranted. In particular, the intuition is that a Self type of String is somehow “more specific” or “more concrete” than T where T: Display.

The bulk of the detailed design is aimed at making this intuition more precise. But first, we need to explore some problems that arise when you introduce specialization in any form.

Hazard: interactions with type checking

Consider the following, somewhat odd example of overlapping impls:

trait Example {
    type Output;
    fn generate(self) -> Self::Output;
}

impl<T> Example for T {
    type Output = Box<T>;
    fn generate(self) -> Box<T> { Box::new(self) }
}

impl Example for bool {
    type Output = bool;
    fn generate(self) -> bool { self }
}

The key point to pay attention to here is the difference in associated types: the blanket impl uses Box<T>, while the impl for bool just uses bool. If we write some code that uses the above impls, we can get into trouble:

fn trouble<T>(t: T) -> Box<T> {
    Example::generate(t)
}

fn weaponize() -> bool {
    let b: Box<bool> = trouble(true);
    *b
}

What’s going on? When type checking trouble, the compiler has a type T about which it knows nothing, and sees an attempt to employ the Example trait via Example::generate(t). Because of the blanket impl, this use of Example is allowed – but furthermore, the associated type found in the blanket impl is now directly usable, so that <T as Example>::Output is known within trouble to be Box<T>, allowing trouble to type check. But during monomorphization, weaponize will actually produce a version of the code that returns a boolean instead, and then attempt to dereference that boolean. In other words, things look different to the typechecker than they do to codegen. Oops.

So what went wrong? It should be fine for the compiler to assume that T: Example for all T, given the blanket impl. But it’s clearly problematic to also assume that the associated types will be the ones given by that blanket impl. Thus, the “obvious” solution is just to generate a type error in trouble by preventing it from assuming <T as Example>::Output is Box<T>.

Unfortunately, this solution doesn’t work. For one thing, it would be a breaking change, since the following code does compile today:

trait Example {
    type Output;
    fn generate(self) -> Self::Output;
}

impl<T> Example for T {
    type Output = Box<T>;
    fn generate(self) -> Box<T> { Box::new(self) }
}

fn trouble<T>(t: T) -> Box<T> {
    Example::generate(t)
}

And there are definitely cases where this pattern is important. To pick just one example, consider the following impl for the slice iterator:

impl<'a, T> Iterator for Iter<'a, T> {
    type Item = &'a T;
    // ...
}

It’s essential that downstream code be able to assume that <Iter<'a, T> as Iterator>::Item is just &'a T, no matter what 'a and T happen to be.

Furthermore, it doesn’t work to say that the compiler can make this kind of assumption unless specialization is being used, since we want to allow downstream crates to add specialized impls. We need to know up front.

Another possibility would be to simply disallow specialization of associated types. But the trouble described above isn’t limited to associated types. Every function/method in a trait has an implicit associated type that implements the closure types, and similar bad assumptions about blanket impls can crop up there. It’s not entirely clear whether they can be weaponized, however. (That said, it may be reasonable to stabilize only specialization of functions/methods to begin with, and wait for strong use cases of associated type specialization to emerge before stabilizing that.)

The solution proposed in this RFC is instead to treat specialization of items in a trait as a per-item opt in, described in the next section.

The default keyword

Many statically-typed languages that allow refinement of behavior in some hierarchy also come with ways to signal whether or not this is allowed:

  • C++ requires the virtual keyword to permit a method to be overridden in subclasses. Modern C++ also supports final and override qualifiers.

  • C# requires the virtual keyword at definition and override at point of overriding an existing method.

  • Java makes things silently virtual, but supports final as an opt out.

Why have these qualifiers? Overriding implementations is, in a way, “action at a distance”. It means that the code that’s actually being run isn’t obvious when e.g. a class is defined; it can change in subclasses defined elsewhere. Requiring qualifiers is a way of signaling that this non-local change is happening, so that you know you need to look more globally to understand the actual behavior of the class.

While impl specialization does not directly involve virtual dispatch, it’s closely-related to inheritance, and it allows some amount of “action at a distance” (modulo, as we’ll see, coherence rules). We can thus borrow directly from these previous designs.

This RFC proposes a “final-by-default” semantics akin to C++ that is backwards-compatible with today’s Rust, which means that the following overlapping impls are prohibited:

impl<T> Example for T {
    type Output = Box<T>;
    fn generate(self) -> Box<T> { Box::new(self) }
}

impl Example for bool {
    type Output = bool;
    fn generate(self) -> bool { self }
}

The error in these impls is that the first impl is implicitly defining “final” versions of its items, which are thus not allowed to be refined in further specializations.

If you want to allow specialization of an item, you do so via the default qualifier within the impl block:

impl<T> Example for T {
    default type Output = Box<T>;
    default fn generate(self) -> Box<T> { Box::new(self) }
}

impl Example for bool {
    type Output = bool;
    fn generate(self) -> bool { self }
}

Thus, when you’re trying to understand what code is going to be executed, if you see an impl that applies to a type and the relevant item is not marked default, you know that the definition you’re looking at is the one that will apply. If, on the other hand, the item is marked default, you need to scan for other impls that could apply to your type. The coherence rules, described below, help limit the scope of this search in practice.

This design optimizes for fine-grained control over when specialization is permitted. It’s worth pausing for a moment and considering some alternatives and questions about the design:

  • Why mark default on impls rather than the trait? There are a few reasons to have default apply at the impl level. First of all, traits are fundamentally interfaces, while default is really about implementations. Second, as we’ll see, it’s useful to be able to “seal off” a certain avenue of specialization while leaving others open; doing it at the trait level is an all-or-nothing choice.

  • Why mark default on items rather than the entire impl? Again, this is largely about granularity; it’s useful to be able to pin down part of an impl while leaving others open for specialization. Furthermore, while this RFC doesn’t propose to do it, we could easily add a shorthand later on in which default impl Trait for Type is sugar for adding default to all items in the impl.

  • Won’t default be confused with default methods? Yes! But usefully so: as we’ll see, in this RFC’s design today’s default methods become sugar for tomorrow’s specialization.

Finally, how does default help with the hazards described above? Easy: an associated type from a blanket impl must be treated “opaquely” if it’s marked default. That is, if you write these impls:

impl<T> Example for T {
    default type Output = Box<T>;
    default fn generate(self) -> Box<T> { Box::new(self) }
}

impl Example for bool {
    type Output = bool;
    fn generate(self) -> bool { self }
}

then the function trouble will fail to typecheck:

fn trouble<T>(t: T) -> Box<T> {
    Example::generate(t)
}

The error is that <T as Example>::Output no longer normalizes to Box<T>, because the applicable blanket impl marks the type as default. The fact that default is an opt in makes this behavior backwards-compatible.
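
Downstream code can still typecheck by not assuming a normalized result type (a sketch):

fn fine<T>(t: T) -> <T as Example>::Output {
    // The return type is left un-normalized, so this compiles whether
    // the blanket impl or a specialized impl ends up applying to T.
    Example::generate(t)
}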

The main drawbacks of this solution are:

  • API evolution. Adding default to an associated type takes away some abilities, which makes it a breaking change to a public API. (In principle, this is probably true for functions/methods as well, but the breakage there is theoretical at most.) However, given the design constraints discussed so far, this seems like an inevitable aspect of any simple, backwards-compatible design.

  • Verbosity. It’s possible that certain uses of the trait system will result in typing default quite a bit. This RFC takes a conservative approach of introducing the keyword at a fine-grained level, but leaving the door open to adding shorthands (like writing default impl ...) in the future, if need be.

Overlapping impls and specialization

What is overlap?

Rust today does not allow any “overlap” between impls. Intuitively, this means that you cannot write two trait impls that could apply to the same “input” types. (An input type is either Self or a type parameter of the trait). For overlap to occur, the input types must be able to “unify”, which means that there’s some way of instantiating any type parameters involved so that the input types are the same. Here are some examples:

trait Foo {}

// No overlap: String and Vec<u8> cannot unify.
impl Foo for String {}
impl Foo for Vec<u8> {}

// No overlap: Vec<u16> and Vec<u8> cannot unify because u16 and u8 cannot unify.
impl Foo for Vec<u16> {}
impl Foo for Vec<u8> {}

// Overlap: T can be instantiated to String.
impl<T> Foo for T {}
impl Foo for String {}

// Overlap: Vec<T> and Vec<u8> can unify because T can be instantiated to u8.
impl<T> Foo for Vec<T> {}
impl Foo for Vec<u8> {}

// No overlap: String and Vec<T> cannot unify, no matter what T is.
impl Foo for String {}
impl<T> Foo for Vec<T> {}

// Overlap: for any T that is Clone, both impls apply.
impl<T> Foo for Vec<T> where T: Clone {}
impl<T> Foo for Vec<T> {}

// No overlap: implicitly, T: Sized, and since the trait object Foo is not Sized, you cannot instantiate T with it.
impl<T> Foo for Box<T> {}
impl Foo for Box<Foo> {}

trait Trait1 {}
trait Trait2 {}

// Overlap: nothing prevents a T such that T: Trait1 + Trait2.
impl<T: Trait1> Foo for T {}
impl<T: Trait2> Foo for T {}

trait Trait3 {}
trait Trait4: Trait3 {}

// Overlap: any T: Trait4 is covered by both impls.
impl<T: Trait3> Foo for T {}
impl<T: Trait4> Foo for T {}

trait Bar<T> {}

// No overlap: *all* input types must unify for overlap to happen.
impl Bar<u8> for u8 {}
impl Bar<u16> for u8 {}

// No overlap: *all* input types must unify for overlap to happen.
impl<T> Bar<u8> for T {}
impl<T> Bar<u16> for T {}

// No overlap: no way to instantiate T such that T == u8 and T == u16.
impl<T> Bar<T> for T {}
impl Bar<u16> for u8 {}

// Overlap: instantiate U as T.
impl<T> Bar<T> for T {}
impl<T, U> Bar<T> for U {}

// No overlap: no way to instantiate T such that T == &'a T.
impl<T> Bar<T> for T {}
impl<'a, T> Bar<&'a T> for T {}

// Overlap: instantiate T = &'a U.
impl<T> Bar<T> for T {}
impl<'a, T, U> Bar<T> for &'a U where U: Bar<T> {}

Permitting overlap

The goal of specialization is to allow overlapping impls, but it’s not as simple as permitting all overlap. There has to be a way to decide which of two overlapping impls to actually use for a given set of input types. The simpler and more intuitive the rule for deciding, the easier it is to write and reason about code – and since dispatch is already quite complicated, simplicity here is a high priority. On the other hand, the design should support as many of the motivating use cases as possible.

The basic intuition we’ve been using for specialization is the idea that one impl is “more specific” than another it overlaps with. Before turning this intuition into a rule, let’s go through the previous examples of overlap and decide which, if any, of the impls is intuitively more specific. Note that since we’re leaving out the body of the impls, you won’t see the default keyword that would be required in practice for the less specialized impls.

trait Foo {}

// Overlap: T can be instantiated to String.
impl<T> Foo for T {}
impl Foo for String {}          // String is more specific than T

// Overlap: Vec<T> and Vec<u8> can unify because T can be instantiated to u8.
impl<T> Foo for Vec<T> {}
impl Foo for Vec<u8> {}         // Vec<u8> is more specific than Vec<T>

// Overlap: for any T that is Clone, both impls apply.
impl<T> Foo for Vec<T>          // "Vec<T> where T: Clone" is more specific than "Vec<T> for any T"
    where T: Clone {}
impl<T> Foo for Vec<T> {}

trait Trait1 {}
trait Trait2 {}

// Overlap: nothing prevents a T such that T: Trait1 + Trait2
impl<T: Trait1> Foo for T {}    // Neither is more specific;
impl<T: Trait2> Foo for T {}    // there's no relationship between the traits here

trait Trait3 {}
trait Trait4: Trait3 {}

// Overlap: any T: Trait4 is covered by both impls.
impl<T: Trait3> Foo for T {}
impl<T: Trait4> Foo for T {}    // T: Trait4 is more specific than T: Trait3

trait Bar<T> {}

// Overlap: instantiate U as T.
impl<T> Bar<T> for T {}         // More specific since both input types are identical
impl<T, U> Bar<T> for U {}

// Overlap: instantiate T = &'a U.
impl<T> Bar<T> for T {}         // Neither is more specific
impl<'a, T, U> Bar<T> for &'a U
    where U: Bar<T> {}

What are the patterns here?

  • Concrete types are more specific than type variables, e.g.:
    • String is more specific than T
    • Vec<u8> is more specific than Vec<T>
  • More constraints lead to more specific impls, e.g.:
    • T: Clone is more specific than T
    • Bar<T> for T is more specific than Bar<T> for U
  • Unrelated constraints don’t contribute, e.g.:
    • Neither T: Trait1 nor T: Trait2 is more specific than the other.

For many purposes, the above simple patterns are sufficient for working with specialization. But to provide a spec, we need a more general, formal way of deciding precedence; we’ll give one next.

Defining the precedence rules

An impl block I contains basically two pieces of information relevant to specialization:

  • A set of type variables, like T, U in impl<T, U> Bar<T> for U.
    • We’ll call this I.vars.
  • A set of where clauses, like T: Clone in impl<T: Clone> Foo for Vec<T>.
    • We’ll call this I.wc.

We’re going to define a specialization relation <= between impl blocks, so that I <= J means that impl block I is “at least as specific as” impl block J. (If you want to think of this in terms of “size”, you can imagine that the set of types I applies to is no bigger than those J applies to.)

We’ll say that I < J if I <= J and !(J <= I). In this case, I is more specialized than J.

To ensure specialization is coherent, we will ensure that for any two impls I and J that overlap, we have either I < J or J < I. That is, one must be truly more specific than the other. Specialization chooses the “smallest” impl in this order – and the new overlap rule ensures there is a unique smallest impl among those that apply to a given set of input types.

More broadly, while <= is not a total order on all impls of a given trait, it will be a total order on any set of impls that all mutually overlap, which is all we need to determine which impl to use.

One nice thing about this approach is that, if there is an overlap without there being an intersecting impl, the compiler can tell the programmer precisely which impl needs to be written to disambiguate the overlapping portion.

We’ll start with an abstract/high-level formulation, and then build up toward an algorithm for deciding specialization by introducing a number of building blocks.

Abstract formulation

Recall that the input types of a trait are the Self type and all trait type parameters. So the following impl has input types bool, u8 and String:

trait Baz<X, Y> { .. }
// impl I
impl Baz<bool, u8> for String { .. }

If you think of these input types as a tuple, (bool, u8, String) you can think of each trait impl I as determining a set apply(I) of input type tuples that obeys I’s where clauses. The impl above is just the singleton set apply(I) = { (bool, u8, String) }. Here’s a more interesting case:

// impl J
impl<T, U> Baz<T, u8> for U where T: Clone { .. }

which gives the set apply(J) = { (T, u8, U) | T: Clone }.

Two impls I and J overlap if apply(I) and apply(J) intersect.

We can now define the specialization order abstractly: I <= J if apply(I) is a subset of apply(J).

This is true of the two sets above:

apply(I) = { (bool, u8, String) }
  is a strict subset of
apply(J) = { (T, u8, U) | T: Clone }

Here are a few more examples.

Via where clauses:

// impl I
// apply(I) = { T | T a type }
impl<T> Foo for T {}

// impl J
// apply(J) = { T | T: Clone }
impl<T> Foo for T where T: Clone {}

// J < I

Via type structure:

// impl I
// apply(I) = { (T, U) | T, U types }
impl<T, U> Bar<T> for U {}

// impl J
// apply(J) = { (T, T) | T a type }
impl<T> Bar<T> for T {}

// J < I

The same reasoning can be applied to all of the examples we saw earlier, and the reader is encouraged to do so. We’ll look at one of the more subtle cases here:

// impl I
// apply(I) = { (T, T) | T any type }
impl<T> Bar<T> for T {}

// impl J
// apply(J) = { (T, &'a U) | U: Bar<T>, 'a any lifetime }
impl<'a, T, U> Bar<T> for &'a U where U: Bar<T> {}

The claim is that apply(I) and apply(J) intersect, but neither contains the other. Thus, these two impls are not permitted to coexist according to this RFC’s design. (We’ll revisit this limitation toward the end of the RFC.)

Algorithmic formulation

The goal in the remainder of this section is to turn the above abstract definition of <= into something closer to an algorithm, connected to existing mechanisms in the Rust compiler. We’ll start by reformulating <= in a way that effectively “inlines” apply:

I <= J if:

  • For any way of instantiating I.vars, there is some way of instantiating J.vars such that the Self type and trait type parameters match up.

  • For this instantiation of I.vars, if you assume I.wc holds, you can prove J.wc.
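For example, here is how the two conditions play out on a concrete pair (Foo stands in for an arbitrary trait; the impls are illustrative):

// impl I
impl<T: Clone> Foo for Vec<T> { .. }

// impl J
impl<U> Foo for U where U: Clone { .. }

// For any instantiation of I.vars (any T), instantiating J.vars as
// U = Vec<T> makes the Self types match up. Under that instantiation,
// J.wc becomes Vec<T>: Clone, which is provable from I.wc (T: Clone).
// Hence I <= J. The reverse fails -- no T makes Vec<T> unify with an
// arbitrary U such as bool -- so in fact I < J.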

It turns out that the compiler is already quite capable of answering these questions, via “unification” and “skolemization”, which we’ll see next.

Unification: solving equations on types

Unification is the workhorse of type inference and many other mechanisms in the Rust compiler. You can think of it as a way of solving equations on types that contain variables. For example, consider the following situation:

fn use_vec<T>(v: Vec<T>) { .. }

fn caller() {
    let v = vec![0u8, 1u8];
    use_vec(v);
}

The compiler ultimately needs to infer what type to use for the T in use_vec within the call in caller, given that the actual argument has type Vec<u8>. You can frame this as a unification problem: solve the equation Vec<T> = Vec<u8>. Easy enough: T = u8!

Some equations can’t be solved. For example, if we wrote instead:

fn caller() {
    let s = "hello";
    use_vec(s);
}

we would end up equating Vec<T> = &str. There’s no choice of T that makes that equation work out. Type error!

Unification often involves solving a series of equations between types simultaneously, but it’s not like high school algebra; the equations involved all have the limited form of type1 = type2.

One immediate way in which unification is relevant to this RFC is in determining when two impls “overlap”: roughly speaking, they overlap if each pair of input types can be unified simultaneously. For example:

// No overlap: String and bool do not unify
impl Foo for String { .. }
impl Foo for bool { .. }

// Overlap: String and T unify
impl Foo for String { .. }
impl<T> Foo for T { .. }

// Overlap: T = U, T = V is trivially solvable
impl<T> Bar<T> for T { .. }
impl<U, V> Bar<U> for V { .. }

// No overlap: T = u8, T = bool not solvable
impl<T> Bar<T> for T { .. }
impl Bar<u8> for bool { .. }

Note the difference in how concrete types and type variables work for unification. When T, U and V are variables, it’s fine to say that T = U, T = V is solvable: we can make the impls overlap by instantiating all three variables with the same type. But asking for e.g. String = bool fails, because these are concrete types, not variables. (The same happens in algebra; consider that 2 = 3 cannot be solved, but x = y and y = z can be.) This distinction may seem obvious, but we’ll next see how to leverage it in a somewhat subtle way.

Skolemization: asking forall/there exists questions

We’ve already rephrased <= to start with a “for all, there exists” problem:

  • For any way of instantiating I.vars, there is some way of instantiating J.vars such that the Self type and trait type parameters match up.

For example:

// impl I
impl<T> Bar<T> for T {}

// impl J
impl<U,V> Bar<U> for V {}

For any choice of T, it’s possible to choose a U and V such that the two impls match – just choose U = T and V = T. But the opposite isn’t possible: if U and V are different (say, String and bool), then no choice of T will make the two impls match up.

This feels similar to a unification problem, and it turns out we can solve it with unification using a scary-sounding trick known as “skolemization”.

Basically, to “skolemize” a type variable is to treat it as if it were a concrete type. So if U and V are skolemized, then U = V is unsolvable, in the same way that String = bool is unsolvable. That’s perfect for capturing the “for any instantiation of I.vars” part of what we want to formalize.

With this tool in hand, we can further rephrase the “for all, there exists” part of <= in the following way:

  • After skolemizing I.vars, it’s possible to unify I and J.

Note that a successful unification through skolemization gives you the same answer as you’d get if you unified without skolemizing.
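As a concrete run of this check on the Bar impls above:

// Checking I <= J for:
//   impl I: impl<T> Bar<T> for T
//   impl J: impl<U, V> Bar<U> for V
//
// Skolemize I.vars: treat T as a concrete type ST. Unifying the headers
// requires U = ST and V = ST, which is solvable, because U and V are
// still variables. So I <= J holds.
//
// Checking J <= I: skolemize U and V into concrete types SU and SV.
// Unification now requires T = SU and T = SV, hence SU = SV -- which is
// unsolvable, just as String = bool is. So J <= I fails, and I < J.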

The algorithmic version

One outcome of running unification on two impls as above is that we can understand both impl headers in terms of a single set of type variables. For example:

// Before unification:
impl<T> Bar<T> for T where T: Clone { .. }
impl<U, V> Bar<U> for Vec<V> where V: Debug { .. }

// After unification:
// T = Vec<W>
// U = Vec<W>
// V = W
impl<W> Bar<Vec<W>> for Vec<W> where Vec<W>: Clone { .. }
impl<W> Bar<Vec<W>> for Vec<W> where W: Debug { .. }

By putting everything in terms of a single set of type params, it becomes possible to do things like compare the where clauses, which is the last piece we need for a final rephrasing of <= that we can implement directly.

Putting it all together, we’ll say I <= J if:

  • After skolemizing I.vars, it’s possible to unify I and J.
  • Under the resulting unification, I.wc implies J.wc.

Let’s look at a couple more examples to see how this works:

trait Trait1 {}
trait Trait2 {}

// Overlap: nothing prevents a T such that T: Trait1 + Trait2
impl<T: Trait1> Foo for T {}    // Neither is more specific;
impl<T: Trait2> Foo for T {}    // there's no relationship between the traits here

In comparing these two impls in either direction, we make it past unification and must try to prove that one where clause implies another. But T: Trait1 does not imply T: Trait2, nor vice versa, so neither impl is more specific than the other. Since the impls do overlap, an ambiguity error is reported.

On the other hand:

trait Trait3 {}
trait Trait4: Trait3 {}

// Overlap: any T: Trait4 is covered by both impls.
impl<T: Trait3> Foo for T {}
impl<T: Trait4> Foo for T {}    // T: Trait4 is more specific than T: Trait3

Here, since T: Trait4 implies T: Trait3 but not vice versa, we get

impl<T: Trait4> Foo for T    <    impl<T: Trait3> Foo for T

Key properties

Remember that for each pair of impls I, J, the compiler will check that exactly one of the following holds:

  • I and J do not overlap (a unification check), or else
  • I < J, or else
  • J < I

Recall also that if there is an overlap without there being an intersecting impl, the compiler can tell the programmer precisely which impl needs to be written to disambiguate the overlapping portion.

Since I <= J ultimately boils down to a subset relationship, we get a lot of nice properties for free (e.g., transitivity: if I <= J <= K then I <= K). Together with the compiler check above, we know that at monomorphization time, after filtering to the impls that apply to some concrete input types, there will always be a unique, smallest impl in specialization order. (In particular, if multiple impls apply to concrete input types, those impls must overlap.)
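For instance, a chain of three mutually overlapping impls is totally ordered, and dispatch picks the smallest member that applies. Here is a sketch using the proposed default syntax (the Describe trait is hypothetical):

trait Describe { fn describe(&self) -> &'static str; }

impl<T> Describe for T {
    default fn describe(&self) -> &'static str { "some type" }
}

impl<T: Clone> Describe for T {
    default fn describe(&self) -> &'static str { "a cloneable type" }
}

impl Describe for String {
    fn describe(&self) -> &'static str { "a String" }
}

// All three impls apply to String (which is Clone), and they form a chain
// in specialization order; "foo".to_string().describe() dispatches to the
// String impl, the unique smallest one.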

There are various implementation strategies that avoid having to recalculate the ordering during monomorphization, but we won’t delve into those details in this RFC.

Implications for coherence

The coherence rules ensure that there is never an ambiguity about which impl to use when monomorphizing code. Today, the rules consist of the simple overlap check described earlier, and the “orphan” check which limits the crates in which impls are allowed to appear (“orphan” refers to an impl in a crate that defines neither the trait nor the types it applies to). The orphan check is needed, in particular, so that overlap cannot be created accidentally when linking crates together.

The design in this RFC heavily revises the overlap check, as described above, but does not propose any changes to the orphan check (which is described in a blog post). Basically, the change to the overlap check does not appear to change the cases in which orphan impls can cause trouble. And a moment’s thought reveals why: if two sibling crates are unaware of each other, there’s no way that they could each provide an impl overlapping with the other, yet be sure that one of those impls is more specific than the other in the overlapping region.

Interaction with lifetimes

A hard constraint in the design of the trait system is that dispatch cannot depend on lifetime information. In particular, we both cannot, and should not, allow specialization based on lifetimes:

  • We can’t, because when the compiler goes to actually generate code (“trans”), lifetime information has been erased – so we’d have no idea what specializations would soundly apply.

  • We shouldn’t, because lifetime inference is subtle and would often lead to counterintuitive results. For example, you could easily fail to get 'static even if it applies, because inference is choosing the smallest lifetime that matches the other constraints.

To be more concrete, here are some scenarios which should not be allowed:

// Not allowed: trans doesn't know if T: 'static:
trait Bad1 {}
impl<T> Bad1 for T {}
impl<T: 'static> Bad1 for T {}

// Not allowed: trans doesn't know if two refs have equal lifetimes:
trait Bad2<U> {}
impl<T, U> Bad2<U> for T {}
impl<'a, 'b, T, U> Bad2<&'b U> for &'a T {}

But simply naming a lifetime that must exist, without constraining it, is fine:

// Allowed: specializes based on being *any* reference, regardless of lifetime
trait Good {}
impl<T> Good for T {}
impl<'a, T> Good for &'a T {}

In addition, it’s okay for lifetime constraints to show up as long as they aren’t part of specialization:

// Allowed: *all* impls impose the 'static requirement; the dispatch is happening
// purely based on `Clone`
trait MustBeStatic {}
impl<T: 'static> MustBeStatic for T {}
impl<T: 'static + Clone> MustBeStatic for T {}

Going down the rabbit hole

Unfortunately, we cannot easily rule out the undesirable lifetime-dependent specializations, because they can be “hidden” behind innocent-looking trait bounds that can even cross crates:

////////////////////////////////////////////////////////////////////////////////
// Crate marker
////////////////////////////////////////////////////////////////////////////////

pub trait Marker {}
impl Marker for u32 {}

////////////////////////////////////////////////////////////////////////////////
// Crate foo
////////////////////////////////////////////////////////////////////////////////

extern crate marker;

pub trait Foo {
    fn foo(&self);
}

impl<T> Foo for T {
    default fn foo(&self) {
        println!("Default impl");
    }
}

impl<T: marker::Marker> Foo for T {
    fn foo(&self) {
        println!("Marker impl");
    }
}

////////////////////////////////////////////////////////////////////////////////
// Crate bar
////////////////////////////////////////////////////////////////////////////////

extern crate marker;

pub struct Bar<T>(pub T);
impl<T: 'static> marker::Marker for Bar<T> {}

////////////////////////////////////////////////////////////////////////////////
// Crate client
////////////////////////////////////////////////////////////////////////////////

extern crate foo;
extern crate bar;

use foo::Foo;

fn main() {
    // prints: Marker impl
    0u32.foo();

    // prints: ???
    // the relevant specialization depends on the 'static lifetime
    bar::Bar("Activate the marker!").foo();
}

The problem here is that all of the crates in isolation look perfectly innocent. The code in marker, bar and client is accepted today. It’s only when these crates are plugged together that a problem arises – you end up with a specialization based on a 'static lifetime. And the client crate may not even be aware of the existence of the marker crate.

If we make this kind of situation a hard error, we could easily end up with a scenario in which plugging together otherwise-unrelated crates is impossible.

Proposal: ask forgiveness, rather than permission

So what do we do? There seem to be essentially two avenues:

  1. Be maximally permissive in the impls you can write, and then just ignore lifetime information in dispatch. We can generate a warning when this is happening, though in cases like the above, it may be talking about traits that the client is not even aware of. The assumption here is that these “missed specializations” will be extremely rare, so better not to impose a burden on everyone to rule them out.

  2. Try, somehow, to prevent you from writing impls that appear to dispatch based on lifetimes. The most likely way of doing that is to somehow flag a trait as “lifetime-dependent”. If a trait is lifetime-dependent, it can have lifetime-sensitive impls (like ones that apply only to 'static data), but it cannot be used when writing specialized impls of another trait.

The downside of (2) is that it’s an additional knob that all trait authors have to think about. That approach is sketched in more detail in the Alternatives section.

What this RFC proposes is to follow approach (1), at least during the initial experimentation phase. That’s the easiest way to gain experience with specialization and see to what extent lifetime-dependent specializations accidentally arise in practice. If they are indeed rare, it seems much better to catch them via a lint than to force the entire world of traits to be explicitly split in half.

To begin with, this lint should be an error by default; we want to get feedback as to how often this is happening before any stabilization.

What this means for the programmer

Ultimately, the goal of the “just ignore lifetimes for specialization” approach is to reduce the number of knobs in play. The programmer gets to use both lifetime bounds and specialization freely.

The problem, of course, is that when using the two together you can get surprising dispatch results:

trait Foo {
    fn foo(&self);
}

impl<T> Foo for T {
    default fn foo(&self) {
        println!("Default impl");
    }
}

impl Foo for &'static str {
    fn foo(&self) {
        println!("Static string slice: {}", self);
    }
}

fn main() {
    // prints "Default impl", but generates a lint saying that
    // a specialization was missed due to lifetime dependence.
    "Hello, world!".foo();
}

Specialization is refusing to consider the second impl because it imposes lifetime constraints not present in the more general impl. We don’t know whether these constraints hold when we need to generate the code, and we don’t want to depend on them because of the subtleties of region inference. But we alert the programmer that this is happening via a lint.

Sidenote: for such simple intracrate cases, we could consider treating the impls themselves more aggressively, catching that the &'static str impl will never be used and refusing to compile it.

In the more complicated multi-crate example we saw above, the line

bar::Bar("Activate the marker!").foo();

would likewise print Default impl and generate a warning. In this case, the warning may be hard for the client crate author to understand, since the trait relevant for specialization – marker::Marker – belongs to a crate that hasn’t even been imported in client. Nevertheless, this approach seems friendlier than the alternative (discussed in Alternatives).

An algorithm for ignoring lifetimes in dispatch

Although approach (1) may seem simple, there are some subtleties in handling cases like the following:

trait Foo { ... }
impl<T: 'static> Foo for T { ... }
impl<T: 'static + Clone> Foo for T { ... }

In this “ignore lifetimes for specialization” approach, we still want the above specialization to work, because all impls in the specialization family impose the same lifetime constraints. The dispatch here purely comes down to T: Clone or not. That’s in contrast to something like this:

trait Foo { ... }
impl<T> Foo for T { ... }
impl<T: 'static + Clone> Foo for T { ... }

where the difference between the impls includes a nontrivial lifetime constraint (the 'static bound on T). The second impl should effectively be dead code: we should never dispatch to it in favor of the first impl, because that depends on lifetime information that we don’t have available in trans (and don’t want to rely on in general, due to the way region inference works). We would instead lint against it (probably error by default).

So, how do we tell these two scenarios apart?

  • First, we evaluate the impls normally, winnowing to a list of applicable impls.

  • Then, we attempt to determine specialization. For any pair of applicable impls Parent and Child (where Child specializes Parent), we do the following:

    • Introduce as assumptions all of the where clauses of Parent

    • Attempt to prove that Child definitely applies, using these assumptions. Crucially, we do this test in a special mode: lifetime bounds are only considered to hold if they (1) follow from general well-formedness or (2) are directly assumed from Parent. That is, a constraint in Child that T: 'static has to follow either from some basic type assumption (like the type &'static T) or from a similar clause in Parent.

    • If the Child impl cannot be shown to hold under these more stringent conditions, then we have discovered a lifetime-sensitive specialization, and can trigger the lint.

    • Otherwise, the specialization is valid.

Let’s do this for the two examples above.

Example 1

trait Foo { ... }
impl<T: 'static> Foo for T { ... }
impl<T: 'static + Clone> Foo for T { ... }

Here, if we think both impls apply, we’ll start by assuming that T: 'static holds, and then we’ll evaluate whether T: 'static and T: Clone hold. The first evaluation succeeds trivially from our assumption. The second depends on T, as you’d expect.

Example 2

trait Foo { ... }
impl<T> Foo for T { ... }
impl<T: 'static + Clone> Foo for T { ... }

Here, if we think both impls apply, we start with no assumption, and then evaluate T: 'static and T: Clone. We’ll fail to show the former, because it’s a lifetime-dependent predicate, and we don’t have any assumption that immediately yields it.

This should scale to less obvious cases, e.g. using T: Any rather than T: 'static – because when trying to prove T: Any, we’ll find we need to prove T: 'static, and then we’ll end up using the same logic as above. It also works for cases like the following:

trait SometimesDep {}

impl SometimesDep for i32 {}
impl<T: 'static> SometimesDep for T {}

trait Spec {}
impl<T> Spec for T {}
impl<T: SometimesDep> Spec for T {}

Using Spec on i32 will not trigger the lint, because the specialization is justified without any lifetime constraints.

Default impls

An interesting consequence of specialization is that impls need not (and in fact sometimes cannot) provide all of the items that a trait specifies. Of course, this is already the case with defaulted items in a trait – but as we’ll see, that mechanism can be seen as just a way of using specialization.

Let’s start with a simple example:

trait MyTrait {
    fn foo(&self);
    fn bar(&self);
}

impl<T: Clone> MyTrait for T {
    default fn foo(&self) { ... }
    default fn bar(&self) { ... }
}

impl MyTrait for String {
    fn bar(&self) { ... }
}

Here, we’re acknowledging that the blanket impl has already provided definitions for both methods, so the impl for String can opt to just re-use the earlier definition of foo. This is one reason for the choice of the keyword default. Viewed this way, items defined in a specialized impl are optional overrides of those in overlapping blanket impls.

And, in fact, if we’d written the blanket impl differently, we could force the String impl to leave off foo:

impl<T: Clone> MyTrait for T {
    // now `foo` is "final"
    fn foo(&self) { ... }

    default fn bar(&self) { ... }
}

Being able to leave off items that are covered by blanket impls means that specialization is close to providing a finer-grained version of defaulted items in traits – one in which the defaults can become ever more refined as more is known about the input types to the traits (as described in the Motivation section). But to fully realize this goal, we need one other ingredient: the ability for the blanket impl itself to leave off some items. We do this by using the default keyword at the impl level:

trait Add<Rhs=Self> {
    type Output;
    fn add(self, rhs: Rhs) -> Self::Output;
    fn add_assign(&mut self, rhs: Rhs);
}

default impl<T: Clone, Rhs> Add<Rhs> for T {
    fn add_assign(&mut self, rhs: Rhs) {
        let tmp = self.clone() + rhs;
        *self = tmp;
    }
}

A subsequent overlapping impl of Add where Self: Clone can choose to leave off add_assign, “inheriting” it from the default impl above.

A key point here is that, as the keyword suggests, a default impl may be incomplete: from the above code, you cannot assume that T: Add<T> for any T: Clone, because no such complete impl has been provided.
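For example, a complete impl can then provide only the remaining items and inherit add_assign from the default impl above (a sketch; Point is a hypothetical type, and Add refers to the trait as declared above):

#[derive(Clone, Copy)]
struct Point { x: i32, y: i32 }

impl Add for Point {
    type Output = Point;
    fn add(self, rhs: Point) -> Point {
        Point { x: self.x + rhs.x, y: self.y + rhs.y }
    }
    // add_assign is left off; it is inherited from the default impl
    // above, which applies because Point: Clone.
}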

Defaulted items in traits are just sugar for a default blanket impl:

trait Iterator {
    type Item;
    fn next(&mut self) -> Option<Self::Item>;

    fn size_hint(&self) -> (usize, Option<usize>) {
        (0, None)
    }
    // ...
}

// desugars to:

trait Iterator {
    type Item;
    fn next(&mut self) -> Option<Self::Item>;
    fn size_hint(&self) -> (usize, Option<usize>);
    // ...
}

default impl<T> Iterator for T {
    fn size_hint(&self) -> (usize, Option<usize>) {
        (0, None)
    }
    // ...
}

Default impls are somewhat akin to abstract base classes in object-oriented languages; they provide some, but not all, of the materials needed for a fully concrete implementation, and thus enable code reuse but cannot be used concretely.

Note that the semantics of default impls and defaulted items in traits is that both are implicitly marked default – that is, both are considered specializable. This choice gives a coherent mental model: when you choose not to employ a default, and instead provide your own definition, you are in effect overriding/specializing that code.

There are a few important details to nail down with the design. This RFC proposes starting with the conservative approach of applying the general overlap rule to default impls, same as with complete ones. That ensures that there is always a clear definition to use when providing subsequent complete impls. It would be possible, though, to relax this constraint and allow arbitrary overlap between default impls, requiring instead that whenever a complete impl overlaps with them, for each item there is either a unique “most specific” default impl that applies, or else the complete impl provides its own definition of that item. Such a relaxed approach is much more flexible, probably easier to work with, and can enable more code reuse – but it’s also more complicated, and it would be backwards-compatible to add on top of the proposed conservative approach.
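To illustrate what the relaxed rule might permit (a sketch, not part of this proposal; the Summary trait and impls are hypothetical):

use std::fmt::{Debug, Display};

trait Summary {
    fn title(&self) -> String;
    fn body(&self) -> String;
}

// Two default impls with arbitrary (partial) overlap, each supplying
// a different item:
default impl<T: Display> Summary for T {
    fn title(&self) -> String { self.to_string() }
}

default impl<T: Debug> Summary for T {
    fn body(&self) -> String { format!("{:?}", self) }
}

// A complete impl overlapping both: for each item there is a unique
// “most specific” default impl that applies, so both items may be
// left off.
impl Summary for u32 {}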

Limitations

One frequent motivation for specialization is broader “expressiveness”, in particular providing a larger set of trait implementations than is possible today.

For example, the standard library currently includes an AsRef trait for “as-style” conversions:

pub trait AsRef<T> where T: ?Sized {
    fn as_ref(&self) -> &T;
}

Currently, there is also a blanket implementation as follows:

impl<'a, T: ?Sized, U: ?Sized> AsRef<U> for &'a T where T: AsRef<U> {
    fn as_ref(&self) -> &U {
        <T as AsRef<U>>::as_ref(*self)
    }
}

which allows these conversions to “lift” over references, which is in turn important for making a number of standard library APIs ergonomic.

On the other hand, we’d also like to provide the following very simple blanket implementation:

impl<'a, T: ?Sized> AsRef<T> for T {
    fn as_ref(&self) -> &T {
        self
    }
}

The current coherence rules prevent having both impls, however, because they can in principle overlap:

AsRef<&'a T> for &'a T where T: AsRef<&'a T>

Another example comes from the Option type, which currently provides two methods for unwrapping while providing a default value for the None case:

impl<T> Option<T> {
    fn unwrap_or(self, def: T) -> T { ... }
    fn unwrap_or_else<F>(self, f: F) -> T where F: FnOnce() -> T { .. }
}

The unwrap_or method is more ergonomic but unwrap_or_else is more efficient in the case that the default is expensive to compute. The original collections reform RFC proposed a ByNeed trait that was rendered unworkable after unboxed closures landed:

trait ByNeed<T> {
    fn compute(self) -> T;
}

impl<T> ByNeed<T> for T {
    fn compute(self) -> T {
        self
    }
}

impl<F, T> ByNeed<T> for F where F: FnOnce() -> T {
    fn compute(self) -> T {
        self()
    }
}

impl<T> Option<T> {
    fn unwrap_or<U>(self, def: U) -> T where U: ByNeed<T> { ... }
    ...
}

The trait represents any value that can produce a T on demand. But the above impls fail to compile in today’s Rust, because they overlap: consider ByNeed<F> for F where F: FnOnce() -> F.

There are also some trait hierarchies where a subtrait completely subsumes the functionality of a supertrait. For example, consider PartialOrd and Ord:

trait PartialOrd<Rhs: ?Sized = Self>: PartialEq<Rhs> {
    fn partial_cmp(&self, other: &Rhs) -> Option<Ordering>;
}

trait Ord: Eq + PartialOrd<Self> {
    fn cmp(&self, other: &Self) -> Ordering;
}

In cases like this, it’s somewhat annoying to have to provide an impl for both Ord and PartialOrd, since the latter can be trivially derived from the former. So you might want an impl like this:

impl<T> PartialOrd<T> for T where T: Ord {
    fn partial_cmp(&self, other: &T) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

But this blanket impl would conflict with a number of others that work to “lift” PartialOrd and Ord impls over various type constructors like references and tuples, e.g.:

impl<'a, A: ?Sized> Ord for &'a A where A: Ord {
    fn cmp(&self, other: & &'a A) -> Ordering { Ord::cmp(*self, *other) }
}

impl<'a, 'b, A: ?Sized, B: ?Sized> PartialOrd<&'b B> for &'a A where A: PartialOrd<B> {
    fn partial_cmp(&self, other: &&'b B) -> Option<Ordering> {
        PartialOrd::partial_cmp(*self, *other)
    }
}

The case where they overlap boils down to:

PartialOrd<&'a T> for &'a T where &'a T: Ord
PartialOrd<&'a T> for &'a T where T: PartialOrd

and there is no implication between either of the where clauses.

There are many other examples along these lines.

Unfortunately, none of these examples are permitted by the revised overlap rule in this RFC, because in none of these cases is one of the impls fully a “subset” of the other; the overlap is always partial.

It’s a shame to not be able to address these cases, but the benefit is a specialization rule that is very intuitive and accepts only very clear-cut cases. The Alternatives section sketches some different rules that are less intuitive but do manage to handle cases like those above.

If we allowed “relaxed” default impls as described above, one could at least use that mechanism to avoid having to give a definition directly in most cases (so if you had T: Ord, you could write an empty impl of PartialOrd for T and inherit the blanket definition).
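Concretely, the blanket definition above could be provided as a default impl, after which an Ord type opts in with an empty body (a sketch that assumes the relaxed rule; MyType is a hypothetical type):

use std::cmp::Ordering;

default impl<T: Ord> PartialOrd for T {
    fn partial_cmp(&self, other: &T) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

// Any Ord type can now get PartialOrd for free:
impl PartialOrd for MyType {}   // inherits partial_cmp above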

Possible extensions

It’s worth briefly mentioning a couple of mechanisms that one could consider adding on top of specialization.

Inherent impls

It has long been folklore that inherent impls can be thought of as special, anonymous traits that are:

  • Automatically in scope;
  • Given higher dispatch priority than normal traits.

It is easiest to make this idea work out if you think of each inherent item as implicitly defining and implementing its own trait, so that you can account for examples like the following:

struct Foo<T> { .. }

impl<T> Foo<T> {
    fn foo(&self) { .. }
}

impl<T: Clone> Foo<T> {
    fn bar(&self) { .. }
}

In this example, the availability of each inherent item is dependent on a distinct where clause. A reasonable “desugaring” would be:

#[inherent] // an imaginary attribute turning on the "special" treatment of inherent impls
trait Foo_foo<T> {
    fn foo(&self);
}

#[inherent]
trait Foo_bar<T> {
    fn bar(&self);
}

impl<T> Foo_foo<T> for Foo<T> {
    fn foo(&self) { .. }
}

impl<T: Clone> Foo_bar<T> for Foo<T> {
    fn bar(&self) { .. }
}

With this idea in mind, it is natural to expect specialization to work for inherent impls, e.g.:

impl<T, I> Vec<T> where I: IntoIterator<Item = T> {
    default fn extend(iter: I) { .. }
}

impl<T> Vec<T> {
    fn extend(slice: &[T]) { .. }
}

We could permit such specialization at the inherent impl level. The semantics would be defined in terms of the folklore desugaring above.

(Note: this example was chosen purposefully: it’s possible to use specialization at the inherent impl level to avoid refactoring the Extend trait as described in the Motivation section.)

There are more details about this idea in the appendix.

Super

Continuing the analogy between specialization and inheritance, one could imagine a mechanism like super to access and reuse less specialized implementations when defining more specialized ones. While there’s not a strong need for this mechanism as part of this RFC, it’s worth checking that the specialization approach is at least compatible with super.

Fortunately, it is. If we take super to mean “the most specific impl overlapping with this one”, there is always a unique answer to that question, because all overlapping impls are totally ordered with respect to each other via specialization.
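A sketch of what such a mechanism might look like (the super call syntax, the Encode trait, and compress are all hypothetical; nothing here is proposed):

trait Encode {
    fn encode(&self) -> Vec<u8>;
}

impl<T> Encode for T {
    default fn encode(&self) -> Vec<u8> { .. }
}

impl<T: Clone> Encode for T {
    fn encode(&self) -> Vec<u8> {
        // `super` would name the most specific impl overlapping with
        // this one -- here, the blanket impl above.
        compress(super::encode(self))
    }
}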

Extending HRTBs

In the Motivation we mentioned the need to refactor the Extend trait to take advantage of specialization. It’s possible to work around that need by using specialization on inherent impls (and having the trait impl defer to the inherent one), but of course that’s a bit awkward.

For reference, here’s the refactoring:

// Current definition
pub trait Extend<A> {
    fn extend<T>(&mut self, iterable: T) where T: IntoIterator<Item=A>;
}

// Refactored definition
pub trait Extend<A, T: IntoIterator<Item=A>> {
    fn extend(&mut self, iterable: T);
}

One problem with this kind of refactoring is that you lose the ability to say that a type T is extendable by an arbitrary iterator, because every use of the Extend trait has to say precisely what iterator is supported. But the whole point of this exercise is to have a blanket impl of Extend for any iterator that is then specialized later.

This points to a longstanding limitation: the trait system makes it possible to ask for any number of specific impls to exist, but not to ask for a blanket impl to exist – except in the limited case of lifetimes, where higher-ranked trait bounds allow you to do this:

trait Trait { .. }
impl<'a> Trait for &'a MyType { .. }

fn use_all<T>(t: T) where for<'a> &'a T: Trait { .. }

We could extend this mechanism to cover type parameters as well, so that you could write:

fn needs_extend_all<T>(t: T) where for<I: IntoIterator<Item=u8>> T: Extend<u8, I> { .. }

Such a mechanism is out of scope for this RFC.

Refining bounds on associated types

The design with default makes specialization of associated types an all-or-nothing affair, but it would occasionally be useful to say that all further specializations will at least guarantee some additional trait bound on the associated type. This is particularly relevant for the “efficient inheritance” use case. Such a mechanism can likely be added, if needed, later on.

Drawbacks

Many of the more minor tradeoffs have been discussed in detail throughout. We’ll focus here on the big picture.

As with many new language features, the most obvious drawback of this proposal is the increased complexity of the language – especially given the existing complexity of the trait system. Partly for that reason, the RFC errs on the side of simplicity in the design wherever possible.

One aspect of the design that mitigates its complexity somewhat is the fact that it is entirely opt in: you have to write default in an impl in order for specialization of that item to be possible. That means that all the ways we have of reasoning about existing code still hold good. When you do opt in to specialization, the “obviousness” of the specialization rule should mean that it’s easy to tell at a glance which of two impls will be preferred.

On the other hand, the simplicity of this design has its own drawbacks:

  • You have to lift out trait parameters to enable specialization, as in the Extend example above. Of course, this lifting can be hidden behind an additional trait, so that the end-user interface remains idiomatic. The RFC mentions a few other extensions for dealing with this limitation – either by employing inherent item specialization, or by eventually generalizing HRTBs.

  • You can’t use specialization to handle some of the more “exotic” cases of overlap, as described in the Limitations section above. This is a deliberate trade, favoring simple rules over maximal expressiveness.

Finally, if we take it as a given that we want to support some form of “efficient inheritance” as at least a programming pattern in Rust, the ability to use specialization to do so, while also getting all of its benefits, is a net simplifier. The full story there, of course, depends on the forthcoming companion RFC.

Alternatives

Alternatives to specialization

The main alternative to specialization in general is an approach based on negative bounds, such as the one outlined in an earlier RFC. Negative bounds make it possible to handle many of the examples this proposal can’t (the ones in the Limitations section). But negative bounds are also fundamentally closed: they make it possible to perform a certain amount of specialization up front when defining a trait, but don’t easily support downstream crates further specializing the trait impls.

Alternative specialization designs

The “lattice” rule

The rule proposed in this RFC essentially says that overlapping impls must form chains, in which each one is strictly more specific than the last.

This approach can be generalized to lattices, in which partial overlap between impls is allowed, so long as there is an additional impl that covers precisely the area of overlap (the intersection). Such a generalization can support all of the examples mentioned in the Limitations section. Moving to the lattice rule is backwards compatible.

Unfortunately, the lattice rule (or really, any generalization beyond the proposed chain rule) runs into a nasty problem with our lifetime strategy. Consider the following:

trait Foo {}
impl<T, U> Foo for (T, U) where T: 'static {}
impl<T, U> Foo for (T, U) where U: 'static {}
impl<T, U> Foo for (T, U) where T: 'static, U: 'static {}

The problem is, if we allow this situation to go through typeck, by the time we actually generate code in trans, there is no possible impl to choose. That is, we do not have enough information to specialize, but we also don’t know which of the (overlapping) unspecialized impls actually applies. We can address this problem by making the “lifetime dependent specialization” lint issue a hard error for such intersection impls, but that means that certain compositions will simply not be allowed (and, as mentioned before, these compositions might involve traits, types, and impls that the programmer is not even aware of).

The limitations that the lattice rule addresses are fairly secondary to the main goals of specialization (as laid out in the Motivation), and so, since the lattice rule can be added later, the RFC sticks with the simple chain rule for now.

Explicit ordering

Another, perhaps more palatable alternative would be to take the specialization rule proposed in this RFC, but have some other way of specifying precedence when that rule can’t resolve it – perhaps by explicit priority numbering. That kind of mechanism is usually noncompositional, but due to the orphan rule, it’s at least a crate-local concern. Like the alternative rule above, it could be added backwards compatibly if needed, since it only enables new cases.

Singleton non-default wins

@pnkfelix suggested the following rule, which allows overlap so long as there is a unique non-default item.

For any given type-based lookup, either:

  1. There are no results (error)

  2. There is only one lookup result, in which case we’re done (regardless of whether it is tagged as default or not),

  3. There is a non-empty set of results with defaults, where exactly one result is non-default – and then that non-default result is the answer, or

  4. There is a non-empty set of results with defaults, where 0 or >1 results are non-default (and that is an error).

This rule is arguably simpler than the one proposed in this RFC, and can accommodate the examples we’ve presented throughout. It would also support some of the cases this RFC cannot, because the default/non-default distinction can be used to specify an ordering between impls when the subset ordering fails to do so. For that reason, it is not forward-compatible with the main proposal in this RFC.

The downsides are:

  • Because actual dispatch occurs at monomorphization, errors are generated quite late, and only at use sites, not impl sites. That moves traits much more in the direction of C++ templates.

  • It’s less scalable/compositional: this alternative design forces the “specialization hierarchy” to be flat, in particular ruling out multiple levels of increasingly-specialized blanket impls.

Alternative handling of lifetimes

This RFC proposes a laissez faire approach to lifetimes: we let you write whatever impls you like, then warn you if some of them are being ignored because the specialization is based purely on lifetimes.

The main alternative approach is to make a more “principled” distinction between two kinds of traits: those that can be used as constraints in specialization, and those whose impls can be lifetime dependent. Concretely:

#[lifetime_dependent]
trait Foo {}

// Only allowed to use 'static here because of the lifetime_dependent attribute
impl Foo for &'static str {}

trait Bar { fn bar(&self); }
impl<T> Bar for T {
    // Have to use `default` here to allow specialization
    default fn bar(&self) {}
}

// CANNOT write the following impl, because `Foo` is lifetime_dependent
// and Bar is not.
//
// NOTE: this is what I mean by *using* a trait in specialization;
// we are trying to say a specialization applies when T: Foo holds
impl<T: Foo> Bar for T {
    fn bar(&self) { ... }
}

// CANNOT write the following impl, because `Bar` is not lifetime_dependent
impl Bar for &'static str {
    fn bar(&self) { ... }
}

There are several downsides to this approach:

  • It forces trait authors to consider a rather subtle knob for every trait they write, choosing between two forms of expressiveness and dividing the world accordingly. The last thing the trait system needs is another knob.

  • Worse still, changing the knob in either direction is a breaking change:

    • If a trait gains a lifetime_dependent attribute, any impl of a different trait that used it to specialize would become illegal.

    • If a trait loses its lifetime_dependent attribute, any impl of that trait that was lifetime dependent would become illegal.

  • It hobbles specialization for some existing traits in std.

For the last point, consider From (which is tied to Into). In std, we have the following important “boxing” impl:

impl<'a, E: Error + 'a> From<E> for Box<Error + 'a>

This impl would necessitate From (and therefore, Into) being marked lifetime_dependent. But these traits are very likely to be used to describe specializations (e.g., an impl that applies when T: Into<MyType>).

There does not seem to be any way to consider such impls as lifetime-independent, either, because of examples like the following:

// If we consider this innocent...
trait Tie {}
impl<'a, T: 'a> Tie for (T, &'a u8) {}

// ... we get into trouble here
trait Foo {}
impl<'a, T> Foo for (T, &'a u8) {}
impl<'a, T> Foo for (T, &'a u8) where (T, &'a u8): Tie {}

All told, the proposed laissez-faire approach seems a much better bet in practice, but only experience with the feature can tell us for sure.

Unresolved questions

All questions from the RFC discussion and prototype have been resolved.

Appendix

More details on inherent impls

One tricky aspect for specializing inherent impls is that, since there is no explicit trait definition, there is no general signature that each definition of an inherent item must match. Thinking about Vec above, for example, notice that the two signatures for extend look superficially different, although it’s clear that the first impl is the more general of the two.

It’s workable to use a very simple-minded conceptual desugaring: each item desugars into a distinct trait, with type parameters for e.g. each argument and the return type. All concrete type information then emerges from desugaring into impl blocks. Thus, for example:

impl<T, I> Vec<T> where I: IntoIterator<Item = T> {
    default fn extend(iter: I) { .. }
}

impl<T> Vec<T> {
    fn extend(slice: &[T]) { .. }
}

// Desugars to:

trait Vec_extend<Arg, Result> {
    fn extend(Arg) -> Result;
}

impl<T, I> Vec_extend<I, ()> for Vec<T> where I: IntoIterator<Item = T> {
    default fn extend(iter: I) { .. }
}

impl<T> Vec_extend<&[T], ()> for Vec<T> {
    fn extend(slice: &[T]) { .. }
}

All items of a given name must desugar to the same trait, which means that the number of arguments must be consistent across all impl blocks for a given Self type. In addition, we’d require that all of the impl blocks overlap (meaning that there is a single, most general impl). Without these constraints, we would implicitly be permitting full-blown overloading on both arity and type signatures. For the time being at least, we want to restrict overloading to explicit uses of the trait system, as it is today.

This “desugaring” semantics has the benefits of allowing inherent item specialization, and also making it actually be the case that inherent impls are really just implicit traits – unifying the two forms of dispatch. Note that this is a breaking change, since examples like the following are (surprisingly!) allowed today:

struct Foo<A, B>(A, B);

impl<A> Foo<A,A> {
    fn foo(&self, _: u32) {}
}

impl<A,B> Foo<A,B> {
    fn foo(&self, _: bool) {}
}

fn use_foo<A, B>(f: Foo<A,B>) {
    f.foo(true)
}

As has been proposed elsewhere, this “breaking change” could be made available through a feature flag that must be used even after stabilization (to opt in to specialization of inherent impls); the full details will depend on pending revisions to RFC 1122.

Summary

Introduce a “mid-level IR” (MIR) into the compiler. The MIR desugars most of Rust’s surface representation, leaving a simpler form that is well-suited to type-checking and translation.

Motivation

The current compiler uses a single AST from the initial parse all the way to the final generation of bitcode. While this has some advantages, there are also a number of distinct downsides.

  1. The complexity of the compiler is increased because all passes must be written against the full Rust language, rather than being able to consider a reduced subset. The MIR proposed here is radically simpler than the surface Rust syntax – for example, it contains no “match” statements, and converts both ref bindings and & expressions into a single form.

    a. There are numerous examples of “desugaring” in Rust. In principle, desugaring one language feature into another should make the compiler simpler, but in our current implementation, it tends to make things more complex, because every phase must simulate the desugaring anew. The most prominent example is closure expressions (|| ...), which desugar to a fresh struct instance, but other examples abound: for loops, if let and while let, box expressions, overloaded operators (which desugar to method calls), and method calls (which desugar to UFCS notation).

    b. There are a number of features which are almost infeasible to implement today but which should be much easier given a MIR representation. Examples include box patterns and non-lexical lifetimes.

  2. Reasoning about fine-grained control-flow in an AST is rather difficult. The right tool for this job is a control-flow graph (CFG). We currently construct a CFG that lives “on top” of the AST, which allows the borrow checking code to be flow sensitive, but it is awkward to work with. Worse, because this CFG is not used by trans, it is not necessarily the case that the control-flow as seen by the analyses corresponds to the code that will be generated. The MIR is based on a CFG, resolving this situation.

  3. The reliability of safety analyses is reduced because the gap between what is being analyzed (the AST) and what is being executed (bitcode) is very wide. The MIR is very low-level and hence the translation to bitcode should be straightforward.

  4. The reliability of safety proofs, when we have some, would be reduced because the formal language we are modeling is so far from the full compiler AST. The MIR is simple enough that it should be possible to (eventually) make safety proofs based on the MIR itself.

  5. Rust-specific optimizations, and optimizing trans output, are very challenging. There are numerous cases where it would be nice to be able to do optimizations before translating to bitcode, or to take advantage of Rust-specific knowledge of which a backend may be unaware. Currently, we are forced to do these optimizations as part of lowering to bitcode, which can get quite complex. Having an intermediate form improves the situation because:

    a. In some cases, we can do the optimizations in the MIR itself before translation.

    b. In other cases, we can do analyses on the MIR to easily determine when the optimization would be safe.

    c. In all cases, whatever we can do on the MIR will be helpful for other targets beyond existing backends (see next bullet).

  6. Migrating away from LLVM is nearly impossible, since so much of the semantics of Rust itself are embedded in the trans step that converts to LLVM IR. Under the MIR design, those semantics are instead described in the translation from AST to MIR, and the LLVM step itself simply applies optimizations.

Given the numerous benefits of a MIR, you may wonder why we have not taken steps in this direction earlier. In fact, we have a number of structures in the compiler that simulate the effect of a MIR:

  1. Adjustments. Every expression can have various adjustments, like autoderefs and so forth. These are computed by the type-checker and then read by later analyses. This is a form of MIR, but not a particularly convenient one.
  2. The CFG. The CFG tries to model the flow of execution as a graph rather than a tree, to help analyses in dealing with complex control-flow formed by things like loops, break, continue, etc. This CFG is however inferior to the MIR in that it is only an approximation of control-flow and does not include all the information one would need to actually execute the program (for example, for an if expression, the CFG would indicate that two branches are possible, but would not contain enough information to decide which branch to take).
  3. ExprUseVisitor. The ExprUseVisitor is designed to work in conjunction with the CFG. It walks the AST and highlights actions of interest to later analyses, such as borrows or moves. For each such action, the analysis gets a callback indicating the point in the CFG where the action occurred along with what happened. Overloaded operators, method calls, and so forth are “desugared” into their more primitive operations. This is effectively a kind of MIR, but it is not complete enough to do translation, since it focuses purely on borrows, moves, and other things of interest to the safety checker.

Each of these things was added in order to try and cope with the complexity of working directly on the AST. The CFG, for example, consolidates knowledge about control-flow into one piece of code, producing a data structure that can be easily interpreted. Similarly, the ExprUseVisitor consolidates knowledge of how to walk and interpret the current compiler representation.

Goals

It is useful to think about what “knowledge” the MIR should encapsulate. Here is a listing of the kinds of things that should be explicit in the MIR and thus that downstream code won’t have to re-encode in the form of repeated logic:

  • Precise ordering of control-flow. The CFG makes this very explicit, and the individual statements and nodes in the MIR are very small and detailed and hence nothing “interesting” happens in the middle of an individual node with respect to control-flow.
  • What needs to be dropped and when. The set of data that needs to be dropped and when is a fairly complex thing to calculate: you have to know what’s in scope, including temporary values and so forth. In the MIR, all drops are explicit, including those that result from panics and unwinding.
  • How matches are desugared. Reasoning about matches has been a traditional source of complexity. Matches combine traversing types with borrows, moves, and all sorts of other things, depending on the precise patterns in use. This is all vastly simplified and explicit in MIR.

One thing the current MIR does not make as explicit as it could is when something is moved. For by-value uses of a value, the code must still consult the type of the value to decide if that is a move or not. This could be made more explicit in the IR.

Which analyses are well-suited to the MIR?

Some analyses are better suited to the AST than to a MIR. The following is a list of work the compiler does that would benefit from using a MIR:

  • liveness checking: this is used to issue warnings about unused assignments and the like. The MIR is perfect for this sort of data-flow analysis.
  • borrow and move checking: the borrow checker already uses a combination of the CFG and ExprUseVisitor to try and achieve a similarly low level of detail.
  • translation to IR: the MIR is much closer than the AST to the desired bitcode end-product.

Some other passes would probably work equally well on the MIR or an AST, but they will likely find the MIR somewhat easier to work with than the current AST simply because it is, well, simpler:

  • rvalue checking, which checks that values are Sized where they need to be.
  • reachability and death checking.

These items are likely ill-suited to the MIR as designed:

  • privacy checking, since it relies on explicit knowledge of paths that is not necessarily present in the MIR.
  • lint checking, since it is often dependent on the sort of surface details we are seeking to obscure.

For some passes, the impact is not entirely clear. In particular, match exhaustiveness checking could easily be subsumed by the MIR construction process, which must do a similar analysis during the lowering process. However, once the MIR is built, the match is completely desugared into more primitive switches and so forth, so we will need to leave some markers in order to know where to check for exhaustiveness and to reconstruct counter examples.

Detailed design

What is really being proposed here?

The rest of this section goes into detail on a particular MIR design. However, the true purpose of this RFC is not to nail down every detail of the MIR – which are expected to evolve and change over time anyway – but rather to establish some high-level principles which drive the rest of the design:

  1. We should indeed lower the representation from an AST to something else that will drive later analyses, and this representation should be based on a CFG, not a tree.
  2. This representation should be explicitly minimal and not attempt to retain the original syntactic structure, though it should be possible to recover enough of it to make quality error messages.
  3. This representation should encode drops, panics, and other scope-dependent items explicitly.
  4. This representation does not have to be well-typed Rust, though it should be possible to type-check it using a tweaked variant on the Rust type system.

Prototype

The MIR design being described can be found here. In particular, this module defines the MIR representation, and this build module contains the code to create a MIR representation from an AST-like form.

For increased flexibility, as well as to make the code simpler, the prototype is not coded directly against the compiler’s AST, but rather against an idealized representation defined by the HAIR trait. Note that this HAIR trait is entirely independent from the HIR discussed by nrc in RFC 1191 – you can think of it as an abstract trait that any high-level Rust IR could implement, including our current AST. Moreover, it’s just an implementation detail and not part of the MIR being proposed here per se. Still, if you want to read the code, you have to understand its design.

The HAIR trait contains a number of opaque associated types for the various aspects of the compiler. For example, the type H::Expr represents an expression. In order to find out what kind of expression it is, the mirror method is called, which converts an H::Expr into an Expr<H> mirror. This mirror then contains embedded ExprRef<H> nodes to refer to further subexpressions; these may either be mirrors themselves, or else they may be additional H::Expr nodes. This allows the tree that is exported to differ in small ways from the actual tree within the compiler; the primary intention is to use this to model “adjustments” like autoderef. The code to convert from our current AST to the HAIR is not yet complete, but it can be found here.

Note that the HAIR mirroring system is an experiment and not really part of the MIR itself. It does however present an interesting option for (eventually) stabilizing access to the compiler’s internals.

Overview of the MIR

The proposed MIR always describes the execution of a single fn. At the highest level it consists of a series of declarations regarding the stack storage that will be required and then a set of basic blocks:

MIR = fn({TYPE}) -> TYPE {
    {let [mut] B: TYPE;}  // user-declared bindings and their types
    {let TEMP: TYPE;}     // compiler-introduced temporary
    {BASIC_BLOCK}         // control-flow graph
};

The storage declarations are broken into two categories. User-declared bindings have a 1-to-1 relationship with the variables specified in the program. Temporaries are introduced by the compiler in various cases. For example, borrowing the result of a function call (e.g., &foo()) will introduce a temporary to store the result of foo(). Similarly, discarding a value, as in foo();, is translated to something like let tmp = foo(); drop(tmp);. Temporaries are single-assignment, but because they can be borrowed they may be mutated after this assignment, and hence they differ somewhat from variables in a pure SSA representation.
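Schematically, the borrow example above might lower as follows (names illustrative; the call is written inline here for brevity, although in the full MIR calls are terminators, as described below):

// Source fragment:
//     let x = &foo();
//
// Storage declarations:
//     let x: &T;     // user-declared binding
//     let tmp0: T;   // compiler-introduced temporary
//
// Statements:
//     tmp0 = foo();  // tmp0 is assigned exactly once...
//     x = &tmp0;     // ...but may still be borrowed afterwards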

The proposed MIR takes the form of a graph where each node is a basic block. “Basic block” is a standard compiler term for a contiguous sequence of instructions with a single entry point. All interesting control-flow happens between basic blocks. Each basic block has an id BB and consists of a sequence of statements and a terminator:

BASIC_BLOCK = BB: {STATEMENT} TERMINATOR

A STATEMENT can have one of two forms:

STATEMENT = LVALUE "=" RVALUE        // assign rvalue into lvalue
          | Drop(DROP_KIND, LVALUE)  // drop value if needed
DROP_KIND = SHALLOW                  // (see discussion below)
          | DEEP

The following sections dive into these various kinds of statements in more detail.

The TERMINATOR for a basic block describes how it connects to subsequent blocks:

TERMINATOR = GOTO(BB)              // normal control-flow
           | PANIC(BB)             // initiate unwinding, branching to BB for cleanup
           | IF(LVALUE, BB0, BB1)  // test LVALUE and branch to BB0 if true, else BB1
           | SWITCH(LVALUE, BB...) // load discriminant from LVALUE (which must be an enum),
                                   // and branch to BB... depending on which variant it is
           | CALL(LVALUE0 = LVALUE1(LVALUE2...), BB0, BB1)
                                   // call LVALUE1 with LVALUE2... as arguments. Write
                                   // result into LVALUE0. Branch to BB0 if it returns
                                   // normally, BB1 if it is unwinding.
           | DIVERGE               // return to caller, unwinding
           | RETURN                // return to caller normally

Most of the terminators should be fairly obvious. The most interesting part is the handling of unwinding. This aligns fairly closely with how LLVM works: there is one terminator, PANIC, that initiates unwinding. It immediately branches to a handler (BB) which will perform cleanup and (eventually) reach a block that has a DIVERGE terminator. DIVERGE causes unwinding to continue up the stack.

Because calls to other functions can always (or almost always) panic, calls are themselves a kind of terminator. If we can determine that some function we are calling cannot unwind, we can always modify the IR to make the second basic block optional. (We could also add an RVALUE to represent calls, but it’s probably easiest to keep the call as a terminator unless the memory savings of consolidating basic blocks are found to be worthwhile.)

It’s worth pointing out that basic blocks are just a kind of compile-time and memory-use optimization; there is no semantic difference between a single block and two blocks joined by a GOTO terminator.

Assignments, values, and rvalues

The primary kind of statement is an assignment:

LVALUE "=" RVALUE

The semantics of this operation are to first evaluate the RVALUE and then store it into the LVALUE (which must represent a memory location of suitable type).

An LVALUE represents a path to a memory location. This is the basic “unit” analyzed by the borrow checker. It is always possible to evaluate an LVALUE without triggering any side-effects (modulo dereferences of unsafe pointers, which naturally can trigger arbitrary behavior if the pointer is not valid).

LVALUE = B                   // reference to a user-declared binding
       | TEMP                // a temporary introduced by the compiler
       | ARG                 // a formal argument of the fn
       | STATIC              // a reference to a static or static mut
       | RETURN              // the return pointer of the fn
       | LVALUE.f            // project a field or tuple field, like x.f or x.0
       | *LVALUE             // dereference a pointer
       | LVALUE[LVALUE]      // index into an array (see disc. below about bounds checks)
       | (LVALUE as VARIANT) // downcast to a specific variant of an enum,
                             // see the section on desugaring matches below

An RVALUE represents a computation that yields a result. This result must be stored in memory somewhere to be accessible. The MIR does not contain any kind of nested expressions: everything is flattened out, going through lvalues as intermediaries.

RVALUE = Use(LVALUE)                // just read an lvalue
       | [LVALUE; LVALUE]
       | &'REGION LVALUE
       | &'REGION mut LVALUE
       | LVALUE as TYPE
       | LVALUE <BINOP> LVALUE
       | <UNOP> LVALUE
       | Struct { f: LVALUE0, ... } // aggregates, see section below
       | (LVALUE...LVALUE)
       | [LVALUE...LVALUE]
       | CONSTANT
       | LEN(LVALUE)                // load length from a slice, see section below
       | BOX                        // malloc for builtin box, see section below
BINOP = + | - | * | / | ...         // excluding && and ||
UNOP = ! | -                        // note: no `*`, as that is part of LVALUE

One thing worth pointing out is that the binary and unary operators are only the builtin form, operating on scalar values. Overloaded operators will be desugared to trait calls. Moreover, all method calls are desugared into normal calls via UFCS form.
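
For instance, an overloaded + never uses the builtin BINOP form; it becomes an ordinary call in UFCS form. A minimal sketch (Meters is an invented type):

use std::ops::Add;

#[derive(Copy, Clone)]
struct Meters(f64);

impl Add for Meters {
    type Output = Meters;
    fn add(self, other: Meters) -> Meters {
        Meters(self.0 + other.0) // the builtin f64 `+`, a MIR BINOP
    }
}

fn demo(a: Meters, b: Meters) -> Meters {
    a + b // desugars to the UFCS call <Meters as Add<Meters>>::add(a, b)
}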

Constants

Constants are a subset of rvalues that can be evaluated at compilation time:

CONSTANT = INT
         | UINT
         | FLOAT
         | BOOL
         | BYTES
         | STATIC_STRING
         | ITEM<SUBSTS>                 // reference to an item or constant etc
         | <P0 as TRAIT<P1...Pn>>       // projection
         | CONSTANT(CONSTANT...)        // 
         | CAST(CONSTANT, TY)           // foo as bar
         | Struct { (f: CONSTANT)... }  // aggregates...
         | (CONSTANT...)                //
         | [CONSTANT...]                //

Aggregates and further lowering

The set of rvalues includes “aggregate” expressions like (x, y) or Foo { f: x, g: y }. This is a place where the MIR (somewhat) departs from what will ultimately be generated, since (often) an expression like f = (x, y, z) will wind up desugared into a series of piecewise assignments like:

f.0 = x;
f.1 = y;
f.2 = z;

However, there are good reasons to include aggregates as first-class rvalues. For one thing, if we break down each aggregate into the specific assignments that would be used to construct the value, then zero-sized types are never assigned at all, since there is no data to actually move around at runtime. This means that the compiler could not distinguish uninitialized variables from initialized ones. That is, code like this:

let x: (); // note: never initialized
use(x);

and this:

let x: () = ();
use(x);

would desugar to the same MIR. That is a problem, particularly with respect to destructors: imagine that instead of the type (), we used a type like struct Foo; where Foo implements Drop.
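
To make the stakes concrete, here is a runnable illustration of why the two snippets must not lower to the same MIR (Foo is an invented stand-in):

struct Foo;

impl Drop for Foo {
    fn drop(&mut self) {
        println!("dropping Foo");
    }
}

fn main() {
    let _x: Foo;  // declared but never initialized: no destructor may run
    let _y = Foo; // initialized: destructor must run at end of scope
} // prints "dropping Foo" exactly once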

Another advantage is that building aggregates in a two-step way assures the proper execution order when unwinding occurs before the complete value is constructed. In particular, we want to drop the intermediate results in the order that they appear in the source, not in the order in which the fields are specified in the struct definition.

A final reason to include aggregates is that, at runtime, the representation of an aggregate may indeed fit within a single word, in which case making a temporary and writing the fields piecemeal may in fact not be the correct representation.

In any case, after the move and correctness checking is done, it is easy enough to remove these aggregate rvalues and replace them with assignments. This could potentially be done during lowering, or as a pre-pass that transforms MIR statements like:

x = ...x;
y = ...y;
z = ...z;
f = (x, y, z)

to:

x = ...x;
y = ...y;
z = ...z;
f.0 = x;
f.1 = y;
f.2 = z;

combined with another pass that removes temporaries that are only used within a single assignment (and nowhere else):

f.0 = ...x;
f.1 = ...y;
f.2 = ...z;

Going further, once type-checking is done, it is plausible to do further lowering within the MIR purely for optimization purposes. For example, we could introduce intermediate references to cache the results of common lvalue computations and so forth.

Bounds checking

Because bounds checks are fallible, it’s important to encode them in the MIR whenever we do indexing. Otherwise the trans code would have to figure out on its own how to do unwinding at that point. Because the MIR doesn’t “desugar” fat pointers, we include a special rvalue LEN that extracts the length from an array value whose type matches [T] or [T;n] (in the latter case, it yields a constant). Using this, we desugar an array reference like y = arr[x] as follows:

let len: usize;
let idx: usize;
let lt: bool;

B0: {
  len = len(arr);
  idx = x;
  lt = idx < len;
  if lt { B1 } else { B2 }
}

B1: {
  y = arr[idx]
  ...
}

B2: {
  <panic>
}

The key point here is that we create a temporary (idx) capturing the value that we bounds checked and we ensure that there is a comparison against the length.

Overflow checking

Similarly, since overflow checks can trigger a panic, they ought to be exposed in the MIR as well. This is handled by having distinct binary operators for “add with overflow” and so forth, analogous to the LLVM intrinsics. These operators yield a tuple of (result, overflow), so result = left + right might be translated like:

let tmp: (u32, bool);

B0: {
  tmp = left + right;
  if(tmp.1, B2, B1)
}

B1: {
  result = tmp.0
  ...
}

B2: {
  <panic>
}

Matches

One of the goals of the MIR is to desugar matches into something much more primitive, so that we are freed from reasoning about their complexity. This is primarily achieved through a combination of SWITCH terminators and downcasts. To get the idea, consider this simple match statement:

match foo() {
    Some(ref v) => ...0,
    None => ...1
}

This would be converted into MIR as follows (leaving out the unwinding support):

BB0 {
    call(tmp = foo(), BB1, ...);
}

BB1 {
    switch(tmp, BB2, BB3) // two branches, corresponding to the Some and None variants resp.
}

BB2 {
    v = &(tmp as Option::Some).0;
    ...0
}

BB3 {
    ...1
}

There are some interesting cases that arise from matches that are worth examining.

Vector patterns. Currently, (unstable) Rust supports vector patterns which permit borrows that would not otherwise be legal:

let mut vec = [1, 2];
match vec {
    [ref mut p, ref mut q] => { ... }
}

If this code were written using p = &mut vec[0], q = &mut vec[1], the borrow checker would complain. This is because it does not attempt to reason about indices being disjoint, even if they are constant (this is a limitation we may wish to consider lifting at some point in the future, however).

To accommodate these, we plan to desugar such matches into lvalues using the special “constant index” form. The borrow checker would be able to reason that two constant indices are disjoint but it could consider “variable indices” to be (potentially) overlapping with all constant indices. This is a fairly straightforward thing to do (and in fact the borrow checker already includes similar logic, since the ExprUseVisitor encounters a similar dilemma trying to resolve borrows).
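
Under that plan, the match above might desugar into borrows of lvalues using a hypothetical constant-index projection, along these lines:

p = &mut vec[const 0]; // constant index 0: known disjoint from...
q = &mut vec[const 1]; // ...constant index 1, so both borrows are accepted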

Drops

The Drop(DROP_KIND, LVALUE) instruction is intended to represent “automatic” compiler-inserted drops. The semantics of a Drop is that it drops “if needed”. This means that the compiler can insert it everywhere that a Drop would make sense (due to scoping), and assume that instrumentation will be done as needed to prevent double drops. Currently, this signaling is done by zeroing out memory at runtime, but we are in the process of introducing stack flags for this purpose: the MIR offers the opportunity to reify those flags if we wanted, and rewrite drops to be more narrow.

To illustrate how drop works, let’s work through a simple example. Imagine that we have a snippet of code like:

{
  let x = Box::new(22);
  send(x);
}

The compiler would generate a drop for x at the end of the block, but the value x would also be moved as part of the call to send. A later analysis could easily strip out this Drop since it is evident that the value is always used on all paths that lead to Drop.
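
In MIR terms, the block might initially look like the following sketch, after which the Drop can be stripped:

B0: {
  x = ...;                      // sketch: eliding the lowering of Box::new(22)
  call(tmp = send(x), B1, ...); // x is moved into the call to send
}

B1: {
  Drop(DEEP, x); // provably dead: x was moved on every path reaching here
  return;
}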

Shallow drops and Box

The MIR includes the distinction between “shallow” and “deep” drop. Deep drop is the normal thing, but shallow drop is used when partially initializing boxes. This is tied to the box keyword. For example, an assignment like the following:

let x = box Foo::new();

would be translated to something like the following:

let tmp: Box<Foo>;

B0: {
  tmp = BOX;
  f = Foo::new; // constant reference
  call(*tmp, f, B1, B2);
} 

B1: { // successful return of the call
  x = use(tmp); // move of tmp
  ...
}

B2: { // calling Foo::new() panic'd
  drop(Shallow, tmp);
  diverge;
}

The interesting part here is the block B2, which indicates the case that Foo::new() invoked unwinding. In that case, we have to free the box that we allocated, but we only want to free the box itself, not its contents (it is not yet initialized).

Note that having this kind of builtin box code is a legacy thing. The more generalized protocol that RFC 809 specifies works in more-or-less exactly the same way: when that is adopted uniformly, the need for shallow drop and the Box rvalue will go away.

Phasing

Ideally, the translation to MIR would be done during type checking, but before “region checking”. This is because we would like to implement non-lexical lifetimes eventually, and doing that well requires access to a control-flow graph. Given that we do very limited reasoning about regions at present, this should not be a problem.

Representing scopes

Lexical scopes in Rust play a large role in terms of when destructors run and how the reasoning about lifetimes works. However, they are completely erased by the graph format. For the most part, this is not an issue, since drops are encoded explicitly into the control-flow where needed. However, one place that we still need to reason about scopes (at least in the short term) is in region checking, because currently regions are encoded in terms of scopes, and we have to be able to map that to a region in the graph. The MIR therefore includes extra information mapping every scope to a SEME region (single-entry, multiple-exit). If/when we move to non-lexical lifetimes, regions would be defined in terms of the graph itself, and the need to retain scoping information should go away.

Monomorphization

Currently, we do monomorphization at translation time. If we ever chose to do it at a MIR level, that would be fine, but one thing to be careful of is that we may be able to elide Drop nodes based on the specific types.

Unchecked assertions

There are various bits of the MIR that are not trivially type-checked. In general, these are properties which are assured in Rust by construction in the high-level syntax, and thus we must be careful not to do any transformation that would endanger them after the fact.

  • Bounds-checking. We introduce explicit bounds checks into the IR that guard all indexing lvalues, but there is no explicit connection between this check and the later accesses.
  • Downcasts to a specific variant. We test variants with a SWITCH opcode but there is no explicit connection between this test and later downcasts.

This need for unchecked operations results from trying to lower and simplify the representation as much as possible, as well as trying to represent all panics explicitly. We believe the tradeoff to be worthwhile, particularly since:

  1. the existing analyses can continue to generally assume that these properties hold (e.g., that all indices are in bounds and all downcasts are safe); and,
  2. it would be trivial to implement a static dataflow analysis checking that bounds and downcasts only occur downstream of a relevant check.

Drawbacks

Converting from the AST to MIR will take some compilation time. Expectations are that constructing the MIR will be quite fast, and that follow-on code (such as trans and borrowck) will execute faster, because they will operate over a simpler and more compact representation. However, this needs to be measured.

More effort is required to make quality error messages. Because the representation the compiler is working with is now quite different from what the user typed, we have to put in extra effort to make sure that we bridge this gap when reporting errors. We have some precedent for dealing with this, however. For example, the ExprUseVisitor (and mem_categorization) includes extra annotations and hints to tell the borrow checker when a reference was introduced as part of a closure versus being explicit in the source code. The current prototype doesn’t have much in this direction, but it should be relatively straightforward to add. Hints like those, in addition to spans, should be enough to bridge the error message gap.

Alternatives

Use SSA. In the proposed MIR, temporaries are single-assignment but can be borrowed, making them more analogous to allocas than SSA values. This is helpful to analyses like the borrow checker, because it means that the program operates directly on paths through memory, versus having the stack modeled as allocas. The current model is also helpful for generating debuginfo.

SSA representation can be helpful for more sophisticated backend optimizations. However, it makes more sense to have the MIR be based on lvalues. There are some cases where it might make sense to do analyses on the MIR that would benefit from SSA, such as bounds check elision. In those cases, we could either quickly identify those temporaries that are not mutably borrowed (and which therefore act like SSA variables); or, further lower into a LIR, (which would be an SSA form); or else simply perform the analyses on the MIR using standard techniques like def-use chains. (CSE and so forth are straightforward both with and without SSA, honestly.)

Exclude unwinding. Excluding unwinding from the MIR would allow us to elide annoying details like bounds and overflow checking. These are not particularly interesting to borrowck, so that is somewhat appealing. But that would mean that consumers of MIR would have to reconstruct the order of drops and so forth on unwinding paths, which would require them reasoning about scopes and other rather complex bits of information. Moreover, having all drops fully exposed in the MIR is likely helpful for better handling of dynamic drop and also for the rules collectively known as dropck, though all details there have not been worked out.

Expand the set of operands. The proposed MIR forces all rvalue operands to be lvalues. This means that integer constants and other “simple” things will wind up introducing a temporary. For example, translating x = 2+2 will generate code like:

tmp0 = 2
tmp1 = 2
x = tmp0 + tmp1

A more common case will be calls to statically known functions like x = foo(3), which desugars to a temporary and a constant reference:

tmp0 = foo;
tmp1 = 3
x = tmp0(tmp1)

There is no particular harm in such constants: it would be very easy to optimize them away when reducing to bitcode, and if we do not do so, a backend may do it. However, we could also expand the scope of operands to include both lvalues and some simple rvalues like constants. The main advantage of this is that it would reduce the total number of statements and hence might help with memory consumption.

Totally safe MIR. This MIR includes operations whose safety is not trivially type-checked (see the section on unchecked assertions above). We might design a higher-level MIR where those properties held by construction, or modify the MIR to thread “evidence” of some form that makes it easier to check that the properties hold. The former would make downstream code accommodate more complexity. The latter remains an option in the future but doesn’t seem to offer much practical advantage.

Unresolved questions

What additional info is needed to provide for good error messages? Currently the implementation only has spans on statements, not on lvalues or rvalues. We’ll have to experiment here. I expect we will probably wind up placing “debug info” on all lvalues, which includes not only a span but also a “translation” into terms the user understands. For example, in a closure, a reference to a by-reference upvar foo will be translated to something like *self.foo, and we would like that to be displayed to the user as just foo.

What additional info is needed for debuginfo? It may be that to generate good debuginfo we want to include additional information about control-flow or scoping.

Unsafe blocks. Should we layer unsafe in the MIR so that effect checking can be done on the CFG? It’s not the most natural way to do it, but it would make it fairly easy to support (e.g.) autoderef on unsafe pointers, since all the implicit operations are made explicit in the MIR. My hunch is that we can improve our HIR instead.

Summary

Change all functions dealing with reading “lines” to treat both ‘\n’ and ‘\r\n’ as a valid line-ending.

Motivation

The current behavior of these functions is to treat only ‘\n’ as a line-ending. This is surprising for programmers experienced in other languages. Many languages open files in a “text mode” by default, which means that when they iterate over the lines, they don’t have to worry about the two kinds of line-endings. Such programmers will be surprised to learn that they have to take care of such details themselves in Rust. Some may not even have heard of the distinction between the two styles of line-endings.

The current design also violates the “do what I mean” principle. Both ‘\r\n’ and ‘\n’ are widely used as line-separators. By talking about the concept of “lines”, it is clear that the current file (or buffer, really) is considered to be in text format. It is thus very reasonable to expect “lines” to apply to both ways of encoding lines in the binary data.

In particular, if the crate is developed on Linux or Mac, the programmer will probably have most of their input encoded with only ‘\n’ for the line-endings. They may use the functions talking about “lines”, and they will work all right. It is only when someone runs this crate on input that contains ‘\r\n’ that the bug will be uncovered. The author has personally run into this issue when reading line-by-line from stdin, with the program suddenly failing on Windows.

Detailed design

The following functions will have to be changed: BufRead::lines and str::lines. They both should treat ‘\r\n’ as marking the end of a line. This can be implemented, for example, by first splitting at ‘\n’ like now and then removing a trailing ‘\r’ right before returning data to the caller.
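
A minimal sketch of that trimming step (not the actual standard library implementation; trim_line_ending is an invented helper):

/// Strip a single trailing '\n' and, if present, the '\r' preceding it.
fn trim_line_ending(mut line: String) -> String {
    if line.ends_with('\n') {
        line.pop();
        if line.ends_with('\r') {
            line.pop();
        }
    }
    line
}

fn main() {
    assert_eq!(trim_line_ending("unix\n".to_string()), "unix");
    assert_eq!(trim_line_ending("windows\r\n".to_string()), "windows");
    assert_eq!(trim_line_ending("no newline".to_string()), "no newline");
}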

Furthermore, str::lines_any (the only function currently dealing with both kinds of line-endings) is deprecated, as it then becomes functionally equivalent to str::lines.

Drawbacks

This is a semantics-breaking change, changing the behavior of a released, stable API. However, as argued above, the new behavior is much less surprising than the old one - so one could consider this fixing a bug in the original implementation. There are alternatives available for the case that one really wants to split at ‘\n’ only, namely BufRead::split and str::split. However, BufRead::split does not iterate over String, but rather over Vec<u8>, so users have to insert an additional explicit call to String::from_utf8.
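
For illustration, here is what falling back to BufRead::split looks like in practice (note the ‘\r’ that is left in place and the explicit UTF-8 step):

use std::io::{BufRead, Cursor};

fn main() {
    let input = Cursor::new(b"one\ntwo\r\nthree".to_vec());
    // Splitting at '\n' only: each item is a Vec<u8>, so an explicit
    // conversion is needed to get back to String.
    for chunk in input.split(b'\n') {
        let line = String::from_utf8(chunk.unwrap()).unwrap();
        println!("{:?}", line); // "one", then "two\r", then "three"
    }
}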

Alternatives

There’s the obvious alternative of not doing anything. This leaves a gap in the features Rust provides to deal with text files, making it hard to treat both kinds of line-endings uniformly.

The second alternative is to add BufRead::lines_any, which works similarly to str::lines_any in that it deals with both ‘\n’ and ‘\r\n’. This provides all the necessary functionality, but it still leaves people with the need to choose one of the two functions - and potentially choosing the wrong one. In particular, the functions with the shorter, nicer name (the existing ones) will almost always not be the right choice.

Unresolved questions

None I can think of.

Summary

Type system changes to address the outlives relation with respect to projections, and to better enforce that all types are well-formed (meaning that they respect their declared bounds). The current implementation is variously unsound (#24622), inconvenient (#23442), and surprising (#21748, #25692). The changes are as follows:

  • Simplify the outlives relation to be syntactically based.
  • Specify improved rules for the outlives relation and projections.
  • Specify more precisely where WF bounds are enforced, covering several cases missing from the implementation.

The proposed changes here have been tested and found to cause only a modest number of regressions (about two dozen root regressions were previously found on crates.io; however, that run did not yet include all the provisions from this RFC; updated numbers coming soon). In order to minimize the impact on users, the plan is to introduce the changes in two stages:

  1. Initially, warnings will be issued for cases that violate the rules specified in this RFC. These warnings are not lints and cannot be silenced except by correcting the code such that it type-checks under the new rules.
  2. After one release cycle, those warnings will become errors.

Note that although the changes do cause regressions, they also cause some code (like that in #23442) which currently gets errors to compile successfully.

Motivation

TL;DR

This is a long detailed RFC that is attempting to specify in some detail aspects of the type system that were underspecified or buggily implemented before. This section just summarizes the effect on existing Rust code in terms of changes that may be required.

Warnings first, errors later. Although the changes described in this RFC are necessary for soundness (and many of them are straight-up bugfixes), there is some impact on existing code. Therefore the plan is to first issue warnings for a release cycle and then transition to hard errors, so as to ease the migration.

Associated type projections and lifetimes work more smoothly. The current rules for relating associated type projections (like T::Foo) and lifetimes are somewhat cumbersome. The newer rules are more flexible, so that e.g. we can deduce that T::Foo: 'a if T: 'a, and similarly that T::Foo is well-formed if T is well-formed. As a bonus, the new rules are also sound. ;)

Simpler outlives relation. The older definition for the outlives relation T: 'a was rather subtle. The new rule basically says that if all type/lifetime parameters appearing in the type T outlive 'a, then T: 'a holds (though there can also be other ways for us to decide that T: 'a is valid, such as in-scope where clauses). So for example fn(&'x X): 'a holds if 'x: 'a and X: 'a (presuming that X is a type parameter). The older rules were based on what kind of data was actually reachable, and hence accepted this relation unconditionally (since no data of type &'x X is reachable through a function pointer). This change primarily affects struct declarations, since they may now require additional outlives bounds:

// OK now, but after this RFC requires `X: 'a`:
struct Foo<'a, X> {
    f: fn(&'a X) // (because of this field)
}

More types are sanity checked. Generally Rust requires that if you have a type like SomeStruct<T>, then whatever where clauses are declared on SomeStruct must hold for T (this is called being “well-formed”). For example, if SomeStruct is declared like so:

struct SomeStruct<T:Eq> { .. }

then this implies that SomeStruct<f32> is ill-formed, since f32 does not implement Eq (just PartialEq). However, the current compiler doesn’t check this in associated type definitions:

impl Iterator for SomethingElse {
    type Item = SomeStruct<f32>; // accepted now, not after this RFC
}

Similarly, WF checking was skipped for trait object types and fn arguments. This means that fn(SomeStruct<f32>) would be considered well-formed today, though attempting to call the function would be an error. Under this RFC, that fn type is not well-formed (though sometimes when there are higher-ranked regions, WF checking may still be deferred until the point where the fn is called).

There are a few other places where similar requirements were being overlooked before but will now be enforced. For example, a number of traits like the following were found in the wild:

trait Foo {
    // currently accepted, but should require that Self: Sized
    fn method(&self, value: Option<Self>);
}

To be well-formed, an Option<T> type requires that T: Sized. In this case, though, T=Self, and Self is not Sized by default. Therefore, this trait should be declared trait Foo: Sized to be legal. The compiler is currently attempting to enforce these rules, but many cases were overlooked in practice.

Impact on crates.io

This RFC has been largely implemented and tested against crates.io. A total of 43 (root) crates are affected by the changes. Interestingly, the vast majority of warnings/errors that occur are not due to new rules introduced by this RFC, but rather due to older rules being more correctly enforced.

Of the affected crates, 40 are receiving future compatibility warnings and hence continue to build for the time being. In the remaining three cases, it was not possible to isolate the effects of the new rules, and hence the compiler reports an error rather than a future compatibility warning.

What follows is a breakdown of the reasons that crates on crates.io are receiving errors or warnings. Each row in the table corresponds to one of the explanations above.

| Problem                       | Future-compat. warnings | Errors |
|-------------------------------|-------------------------|--------|
| More types are sanity checked | 35                      | 3      |
| Simpler outlives relation     | 5                       | 0      |

As you can see, by far the largest source of problems is simply that we are now sanity checking more types. This was always the intent, but there were bugs in the compiler that led to it either skipping checking altogether or only partially applying the rules. It is interesting to drill down a bit further into the 38 warnings/errors that resulted from more types being sanity checked in order to see what kinds of mistakes are being caught:

| Case | Problem              | Number |
|------|----------------------|--------|
| 1    | Self: Sized required | 26     |
| 2    | Foo: Bar required    | 11     |
| 3    | Not object safe      | 1      |

An example of each case follows:

Cases 1 and 2. In the compiler today, types appearing in trait methods are incompletely checked. This leads to a lot of traits with insufficient bounds. By far the most common example was that the Self parameter would appear in a context where it must be sized, usually when it is embedded within another type (e.g., Option<Self>). Here is an example:

trait Test {
    fn test(&self) -> Option<Self>;
    //                ~~~~~~~~~~~~
    //            Incorrectly permitted before.
}

Because Option<T> requires that T: Sized, this trait should be declared as follows:

trait Test: Sized {
    fn test(&self) -> Option<Self>;
}

Case 2. Case 2 is the same as case 1, except that the missing bound is some trait other than Sized, or in some cases an outlives bound like T: 'a.

Case 3. The compiler currently permits non-object-safe traits to be used as types, even if objects could never actually be created (#21953).

Projections and the outlives relation

RFC 192 introduced the outlives relation T: 'a and described the rules that are used to decide when one type outlives a lifetime. In particular, the RFC describes rules that govern how the compiler determines what kind of borrowed data may be “hidden” by a generic type. For example, given this function signature:

fn foo<'a,I>(x: &'a I)
    where I: Iterator
{ ... }

the compiler is able to use implied region bounds (described more below) to automatically determine that:

  • all borrowed content in the type I outlives the function body;
  • all borrowed content in the type I outlives the lifetime 'a.

When associated types were introduced in RFC 195, some new rules were required to decide when an “outlives relation” involving a projection (e.g., I::Item: 'a) should hold. The initial rules were very conservative. This led to the rules from RFC 192 being adapted to cover associated type projections like I::Item. Unfortunately, these adapted rules are not ideal, and can still lead to annoying errors in some situations. Finding a better solution has been on the agenda for some time.

Simultaneously, we realized in #24622 that the compiler had a bug that caused it to erroneously assume that every projection like I::Item outlived the current function body, just as it assumes that type parameters like I outlive the current function body. This bug can lead to unsound behavior. Unfortunately, simply implementing the naive fix for #24622 exacerbates the shortcomings of the current rules for projections, causing widespread compilation failures in all sorts of reasonable and obviously correct code.

This RFC describes modifications to the type system that both restore soundness and make working with associated types more convenient in some situations. The changes are largely but not completely backwards compatible.

Well-formed types

A type is considered well-formed (WF) if it meets some simple correctness criteria. For builtin types like &'a T or [T], these criteria are built into the language. For user-defined types like a struct or an enum, the criteria are declared in the form of where clauses. In general, all types that appear in the source and elsewhere should be well-formed.

For example, consider this type, which combines a reference to a hashmap and a vector of additional key/value pairs:

struct DeltaMap<'a, K, V> where K: Hash + 'a, V: 'a {
    base_map: &'a mut HashMap<K,V>,
    additional_values: Vec<(K,V)>
}

Here, the WF criteria for DeltaMap<K,V> are as follows:

  • K: Hash, because of the where-clause,
  • K: 'a, because of the where-clause,
  • V: 'a, because of the where-clause
  • K: Sized, because of the implicit Sized bound
  • V: Sized, because of the implicit Sized bound

Let’s look at those K:'a bounds a bit more closely. If you leave them out, you will find that the structure definition above does not type-check. This is due to the requirement that the types of all fields in a structure definition must be well-formed. In this case, the field base_map has the type &'a mut HashMap<K,V>, and this type is only valid if K: 'a and V: 'a hold. Since we don’t know what K and V are, we have to surface this requirement in the form of a where-clause, so that users of the struct know that they must maintain this relationship in order for the struct to be internally coherent.

An aside: explicit WF requirements on types

You might wonder why you have to write K:Hash and K:'a explicitly. After all, they are obvious from the types of the fields. The reason is that we want to make it possible to check whether a type like DeltaMap<'foo,T,U> is well-formed without having to inspect the types of the fields – that is, in the current design, the only information that we need to use to decide if DeltaMap<'foo,T,U> is well-formed is the set of bounds and where-clauses.

This has real consequences on usability. It would be possible for the compiler to infer bounds like K:Hash or K:'a, but the origin of the bound might be quite remote. For example, we might have a series of types like:

struct Wrap1<'a,K>(Wrap2<'a,K>);
struct Wrap2<'a,K>(Wrap3<'a,K>);
struct Wrap3<'a,K>(DeltaMap<'a,K,K>);

Now, for Wrap1<'foo,T> to be well-formed, T:'foo and T:Hash must hold, but this is not obvious from the declaration of Wrap1. Instead, you must trace deeply through its fields to find out that this obligation exists.

Implied lifetime bounds

To help avoid undue annotation, Rust relies on implied lifetime bounds in certain contexts. Currently, this is limited to fn bodies. The idea is that for functions, we can make callers do some portion of the WF validation, and let the callees just assume it has been done already. (This is in contrast to the type definition, where we required that the struct itself declares all of its requirements up front in the form of where-clauses.)

To see this in action, consider a function that uses a DeltaMap:

fn foo<'a,K:Hash,V>(d: DeltaMap<'a,K,V>) { ... }

You’ll notice that there are no K:'a or V:'a annotations required here. This is due to implied lifetime bounds. Unlike structs, a function’s caller must examine not only the explicit bounds and where-clauses, but also the argument and return types. When there are generic type/lifetime parameters involved, the caller is in charge of ensuring that those types are well-formed. (This is in contrast with type definitions, where the type is in charge of figuring out its own requirements and listing them in one place.)

As the name “implied lifetime bounds” suggests, we currently limit implied bounds to region relationships. That is, we will implicitly derive a bound like K:'a or V:'a, but not K:Hash – this must still be written manually. It might be a good idea to change this, but that would be the topic of a separate RFC.

Currently, implied bounds are limited to fn bodies. This RFC expands the use of implied bounds to cover impl definitions as well, since otherwise the annotation burden is quite painful. More on this in the next section.

NB. There is an additional problem concerning the interaction of implied bounds and contravariance (#25860). To better separate the issues, this will be addressed in a follow-up RFC that should appear shortly.

Missing WF checks

Unfortunately, the compiler currently fails to enforce WF in several important cases. For example, the following program is accepted:

struct MyType<T:Copy> { t: T }

trait ExampleTrait {
    type Output;
}

struct ExampleType;

impl ExampleTrait for ExampleType {
    type Output = MyType<Box<i32>>;
    //            ~~~~~~~~~~~~~~~~
    //                   |
    //   Note that `Box<i32>` is not `Copy`!
}

However, if we simply naively add the requirement that associated types must be well-formed, this results in a large annotation burden (see e.g. PR 25701). For example, in practice, many iterator implementations break due to region relationships:

impl<'a, T> IntoIterator for &'a LinkedList<T> {
   type Item = &'a T;
   ...
}

The problem here is that for &'a T to be well-formed, T: 'a must hold, but that is not specified in the where clauses. This RFC proposes using implied bounds to address this concern – specifically, every impl is permitted to assume that all types which appear in the impl header (trait reference) are well-formed, and in turn each “user” of an impl will validate this requirement whenever they project out of a trait reference (e.g., to do a method call, or normalize an associated type).
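
To illustrate the division of labor, a user of the impl above must prove the implied requirement at the use site; the implied bounds of an ordinary fn make that automatic (a sketch; first is an invented helper):

use std::collections::LinkedList;

fn first<'a, T>(list: &'a LinkedList<T>) -> Option<&'a T> {
    // Using the IntoIterator impl for &'a LinkedList<T> requires its trait
    // reference to be well-formed, i.e. T: 'a -- which the implied bounds
    // from this fn's argument types already provide.
    list.into_iter().next()
}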

Detailed design

This section dives into detail on the proposed type rules.

A little type grammar

We extend the type grammar from RFC 192 with projections and slice types:

T = scalar (i32, u32, ...)        // Boring stuff
  | X                             // Type variable
  | Id<P0..Pn>                    // Nominal type (struct, enum)
  | &r T                          // Reference (mut doesn't matter here)
  | O0..On+r                      // Object type
  | [T]                           // Slice type
  | for<r..> fn(T1..Tn) -> T0     // Function pointer
  | <P0 as Trait<P1..Pn>>::Id     // Projection
P = r                             // Region name
  | T                             // Type
O = for<r..> TraitId<P1..Pn>      // Object type fragment
r = 'x                            // Region name

We’ll use this to describe the rules in detail.

A quick note on terminology: an “object type fragment” is part of an object type: so if you have Box<FnMut()+Send>, FnMut() and Send are object type fragments. Object type fragments are identical to full trait references, except that they do not have a self type (no P0).

Syntactic definition of the outlives relation

The outlives relation is defined in purely syntactic terms as follows. These are inference rules written in a primitive ASCII notation. :) As part of defining the outlives relation, we need to track the set of lifetimes that are bound within the type we are looking at. Let’s call that set R=<r0..rn>. Initially, this set R is empty, but it will grow as we traverse through types like fns or object fragments, which can bind region names via for<..>.

Simple outlives rules

Here are the rules covering the simple cases, where no type parameters or projections are involved:

OutlivesScalar:
  --------------------------------------------------
  R ⊢ scalar: 'a

OutlivesNominalType:
  ∀i. R ⊢ Pi: 'a
  --------------------------------------------------
  R ⊢ Id<P0..Pn>: 'a

OutlivesReference:
  R ⊢ 'x: 'a
  R ⊢ T: 'a
  --------------------------------------------------
  R ⊢ &'x T: 'a

OutlivesObject:
  ∀i. R ⊢ Oi: 'a
  R ⊢ 'x: 'a
  --------------------------------------------------
  R ⊢ O0..On+'x: 'a

OutlivesFunction:
  ∀i. R,r.. ⊢ Ti: 'a
  --------------------------------------------------
  R ⊢ for<r..> fn(T1..Tn) -> T0: 'a

OutlivesFragment:
  ∀i. R,r.. ⊢ Pi: 'a
  --------------------------------------------------
  R ⊢ for<r..> TraitId<P0..Pn>: 'a

Outlives for lifetimes

The outlives relation for lifetimes depends on whether the lifetime in question was bound within a type or not. In the usual case, we decide the relationship between two lifetimes by consulting the environment, or using the reflexive property. Lifetimes representing scopes within the current fn have a relationship derived from the code itself, while lifetime parameters have relationships defined by where-clauses and implied bounds.

OutlivesRegionEnv:
  'x ∉ R               // not a bound region
  ('x: 'a) in Env      // derivable from where-clauses etc
  --------------------------------------------------
  R ⊢ 'x: 'a

OutlivesRegionReflexive:
  --------------------------------------------------
  R ⊢ 'a: 'a

OutlivesRegionTransitive:
  R ⊢ 'a: 'c
  R ⊢ 'c: 'b
  --------------------------------------------------
  R ⊢ 'a: 'b

For higher-ranked lifetimes, we simply ignore the relation, since the lifetime is not yet known. This means for example that for<'a> fn(&'a i32): 'x holds, even though we do not yet know what region 'a is (and in fact it may be instantiated many times with different values on each call to the fn).

OutlivesRegionBound:
  'x ∈ R               // bound region
  --------------------------------------------------
  R ⊢ 'x: 'a

Outlives for type parameters

For type parameters, the only way to draw “outlives” conclusions is to find information in the environment (which is being threaded implicitly here, since it is never modified). In terms of a Rust program, this means both explicit where-clauses and implied bounds derived from the signature (discussed below).

OutlivesTypeParameterEnv:
  X: 'a in Env
  --------------------------------------------------
  R ⊢ X: 'a

Outlives for projections

Projections have the most possibilities. First, we may find information in the in-scope where clauses, as with type parameters, but we can also consult the trait definition to find bounds (consider an associated type declared like type Foo: 'static). These rules only apply if there are no higher-ranked lifetimes in the projection; for simplicity’s sake, we encode that by requiring an empty list of higher-ranked lifetimes. (This is somewhat stricter than necessary, but reflects the behavior of my prototype implementation.)

OutlivesProjectionEnv:
  <P0 as Trait<P1..Pn>>::Id: 'b in Env
  <> ⊢ 'b: 'a
  --------------------------------------------------
  <> ⊢ <P0 as Trait<P1..Pn>>::Id: 'a

OutlivesProjectionTraitDef:
  WC = [Xi => Pi] WhereClauses(Trait)
  <P0 as Trait<P1..Pn>>::Id: 'b in WC
  <> ⊢ 'b: 'a
  --------------------------------------------------
  <> ⊢ <P0 as Trait<P1..Pn>>::Id: 'a

All the rules covered so far already exist today. This last rule, however, is not only new, it is the crucial insight of this RFC. It states that if all the components in a projection’s trait reference outlive 'a, then the projection must outlive 'a:

OutlivesProjectionComponents:
  ∀i. R ⊢ Pi: 'a
  --------------------------------------------------
  R ⊢ <P0 as Trait<P1..Pn>>::Id: 'a

Given the importance of this rule, it’s worth spending a bit of time discussing it in more detail. The following explanation is fairly informal. A more detailed look can be found in the appendix.

Let’s begin with a concrete example of an iterator type, like std::vec::Iter<'a,T>. We are interested in the projection of Iterator::Item:

<Iter<'a,T> as Iterator>::Item

or, in the more succinct (but potentially ambiguous) form:

Iter<'a,T>::Item

Since I’m going to be talking a lot about this type, let’s just call it <PROJ> for now. We would like to determine whether <PROJ>: 'x holds.

Now, the easy way to solve <PROJ>: 'x would be to normalize <PROJ> by looking at the relevant impl:

impl<'b,U> Iterator for Iter<'b,U> {
    type Item = &'b U;
    ...
}

From this impl, we can conclude that <PROJ> == &'a T, and thus reduce <PROJ>: 'x to &'a T: 'x, which in turn holds if 'a: 'x and T: 'x (from the rule OutlivesReference).

But often we are in a situation where we can’t normalize the projection (for example, a projection like I::Item where we only know that I: Iterator). What can we do then? The rule OutlivesProjectionComponents says that if we can conclude that every lifetime/type parameter Pi to the trait reference outlives 'x, then we know that a projection from those parameters outlives 'x. In our example, the trait reference is <Iter<'a,T> as Iterator>, so that means that if the type Iter<'a,T> outlives 'x, then the projection <PROJ> outlives 'x. Now, you can see that this trivially reduces to the same result as the normalization, since Iter<'a,T>: 'x holds if 'a: 'x and T: 'x (from the rule OutlivesNominalType).
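
The rule also covers cases where normalization is impossible. In the following sketch (Holder is an invented type that demands its parameter outlive 'x), nothing is known about I beyond its bounds:

use std::marker::PhantomData;

struct Holder<'x, T: 'x> {
    items: Vec<T>,
    marker: PhantomData<&'x ()>,
}

fn hold<'x, I>(iter: I) -> Holder<'x, I::Item>
where
    I: Iterator + 'x,
{
    // Holder requires I::Item: 'x. The projection cannot be normalized
    // here, but OutlivesProjectionComponents derives I::Item: 'x from the
    // fact that every component of <I as Iterator> (namely I) outlives 'x.
    Holder { items: iter.collect(), marker: PhantomData }
}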

OK, so we’ve seen that applying the rule OutlivesProjectionComponents comes up with the same result as normalizing (at least in this case), and that’s a good sign. But what is the basis of the rule?

The basis of the rule comes from reasoning about the impl that we used to do normalization. Let’s consider that impl again, but this time hide the actual type that was specified:

impl<'b,U> Iterator for Iter<'b,U> {
    type Item = /* <TYPE> */;
    ...
}

So when we normalize <PROJ>, we obtain the result by applying some substitution Θ to <TYPE>. This substitution is a mapping from the lifetime/type parameters on the impl to some specific values, such that <PROJ> == Θ <Iter<'b,U> as Iterator>::Item. In this case, that means Θ would be ['b => 'a, U => T] (and of course <TYPE> would be &'b U, but we’re not supposed to rely on that).

The key idea for the OutlivesProjectionComponents is that the only way that <TYPE> can fail to outlive 'x is if either:

  • it names some lifetime parameter 'p where 'p: 'x does not hold; or,
  • it names some type parameter X where X: 'x does not hold.

Now, the only way that <TYPE> can refer to a parameter P is if it is brought in by the substitution Θ. So, if we can just show that all the types/lifetimes in the range of Θ outlive 'x, then we know that Θ <TYPE> outlives 'x.

Put yet another way: imagine that you have an impl with no parameters, like:

impl Iterator for Foo {
    type Item = /* <TYPE> */;
}

Clearly, whatever <TYPE> is, it can only refer to the lifetime 'static. So <Foo as Iterator>::Item: 'static holds. We know this is true without ever knowing what <TYPE> is – we just need to see that the trait reference <Foo as Iterator> doesn’t have any lifetimes or type parameters in it, and hence the impl cannot refer to any lifetime or type parameters.

Implementation complications

The current region inference code only permits constraints of the form:

C = r0: r1
  | C AND C

This is convenient because a simple fixed-point iteration suffices to find the minimal regions which satisfy the constraints.

Unfortunately, this constraint model does not scale to the outlives rules for projections. Consider a trait reference like <T as Trait<'X>>::Item: 'Y, where 'X and 'Y are both region variables whose value is being inferred. At this point, there are several inference rules which could potentially apply. Let us assume that there is a where-clause in the environment like <T as Trait<'a>>::Item: 'b. In that case, if 'X == 'a and 'b: 'Y, then we could employ the OutlivesProjectionEnv rule. This would correspond to a constraint set like:

C = 'X:'a AND 'a:'X AND 'b:'Y

Otherwise, if T: 'a and 'X: 'Y, then we could use the OutlivesProjectionComponents rule, which would require a constraint set like:

C = C1 AND 'X:'Y

where C1 is the constraint set for T:'a.

As you can see, these two rules yielded distinct constraint sets. Ideally, we would combine them with an OR constraint, but no such constraint is available. Adding such a constraint complicates how inference works, since a fixed-point iteration is no longer sufficient.

This complication is unfortunate, but to a large extent already exists with where-clauses and trait matching (see e.g. #21974). (Moreover, it seems to be inherent to the concept of associated types, since they take several inputs (the parameters to the trait) which may or may not be related to the actual type definition in question.)

For the time being, the current implementation takes a pragmatic approach based on heuristics. It first examines whether any region bounds are declared in the trait and, if so, prefers to use those. Otherwise, if there are region variables in the projection, then it falls back to the OutlivesProjectionComponents rule. This is always sufficient but may be stricter than necessary. If there are no region variables in the projection, then it can simply run inference to completion and check each of the other two rules in turn. (It is still necessary to run inference because the bound may be a region variable.) So far this approach has sufficed for all situations encountered in practice. Eventually, we should extend the region inferencer to a richer model that includes “OR” constraints.

The WF relation

This section describes the “well-formed” relation. In previous RFCs, this was combined with the outlives relation. We separate it here for reasons that shall become clear when we discuss WF conditions on impls.

The WF relation is really pretty simple: it just says that a type is “self-consistent”. Typically, this would include validating scoping (i.e., that you don’t refer to a type parameter X if you didn’t declare one), but we’ll take those basic conditions for granted.

WfScalar:
  --------------------------------------------------
  R ⊢ scalar WF

WfParameter:
  --------------------------------------------------
  R ⊢ X WF                  // where X is a type parameter

WfTuple:
  ∀i. R ⊢ Ti WF
  ∀i<n. R ⊢ Ti: Sized       // the *last* field may be unsized
  --------------------------------------------------
  R ⊢ (T0..Tn) WF

WfNominalType:
  ∀i. R ⊢ Pi WF             // parameters must be WF,
  C = WhereClauses(Id)      // and the conditions declared on Id must hold...
  R ⊢ [P0..Pn] C            // ...after substituting parameters, of course
  --------------------------------------------------
  R ⊢ Id<P0..Pn> WF

WfReference:
  R ⊢ T WF                  // T must be WF
  R ⊢ T: 'x                 // T must outlive 'x
  --------------------------------------------------
  R ⊢ &'x T WF

WfSlice:
  R ⊢ T WF
  R ⊢ T: Sized
  --------------------------------------------------
  R ⊢ [T] WF

WfProjection:
  ∀i. R ⊢ Pi WF             // all components well-formed
  R ⊢ <P0: Trait<P1..Pn>>   // the projection itself is valid
  --------------------------------------------------
  R ⊢ <P0 as Trait<P1..Pn>>::Id WF

WF checking and higher-ranked types

There are two places in Rust where types can introduce lifetime names into scope: fns and trait objects. These have somewhat different rules than the rest, simply because they modify the set R of bound lifetime names. Let’s start with the rule for fn types:

WfFn:
  ∀i. R, r.. ⊢ Ti WF
  --------------------------------------------------
  R ⊢ for<r..> fn(T1..Tn) -> T0 WF

Basically, this rule adds the bound lifetimes to the set R and then checks whether the argument and return types are well-formed. We’ll see in the next section that this means any requirements on those types which reference bound identifiers are simply assumed to hold, while the remainder are checked. For example, if we have a type HashSet<K> which requires that K: Hash, then fn(HashSet<NoHash>) would be illegal since NoHash: Hash does not hold, but for<'a> fn(HashSet<&'a NoHash>) would be legal, since &'a NoHash: Hash involves a bound region 'a. See the “Checking Conditions” section for details.

Note that fn types do not require that T0..Tn be Sized. This is intentional. The limitation that only sized values can be passed as arguments (or returned) is enforced at the time when a fn is actually called, as well as in actual fn definitions, but is not considered fundamental to fn types themselves. There are several reasons for this. For one thing, it’s forwards compatible with passing DST by value. For another, it means that non-defaulted trait methods do not have to show that their argument types are Sized (this will be checked in the implementations, where more types are known). Since the implicit Self type parameter is not Sized by default (RFC 546), requiring that argument types be Sized in trait definitions proves to be an annoying annotation burden.

The object type rule is similar, though it includes an extra clause:

WfObject:
  rᵢ = union of implied region bounds from Oi
  ∀i. rᵢ: r
  ∀i. R ⊢ Oi WF
  --------------------------------------------------
  R ⊢ O0..On+r WF

The first two clauses here state that the explicit lifetime bound r must be an approximation for the implicit bounds rᵢ derived from the trait definitions. That is, if you have a trait definition like

trait Foo: 'static { ... }

and a trait object like Foo+'x, then we require that 'static: 'x (which is true, clearly, but in some cases the implicit bounds from traits are not 'static but rather some named lifetime).

The next clause states that all object type fragments must be WF. An object type fragment is WF if its components are WF:

WfObjectFragment:
  ∀i. R, r.. ⊢ Pi
  TraitId is object safe
  --------------------------------------------------
  R ⊢ for<r..> TraitId<P1..Pn>

Note that we don’t check the where clauses declared on the trait itself. These are checked when the object is created. The reason not to check them here is because the Self type is not known (this is an object, after all), and hence we can’t check them in general. (But see unresolved questions.)

WF checking a trait reference

In some contexts, we want to check a trait reference, such as the ones that appear in where clauses or type parameter bounds. The rules for this are given here:

WfTraitReference:
  ∀i. R, r.. ⊢ Pi
  C = WhereClauses(Id)      // and the conditions declared on Id must hold...
  R, r0...rn ⊢ [P0..Pn] C   // ...after substituting parameters, of course
  --------------------------------------------------
  R ⊢ for<r..> P0: TraitId<P1..Pn>

The rules are fairly straightforward. The components must be well-formed, and any where-clauses declared on the trait itself must hold.

Checking conditions

In various rules above, we have rules that declare that a where-clause must hold, which have the form R ⊢ WhereClause. Here, R represents the set of bound regions. It may well be that WhereClause does not use any of the regions in R. In that case, we can ignore the bound regions and simply check that WhereClause holds. But if WhereClause does refer to regions in R, then we simply consider R ⊢ WhereClause to hold. Those conditions will be checked later when the bound lifetimes are instantiated (either through a call or a projection).

In practical terms, this means that if I have a type like:

struct Iterator<'a, T:'a> { ... }

and a function type like for<'a> fn(i: Iterator<'a, T>), then this type is considered well-formed without having to show that T: 'a holds. In terms of the rules, this is because we would wind up with a constraint like 'a ⊢ T: 'a.

However, if I have a type like

struct Foo<'a, T:Eq> { .. }

and a function type like for<'a> fn(f: Foo<'a, T>), I still must show that T: Eq holds for that function to be well-formed. This is because the condition which is generated will be 'a ⊢ T: Eq, but 'a is not referenced there.

Implied bounds

Implied bounds can be derived from the WF and outlives relations. The implied bounds from a type T are given by expanding the requirements that T: WF. Since we currently limit ourselves to implied region bounds, we are interested in extracting requirements of the form:

  • 'a:'r, where two regions must be related;
  • X:'r, where a type parameter X outlives a region; or,
  • <T as Trait<..>>::Id: 'r, where a projection outlives a region.

Some caution is required around projections when deriving implied bounds. If we encounter a requirement that e.g. X::Id: 'r, we cannot for example deduce that X: 'r must hold. This is because while X: 'r is sufficient for X::Id: 'r to hold, it is not necessary for X::Id: 'r to hold. So we can only conclude that X::Id: 'r holds, and not X: 'r.
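
A quick sketch of why the reverse deduction would be wrong (the types here are invented): X::Id: 'static can hold by declaration even when X: 'static does not.

trait Key {
    type Id: 'static; // every impl's Id must be 'static
}

struct Ephemeral<'a>(&'a u32);

impl<'a> Key for Ephemeral<'a> {
    type Id = u32; // satisfies Id: 'static even though Self does not
}

// <Ephemeral<'a> as Key>::Id: 'static holds for any 'a, yet
// Ephemeral<'a>: 'static holds only when 'a == 'static.
fn demo<'a>(x: Ephemeral<'a>) -> <Ephemeral<'a> as Key>::Id {
    *x.0
}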

When should we check the WF relation and under what conditions?

Currently the compiler performs WF checking in a somewhat haphazard way: in some cases (such as impls), it omits checking WF, but in others (such as fn bodies), it checks WF when it should not have to. Partly that is because the compiler currently conflates the WF and outlives relations into one, rather than separating them as described here.

Constants/statics. The type of a constant or static can be checked for WF in an empty environment.

Struct/enum declarations. In a struct/enum declaration, we should check that all field types are WF, given the bounds and where-clauses from the struct declaration. Also check that where-clauses are well-formed.

Function items. For function items, the environment consists of all the where-clauses from the fn, as well as implied bounds derived from the fn’s argument types. These are then used to check that the following are well-formed:

  • argument types;
  • return type;
  • where clauses;
  • types of local variables.

These WF requirements are imposed at each fn or associated fn definition (as well as within trait items).

Trait impls. In a trait impl, we assume that all types appearing in the impl header are well-formed. This means that the initial environment for an impl consists of the impl where-clauses and implied bounds derived from its header. Example: Given an impl like impl<'a,T> SomeTrait for &'a T, the environment would be T: Sized (from the default Sized bound on T) and T: 'a (implied bound derived from &'a T). This environment is used as the starting point for checking the items:

  • Where-clauses declared on the trait must be WF.
  • Associated types must be WF in the trait environment.
  • The types of associated constants must be WF in the trait environment.
  • Associated fns are checked just like regular function items, but with the additional implied bounds from the impl signature.

Inherent impls. In an inherent impl, we can assume that the self type is well-formed, but otherwise check the methods as if they were normal functions. We must check that all items are well-formed, along with the where clauses declared on the impl.

Trait declarations. Trait declarations (and defaults) are checked in the same fashion as impls, except that there are no implied bounds from the impl header. We must check that all items are well-formed, along with the where clauses declared on the trait.

Type aliases. Type aliases are currently not checked for WF, since they are considered transparent to type-checking. It’s not clear that this is the best policy, but it seems harmless, since the WF rules will still be applied to the expanded version. See the Unresolved Questions for some discussion on the alternatives here.

Several points in the list above made use of implied bounds based on assuming that various types were WF. We have to ensure that those bounds are checked on the reciprocal side, as follows:

Fns being called. Before calling a fn, we check that its argument and return types are WF. This check takes place after all higher-ranked lifetimes have been instantiated. Checking the argument types ensures that the implied bounds due to argument types are correct. Checking the return type ensures that the resulting type of the call is WF.

Method calls, “UFCS” notation for fns and constants. These are the two ways to project a value out of a trait reference. A method call or UFCS resolution will require that the trait reference is WF according to the rules given above.

Normalizing associated type references. Whenever a projection type like T::Foo is normalized, we will require that the trait reference is WF.

Drawbacks

N/A

Alternatives

I’m not aware of any appealing alternatives.

Unresolved questions

Best policy for type aliases. The current policy is not to check type aliases, since they are transparent to type-checking, and hence their expansion can be checked instead. This is coherent, though somewhat confusing in terms of the interaction with projections, since we frequently cannot resolve projections without at least minimal bounds (i.e., type IteratorAndItem<T:Iterator> = (T::Item, T)). Still, full-checking of WF on type aliases seems to just mean more annotation with little benefit. It might be nice to keep the current policy and later, if/when we adopt a more full notion of implied bounds, rationalize it by saying that the suitable bounds for a type alias are implied by its expansion.

For trait object type fragments, should we check WF conditions when we can? For example, if you have:

trait HashSet<K:Hash>

should an object like Box<HashSet<NotHash>> be illegal? It seems like that would be in line with our “best effort” approach to bound regions, so probably yes.

Appendix

The informal explanation glossed over some details. This appendix tries to be a bit more thorough with how it is that we can conclude that a projection outlives 'a if its inputs outlive 'a. To start, let’s specify the projection <PROJ> as:

<P0 as Trait<P1...Pn>>::Id

where P can be a lifetime or type parameter as appropriate.

Then we know that there exists some impl of the form:

impl<X0..Xn> Trait<Q1..Qn> for Q0 {
    type Id = T;
}

Here again, X can be a lifetime or type parameter name, and Q can be any lifetime or type parameter.

Let Θ be a suitable substitution [Xi => Ri] such that ∀i. Θ Qi == Pi (in other words, so that the impl applies to the projection). Then the normalized form of <PROJ> is Θ T. Note that because trait matching is invariant, the types must be exactly equal.

RFC 447 and #24461 require that a parameter Xi can only appear in T if it is constrained by the trait reference <Q0 as Trait<Q1..Qn>>. The full definition of constrained appears below, but informally it means roughly that Xi appears in Q0..Qn somewhere outside of a projection. Let’s call the constrained set of parameters Constrained(Q0..Qn).

Recall the rule OutlivesProjectionComponents:

OutlivesProjectionComponents:
  ∀i. R ⊢ Pi: 'a
  --------------------------------------------------
  R ⊢ <P0 as Trait<P1..Pn>>::Id: 'a

We aim to show that ∀i. R ⊢ Pi: 'a implies R ⊢ (Θ T): 'a, which implies that this rule is a sound approximation for normalization. The argument follows from two lemmas (“proofs” for these lemmas are sketched below):

  1. First, we show that if R ⊢ Pi: 'a, then every “subcomponent” P' of Pi outlives 'a. The idea here is that each variable Xi from the impl will match against and extract some subcomponent P' of Pi, and we wish to show that the subcomponent P' extracted by Xi outlives 'a.
  2. Then we will show that the type Θ T outlives 'a if, for each of the in-scope parameters Xi, Θ Xi: 'a.

Definition 1. Constrained(T) defines the set of type/lifetime parameters that are constrained by a type. This set is found just by recursing over and extracting all subcomponents except for those found in a projection. This is because a type like X::Foo does not constrain what type X can take on, rather it uses X as an input to compute a result:

Constrained(scalar) = {}
Constrained(X) = {X}
Constrained(&'x T) = {'x} | Constrained(T)
Constrained(O0..On+'x) = Union(Constrained(Oi)) | {'x}
Constrained([T]) = Constrained(T)
Constrained(for<..> fn(T1..Tn) -> T0) = Union(Constrained(Ti))
Constrained(<P0 as Trait<P1..Pn>>::Id) = {} // empty set

Definition 2. Constrained('a) = {'a}. In other words, a lifetime reference just constrains itself.
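
For example, treating a nominal type like Vec<X> as the union of its parameters’ constrained sets, a worked instance of these rules would be:

Constrained(&'x Vec<X>) = {'x, X}
Constrained(&'x <X as Trait>::Id) = {'x}   // X appears only inside the projection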

Lemma 1: Given R ⊢ P: 'a, P = [X => P'] Q, and X ∈ Constrained(Q), then R ⊢ P': 'a. Proceed by induction and by cases over the form of P:

  1. If P is a scalar or parameter, there are no subcomponents, so P'=P.
  2. For nominal types, references, objects, and function types, either P'=P or P' is some subcomponent of P. The appropriate “outlives” rules all require that all subcomponents outlive 'a, and hence the conclusion follows by induction.
  3. If P' is a projection, then P'=P; otherwise, Q itself would have to be a projection, in which case Constrained(Q) would be the empty set, contradicting X ∈ Constrained(Q).

Lemma 2: Given that FV(T) ⊆ X and ∀i. Ri: 'a, then [X => R] T: 'a. In other words, if all the type/lifetime parameters that appear in a type outlive 'a, then the type outlives 'a. Follows by inspection of the outlives rules.

Edit History

RFC1592 - amend to require that tuple fields be sized

Summary

Promote ! to be a full-fledged type equivalent to an enum with no variants.

Motivation

To understand the motivation for this it’s necessary to understand the concept of empty types. An empty type is a type with no inhabitants, ie. a type for which there is nothing of that type. For example, consider the type enum Never {}. This type has no constructors and therefore can never be instantiated. It is empty, in the sense that there are no values of type Never. Note that Never is not equivalent to () or struct Foo {}, each of which has exactly one inhabitant. Empty types have some interesting properties that may be unfamiliar to programmers who have not encountered them before.

  • They never exist at runtime. Because there is no way to create one.

  • They have no logical machine-level representation. One way to think about this is to consider the number of bits required to store a value of a given type. A value of type bool can be in two possible states (true and false). Therefore to specify which state a bool is in we need log2(2) ==> 1 bit of information. A value of type () can only be in one possible state (()). Therefore to specify which state a () is in we need log2(1) ==> 0 bits of information. A value of type Never has no possible states it can be in. Therefore to ask which of these states it is in is a meaningless question and we have log2(0) ==> undefined (or -∞). Having no representation is not problematic as safe code never has reason nor ability to handle data of an empty type (as such data can never exist). In practice, Rust currently treats empty types as having size 0.

  • Code that handles them never executes. Because there is no value that it could execute with. Therefore, having a Never in scope is a static guarantee that a piece of code will never be run.

  • They represent the return type of functions that don’t return. For a function that never returns, such as exit, the set of all values it may return is the empty set. That is to say, the type of all values it may return is the type of no inhabitants, ie. Never or anything isomorphic to it. Similarly, they are the logical type for expressions that never return to their caller such as break, continue and return.

  • They can be converted to any other type. To specify a function A -> B we need to specify a return value in B for every possible argument in A. For example, an expression that converts bool -> T needs to specify a return value for both possible arguments true and false:

    let foo: &'static str = match x {
      true  => "some_value",
      false => "some_other_value",
    };

    Likewise, an expression to convert () -> T needs to specify one value, the value corresponding to ():

    let foo: &'static str = match x {
      ()  => "some_value",
    };

    And following this pattern, to convert Never -> T we need to specify a T for every possible Never. Of which there are none:

    let foo: &'static str = match x {
    };

    Reading this, it may be tempting to ask the question “what is the value of foo then?”. Remember that this depends on the value of x. As there are no possible values of x it’s a meaningless question and besides, the fact that x has type Never gives us a static guarantee that the match block will never be executed.

Here’s some example code that uses Never. This is legal Rust code that you can run today.

use std::process::exit;

// Our empty type
enum Never {}

// A diverging function with an ordinary return type
fn wrap_exit() -> Never {
    exit(0);
}

// we can use a `Never` value to diverge without using unsafe code or calling
// any diverging intrinsics
fn diverge_from_never(n: Never) -> ! {
    match n {
    }
}

fn main() {
    let x: Never = wrap_exit();
    // `x` is in scope, everything below here is dead code.

    let y: String = match x {
        // no match cases as `Never` has no variants
    };

    // we can still use `y` though
    println!("Our string is: {}", y);

    // we can use `x` to diverge
    diverge_from_never(x)
}

This RFC proposes that we allow ! to be used directly, as a type, rather than using Never (or equivalent) in its place. Under this RFC, the above code could more simply be written:

use std::process::exit;

fn main() {
    let x: ! = exit(0);
    // `x` is in scope, everything below here is dead code.

    let y: String = match x {
        // no match cases as `!` has no values
    };

    // we can still use `y` though
    println!("Our string is: {}", y);

    // we can use `x` to diverge
    x
}

So why do this? AFAICS there are four main reasons:

  • It removes one superfluous concept from the language and allows diverging functions to be used in generic code.

    Currently, Rust’s functions can be divided into two kinds: those that return a regular type and those that use the -> ! syntax to mark themselves as diverging. This division is unnecessary and means that functions of the latter kind don’t play well with generic code.

    For example: you want to use a diverging function where something expects a Fn() -> T

    fn foo() -> !;
    fn call_a_fn<T, F: Fn() -> T>(f: F) -> T;
    
    call_a_fn(foo) // ERROR!

    Or maybe you want to use a diverging function to implement a trait method that returns an associated type:

    trait Zog {
        type Output;
        fn zog() -> Self::Output;
    }
    
    impl Zog for T {
        type Output = !;                   // ERROR!
        fn zog() -> ! { panic!("aaah!") }  // ERROR!
    }

    The workaround in these cases is to define a type like Never and use it in place of !. You can then define functions wrap_foo and unwrap_zog similar to the functions wrap_exit and diverge_from_never defined earlier. It would be nice if this workaround wasn’t necessary.

  • It creates a standard empty type for use throughout rust code.

    Empty types are useful for more than just marking functions as diverging. When used in an enum variant they prevent the variant from ever being instantiated. One major use case for this is if a method needs to return a Result<T, E> to satisfy a trait but we know that the method will always succeed.

    For example, here’s a saner implementation of FromStr for String than currently exists in libstd.

    impl FromStr for String {
        type Err = !;
        
        fn from_str(s: &str) -> Result<String, !> {
            Ok(String::from(s))
        }
    }

    This result can then be safely unwrapped to a String without using code-smelly things like unreachable!() which often mask bugs in code.

    let r: Result<String, !> = FromStr::from_str("hello");
    let s = match r {
        Ok(s)   => s,
        Err(e)  => match e {},
    };

    Empty types can also be used when someone needs a dummy type to implement a trait. Because ! can be converted to any other type it has a trivial implementation of any trait whose only associated items are non-static methods. The impl simply matches on self for every method.

    Example:

    trait ToSocketAddr {
        fn to_socket_addr(&self) -> IoResult<SocketAddr>;
        fn to_socket_addr_all(&self) -> IoResult<Vec<SocketAddr>>;
    }
    
    impl ToSocketAddr for ! {
        fn to_socket_addr(&self) -> IoResult<SocketAddr> {
            match self {}
        }
    
        fn to_socket_addr_all(&self) -> IoResult<Vec<SocketAddr>> {
            match self {}
        }
    }

    All possible implementations of this trait for ! are equivalent. This is because any two functions that take a ! argument and return the same type are equivalent - they return the same result for the same arguments and have the same effects (because they are uncallable).

    Suppose someone wants to call fn foo<T: SomeTrait>(arg: Option<T>) with None. They need to choose a type for T so they can pass None::<T> as the argument. However there may be no sensible default type to use for T or, worse, they may not have any types at their disposal that implement SomeTrait. As the user in this case is only using None, a sensible choice for T would be a type such that Option<T> can only be None, ie. it would be nice to use !. If ! has a trivial implementation of SomeTrait then the choice of T is truly irrelevant as this means foo doesn’t use any associated types/lifetimes/constants or static methods of T and is therefore unable to distinguish None::<A> from None::<B>. With this RFC, the user could impl SomeTrait for ! (if SomeTrait’s author hasn’t done so already) and call foo(None::<!>).

    Currently, Never can be used for all the above purposes. It’s useful enough that @reem has written a package for it here where it is named Void. I’ve also invented it independently for my own projects and probably other people have as well. However ! can be extended logically to cover all the above use cases. Doing so would standardise the concept and prevent different people reimplementing it under different names.

  • Better dead code detection

    Consider the following code:

    let t = std::thread::spawn(|| panic!("nope"));
    t.join().unwrap();
    println!("hello");

    Under this RFC: the closure body gets typed ! instead of (), the unwrap() gets typed !, and the println! will raise a dead code warning. There’s no way current Rust can detect cases like that.

  • Because it’s the correct thing to do.

    The empty type is such a fundamental concept that - given that it already exists in the form of empty enums - it warrants having a canonical form of it built-into the language. For example, return and break expressions should logically be typed ! but currently seem to be typed (). (There is some code in the compiler that assigns type () to diverging expressions because it doesn’t have a sensible type to assign to them). This means we can write stuff like this:

    match break {
      ()  => ...  // huh? Where did that `()` come from?
    }

    But not this:

    match break {} // whaddaya mean non-exhaustive patterns?

    This is just weird and should be fixed.

I suspect the reason that ! isn’t already treated as a canonical empty type is just most people’s unfamiliarity with empty types. To draw a parallel in history: in C, void is in essence a type like any other. However it can’t be used in all the normal positions where a type can be used. This breaks generic code (eg. T foo(); T val = foo() where T == void) and forces one to use workarounds such as defining struct Void {} and wrapping void-returning functions.

In the early days of programming having a type that contained no data probably seemed pointless. After all, there’s no point in having a void typed function argument or a vector of voids. So void was treated as merely a special syntax for denoting a function as returning no value resulting in a language that was more broken and complicated than it needed to be.

Fifty years later, Rust, building on decades of experience, decides to fix C’s shortsightedness and bring void into the type system in the form of the empty tuple (). Rust also introduces coproduct types (in the form of enums), allowing programmers to work with uninhabited types (such as Never). However, Rust also introduces a special syntax for denoting a function as never returning: fn() -> !. Here, ! is in essence a type like any other. However it can’t be used in all the normal positions where a type can be used. This breaks generic code (eg. fn foo() -> T; let val: T = foo() where T == !) and forces one to use workarounds such as defining enum Never {} and wrapping !-returning functions.

To be clear, ! has a meaning in any situation that any other type does. A ! function argument makes a function uncallable, a Vec<!> is a vector that can never contain an element, a ! enum variant makes the variant guaranteed never to occur and so forth. It might seem pointless to use a ! function argument or a Vec<!> (just as it would be pointless to use a () function argument or a Vec<()>), but that’s no reason to disallow it. And generic code sometimes requires it.

Rust already has empty types in the form of empty enums. Any code that could be written with this RFC’s ! can already be written by swapping out ! with Never (sans implicit casts, see below). So if this RFC could create any issues for the language (such as making it unsound or complicating the compiler) then these issues would already exist for Never.

It’s also worth noting that the ! proposed here is not the bottom type that used to exist in Rust in the very early days. Making ! a subtype of all types would greatly complicate things as it would require, for example, Vec<!> be a subtype of Vec<T>. This ! is simply an empty type (albeit one that can be cast to any other type).

Detailed design

Add a type ! to Rust. ! behaves like an empty enum except that it can be implicitly cast to any other type. ie. the following code is acceptable:

let r: Result<i32, !> = Ok(23);
let i = match r {
    Ok(i)   => i,
    Err(e)  => e, // e is cast to i32
};

Implicit casting is necessary for backwards-compatibility so that code like the following will continue to compile:

let i: i32 = match some_bool {
    true  => 23,
    false => panic!("aaah!"), // an expression of type `!`, gets cast to `i32`
};

match break {
    ()  => 23,  // matching with a `()` forces the match argument to be cast to type `()`
}

These casts can be implemented by having the compiler assign a fresh, diverging type variable to any expression of type !.

In the compiler, remove the distinction between diverging and converging functions. Use the type system to do things like reachability analysis.

Allow expressions of type ! to be explicitly cast to any other type (eg. let x: u32 = break as u32;)

Add an implementation for ! of any trait that it can trivially implement. Add methods to Result<T, !> and Result<!, E> for safely extracting the inner value. Name these methods along the lines of unwrap_nopanic, safe_unwrap or something.
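
As a sketch of what one such method might look like (the name safe_unwrap is one of the candidates above), relying on the implicit cast this RFC proposes:

fn safe_unwrap<T>(r: Result<T, !>) -> T {
    match r {
        Ok(t)  => t,
        Err(e) => e, // `e: !` is implicitly cast to `T`
    }
}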

Drawbacks

Someone would have to implement this.

Alternatives

  • Don’t do this.
  • Move @reem’s Void type into libcore. This would create a standard empty type and make it available for use in the standard libraries. If we were to do this, it might be an idea to rename Void to something else (Never, Empty and Mu have all been suggested). Although Void has some precedent in languages like Haskell and Idris, the name is likely to trip up people coming from a C/Java et al. background, as Void is not void but it can be easy to confuse the two.

Unresolved questions

! has a unique impl of any trait whose only items are non-static methods. It would be nice if there were a way to automate the creation of these impls. Should ! automatically satisfy any such trait? This RFC is not blocked on resolving this question if we are willing to accept backward-incompatibilities in questionably-valid code which tries to call trait methods on diverging expressions and relies on the trait being implemented for (). As such, the issue has been given its own RFC.

Summary

Allow renaming imports when importing a group of symbols from a module.

use std::io::{
    Error as IoError,
    Result as IoResult,
    Read,
    Write,
};

Motivation

The current design requires the above example to be written like this:

use std::io::Error as IoError;
use std::io::Result as IoResult;
use std::io::{Read, Write};

It’s unfortunate to repeat use std::io:: across the three lines. The proposed syntax feels natural; it’s what you reach for in this situation, even without knowing for sure whether it works.

Detailed design

The current grammar for use statements is something like:

  use_decl : "pub" ? "use" [ path "as" ident
                            | path_glob ] ;

  path_glob : ident [ "::" [ path_glob
                            | '*' ] ] ?
            | '{' path_item [ ',' path_item ] * '}' ;

  path_item : ident | "self" ;

This RFC proposes changing the grammar to something like:

  use_decl : "pub" ? "use" [ path [ "as" ident ] ?
                            | path_glob ] ;

  path_glob : ident [ "::" [ path_glob
                            | '*' ] ] ?
            | '{' path_item [ ',' path_item ] * '}' ;

  path_item : ident [ "as" ident ] ?
            | "self" [ "as" ident ] ? ;

The "as" ident part is optional in each location, and if omitted, it is expanded to alias to the same name, e.g. use foo::{bar} expands to use foo::{bar as bar}.

This includes being able to rename self, such as use std::io::{self as stdio, Result as IoResult};.
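
For example, under this RFC the grouped form and its expansion are equivalent:

use std::io::{self as stdio, Result as IoResult};
// ...which expands to:
//     use std::io as stdio;
//     use std::io::Result as IoResult;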

Drawbacks

Alternatives

Unresolved Questions

This RFC was previously approved, but later withdrawn

For details see the summary comment.

Summary

Rather than trying to find a clever syntax for placement-new that leverages the in keyword, instead use the syntax PLACE_EXPR <- VALUE_EXPR.

This takes advantage of the fact that <- was reserved as a token via historical accident (that for once worked out in our favor).

Motivation

One sentence: the syntax a <- b is short, can be parsed without ambiguity, and already carries strong connotations of assignment.

Further text (essentially historical background):

There is much debate about what syntax to use for placement-new. We started with box (PLACE_EXPR) VALUE_EXPR, then migrated towards leveraging the in keyword instead of box, yielding in (PLACE_EXPR) VALUE_EXPR.

A lot of people disliked the in (PLACE_EXPR) VALUE_EXPR syntax (see discussion from RFC 809).

In response to that discussion (and also due to personal preference) I suggested the alternative syntax in PLACE_EXPR { BLOCK_EXPR }, which is what landed when RFC 809 was merged.

However, it is worth noting that this alternative syntax actually failed to address a number of objections (some of which also applied to the original in (PLACE_EXPR) VALUE_EXPR syntax):

  • kennytm

    While in (place) value is syntactically unambiguous, it looks completely unnatural as a statement alone, mainly because there are no verbs in the correct place, and also using in alone is usually associated with iteration (for x in y) and member testing (elem in set).

  • petrochenkov

    As C++11 experience has shown, when it’s available, it will become the default method of inserting elements in containers, since it’s never performing worse than “normal insertion” and is often better. So it should really have as short and convenient syntax as possible.

  • p1start

    I’m not a fan of in { }, simply because the requirement of a block suggests that it’s some kind of control flow structure, or that all the statements inside will be somehow run ‘in’ the given place (or perhaps, as @m13253 seems to have interpreted it, for all box expressions to go into the given place). It would be our first syntactical construct which is basically just an operator that has to have a block operand.

I believe the PLACE_EXPR <- VALUE_EXPR syntax addresses all of the above concerns.

Thus cases like allocating into an arena (which needs to take as input the arena itself and a value-expression, and returns a reference or handle for the allocated entry in the arena – i.e. cannot return unit) would look like:

let ref_1 = arena <- value_expression;
let ref_2 = arena <- value_expression;

compare the above against the way this would look under RFC 809:

let ref_1 = in arena { value_expression };
let ref_2 = in arena { value_expression };

Detailed design

Extend the parser to parse EXPR <- EXPR. The left arrow operator is right-associative and has precedence higher than assignment and binop-assignment, but lower than other binary operators.

EXPR <- EXPR is parsed into an AST form that is desugared in much the same way that in EXPR { BLOCK } or box (EXPR) EXPR are desugared (see PR 27215).

Thus the static and dynamic semantics of PLACE_EXPR <- VALUE_EXPR are equivalent to box (PLACE_EXPR) VALUE_EXPR. Namely, it is still an expression form that operates by:

  1. Evaluate the PLACE_EXPR to a place
  2. Evaluate VALUE_EXPR directly into the constructed place
  3. Return the finalized place value.

(See protocol as documented in RFC 809 for more details here.)
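
As a rough sketch (using the trait and method names from the RFC 809 protocol), PLACE_EXPR <- VALUE_EXPR would desugar approximately as follows:

// let p = PLACE_EXPR;
// let mut place = Placer::make_place(p);
// let raw_place = Place::pointer(&mut place);
// unsafe {
//     std::ptr::write(raw_place, VALUE_EXPR);
//     InPlace::finalize(place)
// }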

This parsing form can be separately feature-gated (this RFC was written assuming that would be the procedure). However, since placement-in landed very recently (PR 27215) and is still feature-gated, we can also just fold this change in with the pre-existing placement_in_syntax feature gate (though that may be non-intuitive since the keyword in is no longer part of the syntactic form).

This feature has already been prototyped, see place-left-syntax branch.

Then, (after sufficient snapshot and/or time passes) remove the following syntaxes:

  • box (PLACE_EXPR) VALUE_EXPR
  • in PLACE_EXPR { VALUE_BLOCK }

That is, PLACE_EXPR <- VALUE_EXPR will be the “one true way” to express placement-new.

(Note that support for box VALUE_EXPR will remain, and in fact, the expression box () will become unambiguous and thus we could make it legal. Because, you know, those boxes of unit have a syntax that is really important to optimize.)

Finally, it may be good, as part of this process, to actually amend the text of RFC 809 itself to use the a <- b syntax. At the least, many people seem to use the RFCs as a reference source even when they are later outdated. (An easier option though may be to just add a forward reference to this RFC from RFC 809, if this RFC is accepted.)

Drawbacks

The only drawback I am aware of is this comment from nikomatsakis:

the intent is less clear than with a devoted keyword.

Note however that this was stated with regards to a hypothetical overloading of the = operator (at least that is my understanding).

I think the use of the <- operator can be considered sufficiently “devoted” (i.e. separate) syntax to placate the above concern.

Alternatives

See different surface syntax from the alternatives from RFC 809.

Also, if we want to try to make it clear that this is not just an assignment, we could combine in and <-, yielding e.g.:

let ref_1 = in arena <- value_expression;
let ref_2 = in arena <- value_expression;

Precedence

Finally, precedence of this operator may be defined to be anything from being less than assignment/binop-assignment (set of right associative operators with lowest precedence) to highest in the language. The most prominent choices are:

  1. Less than assignment:

    Assuming () never becomes a Placer, this resolves a pretty common complaint that a statement such as x = y <- z is not clear or readable, by forcing the programmer to write x = (y <- z) for the code to typecheck. This, however, introduces an inconsistency in parsing between let x = ... and x = ...: let x = y <- z parses as let x = (y <- z), but x = y <- z parses as (x = y) <- z.

  2. Same as assignment and binop-assignment:

    x = y <- z = a <- b = c = d <- e <- f parses as x = (y <- (z = (a <- (b = (c = (d <- (e <- f))))))). This is so far the easiest option to implement in the compiler.

  3. More than assignment and binop-assignment, but less than any other operator:

    This is what this RFC currently proposes. This allows for various expressions involving equality symbols and <- to be parsed reasonably and consistently. For example x = y <- z += a <- b <- c would get parsed as x = ((y <- z) += (a <- (b <- c))).

  4. More than any operator:

    This is not a terribly interesting one, but still an option. Works well if we want to force people enclose both sides of the operator into parentheses most of the time. This option would get x <- y <- z * a parsed as (x <- (y <- z)) * a.

Unresolved questions

What should the precedence of the <- operator be? In particular, it may make sense for it to have the same precedence of =, as argued in these comments. The ultimate answer here will probably depend on whether the result of a <- b is commonly composed and how, so it was decided to hold off on a final decision until there was more usage in the wild.

Change log

2016.04.22. Amended by rust-lang/rfcs#1319 to adjust the precedence.

Summary

If the constant evaluator encounters erroneous code during the evaluation of an expression that is not part of a true constant evaluation context, a warning must be emitted and the expression needs to be translated normally.

Definition of constant evaluation context

There are exactly five places where an expression needs to be constant:

  • the initializer of a constant const foo: ty = EXPR or static foo: ty = EXPR
  • the size of an array [T; EXPR]
  • the length of a repeat expression [VAL; LEN_EXPR]
  • C-Like enum variant discriminant values
  • patterns

In the future the body of const fn might also be interpreted as a constant evaluation context.

Any other expression might still be constant evaluated, but it could just as well be compiled normally and executed at runtime.
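
For concreteness, here is one illustrative example of each context listed above:

const N: usize = 1 + 1;               // constant initializer
static S: usize = 2 * 2;              // static initializer
type Buf = [u8; 4 * 4];               // size of an array
fn threes() -> [u8; 3] { [3; 1 + 2] } // length of a repeat expression
enum Tag { Bit = 1 << 4 }             // C-like enum discriminant
fn is_seven(x: u8) -> bool {
    match x { 7 => true, _ => false } // patterns
}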

Motivation

Expressions are const-evaluated even when they are not in a const environment.

For example

fn blub<T>(t: T) -> T { t }
let x = 5 << blub(42);

will not cause a compiler error currently, while 5 << 42 will. If the constant evaluator gets smart enough, it will be able to const evaluate the blub function. This would be a breaking change, since the code would not compile anymore. (this occurred in https://github.com/rust-lang/rust/pull/26848).

Detailed design

The PRs https://github.com/rust-lang/rust/pull/26848 and https://github.com/rust-lang/rust/pull/25570 will be setting a precedent for warning about such situations (WIP, not pushed yet).

When the constant evaluator fails while evaluating a normal expression, a warning will be emitted and normal translation needs to be resumed.
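
A sketch of the intended behavior, reusing the example from the motivation (diagnostics paraphrased):

const K: i32 = 5 << 42;    // true constant context: hard error

fn blub<T>(t: T) -> T { t }

fn main() {
    // Not a constant context: even if the const evaluator can see
    // through `blub` and notice the overflowing shift, it must only
    // warn and translate the expression normally.
    let x = 5 << blub(42);
}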

Drawbacks

None. If we don’t do anything, however, the const evaluator cannot get much smarter.

Alternatives

allow breaking changes

Let the compiler error on things that will unconditionally panic at runtime.

insert an unconditional panic instead of generating regular code

GNAT (an Ada compiler) does this already:

procedure Hello is
  Var: Integer range 15 .. 20 := 21;
begin
  null;
end Hello;

The anonymous subtype Integer range 15 .. 20 only accepts values in [15, 20]. This knowledge is used by GNAT to emit the following warning during compilation:

warning: value not in range of subtype of "Standard.Integer" defined at line 2
warning: "Constraint_Error" will be raised at run time

I don’t have a GNAT with -emit-llvm handy, but here’s the asm with -O0:

.cfi_startproc
pushq   %rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
movq    %rsp, %rbp
.cfi_def_cfa_register 6
movl    $2, %esi
movl    $.LC0, %edi
movl    $0, %eax
call    __gnat_rcheck_CE_Range_Check

Unresolved questions

Const-eval the body of const fn that are never used in a constant environment

Currently a const fn that is called in non-const code is treated just like a normal function.

In case there is a statically known erroneous situation in the body of the function, the compiler should raise an error, even if the function is never called.

The same applies to unused associated constants.

Summary

Move std::thread::catch_panic to std::panic::recover after replacing the Send + 'static bounds on the closure parameter with a new PanicSafe marker trait.

Motivation

In today’s stable Rust it’s not possible to catch a panic on the thread that caused it. There are a number of situations, however, where this is either required for correctness or necessary for building a useful abstraction:

  • It is currently defined as undefined behavior to have a Rust program panic across an FFI boundary. For example if C calls into Rust and Rust panics, then this is undefined behavior. Being able to catch a panic will allow writing C APIs in Rust that do not risk aborting the process they are embedded into.

  • Abstractions like thread pools want to catch the panics of tasks being run instead of having the thread torn down (and having to spawn a new thread).

Stabilizing the catch_panic function would enable these two use cases, but let’s also take a look at the current signature of the function:

fn catch_panic<F, R>(f: F) -> thread::Result<R>
    where F: FnOnce() -> R + Send + 'static

This function will run the closure f and if it panics return Err(Box<Any>). If the closure doesn’t panic it will return Ok(val) where val is the returned value of the closure. The closure, however, is restricted to only close over Send and 'static data. These bounds can be overly restrictive, and due to thread-local storage they can be subverted, making it unclear what purpose they serve. This RFC proposes to remove the bounds as well.
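
To see how thread-local storage subverts those bounds, consider this sketch: the closure below is both Send and 'static, yet it can still observe state that panicking code on the same thread left half-modified.

use std::cell::RefCell;

thread_local! {
    static SHARED: RefCell<Vec<u32>> = RefCell::new(Vec::new());
}

fn after_the_panic() -> usize {
    // Captures nothing, so it is `Send + 'static`, but it still sees
    // whatever state the panicking code left behind in `SHARED`.
    let f = || SHARED.with(|s| s.borrow().len());
    f()
}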

Historically Rust has purposefully avoided the foray into the situation of catching panics, largely because of a problem typically referred to as “exception safety”. To further understand the motivation of stabilization and relaxing the bounds, let’s review what exception safety is and what it means for Rust.

Background: What is exception safety?

Languages with exceptions have the property that a function can “return” early if an exception is thrown. While exceptions aren’t too hard to reason about when thrown explicitly, they can be problematic when they are thrown by code being called – especially when that code isn’t known in advance. Code is exception safe if it works correctly even when the functions it calls into throw exceptions.

The idea of throwing an exception causing bugs may sound a bit alien, so it’s helpful to drill down into exactly why this is the case. Bugs related to exception safety are comprised of two critical components:

  1. An invariant of a data structure is broken.
  2. This broken invariant is then later observed.

Exceptional control flow often exacerbates this first component of breaking invariants. For example many data structures have a number of invariants that are dynamically upheld for correctness, and the type’s routines can temporarily break these invariants to be fixed up before the function returns. If, however, an exception is thrown in this interim period the broken invariant could be accidentally exposed.

The second component, observing a broken invariant, can sometimes be difficult in the face of exceptions, but languages often have constructs to enable these sorts of witnesses. Two primary methods of doing so are something akin to finally blocks (code run on a normal or exceptional return) or just catching the exception. In both cases, code that runs later and has access to the original data structure will see the broken invariants.

Now that we’ve got a better understanding of how an exception might cause a bug (e.g. how code can be “exception unsafe”), let’s take a look at how we can make code exception safe. To be exception safe, code needs to be prepared for an exception to be thrown whenever an invariant it relies on is broken, for example:

  • Code can be audited to ensure it only calls functions which are statically known to not throw an exception.
  • Local “cleanup” handlers can be placed on the stack to restore invariants whenever a function returns, either normally or exceptionally. This can be done through finally blocks in some languages or via destructors in others.
  • Exceptions can be caught locally to perform cleanup before possibly re-raising the exception.

With all that in mind, we’ve now identified problems that can arise via exceptions (an invariant is broken and then observed) as well as methods to prevent this from happening. In languages like C++ this means that we can be memory safe in the face of exceptions, and in languages like Java we can ensure that our logical invariants are upheld. Given this background, let’s take a look at how any of this applies to Rust.

Background: What is exception safety in Rust?

Note: This section describes the current state of Rust today without this RFC implemented

Up to now we’ve been talking about exceptions and exception safety, but from a Rust perspective we can just replace this with panics and panic safety. Panics in Rust are currently implemented essentially as a C++ exception under the hood. As a result, exception safety is something that needs to be handled in Rust code today.

One of the primary examples where panics need to be handled in Rust is unsafe code. Let’s take a look at an example where this matters:

use std::ptr;

pub fn push_ten_more<T: Clone>(v: &mut Vec<T>, t: T) {
    unsafe {
        v.reserve(10);
        let len = v.len();
        v.set_len(len + 10);
        for i in 0..10 {
            ptr::write(v.as_mut_ptr().offset((len + i) as isize), t.clone());
        }
    }
}

While this code may look correct, it’s actually not memory safe. Vec has an internal invariant that its first len elements are safe to drop at any time. Our function above has temporarily broken this invariant with the call to set_len (the next 10 elements are uninitialized). If the type T’s clone method panics, then this broken invariant will escape the function. The broken Vec is then observed during its destructor, leading to eventual memory unsafety.

It’s important to keep in mind that panic safety in Rust is not solely limited to memory safety. Logical invariants are often just as critical to keep correct during execution and no unsafe code in Rust is needed to break a logical invariant. In practice, however, these sorts of bugs are rarely observed due to Rust’s design:

  • Rust doesn’t expose uninitialized memory
  • Panics cannot be caught in a thread
  • Across threads data is poisoned by default on panics
  • Idiomatic Rust must opt in to extra sharing across boundaries (e.g. RefCell)
  • Destructors are relatively rare and uninteresting in safe code

These mitigations all address the second aspect of exception unsafety: observation of broken invariants. With the tactics in place, it ends up being the case that safe Rust code can largely ignore exception safety concerns. That being said, it does not mean that safe Rust code can always ignore exception safety issues. There are a number of methods to subvert the mitigation strategies listed above:

  1. When poisoning data across threads, antidotes are available to access poisoned data. Namely the PoisonError type allows safe access to the poisoned information.
  2. Single-threaded types with interior mutability, such as RefCell, allow for sharing data across stack frames such that a broken invariant could eventually be observed.
  3. Whenever a thread panics, the destructors for its stack variables will be run as the thread unwinds. Destructors may have access to data which was also accessible lower on the stack (such as through RefCell or Rc) which has a broken invariant, and the destructor may then witness this.

But all of these “subversions” fall outside the realm of normal, idiomatic, safe Rust code, and so they all serve as a “heads up” that panic safety might be an issue. Thus, in practice, Rust programmers worry about exception safety far less than in languages with full-blown exceptions.

Despite these methods to subvert the mitigations placed by default in Rust, a key part of exception safety in Rust is that safe code can never lead to memory unsafety, regardless of whether it panics or not. Memory unsafety triggered as part of a panic can always be traced back to an unsafe block.

With all that background out of the way now, let’s take a look at the guts of this RFC.

Detailed design

At its heart, the change this RFC is proposing is to move std::thread::catch_panic to a new std::panic module and rename the function to recover. Additionally, the Send + 'static bounds on the closure parameter will be replaced with a new trait PanicSafe, modifying the signature to be:

fn recover<F: FnOnce() -> R + PanicSafe, R>(f: F) -> thread::Result<R>

Before analyzing this new signature, let’s take a look at this new PanicSafe trait.

A PanicSafe marker trait

As discussed in the motivation section above, the current bounds of Send + 'static on the closure parameter are too restrictive for common use cases, but they can serve as a “speed bump” (like poisoning on mutexes) to add to the repertoire of mitigation strategies that Rust has by default for dealing with panics.

The purpose of this marker trait will be to identify patterns which do not need to worry about exception safety and allow them by default. In situations where exception safety may be concerned then an explicit annotation will be needed to allow the usage. In other words, this marker trait will act similarly to a “targeted unsafe block”.

For the implementation details, the following items will be added to the std::panic module.

pub trait PanicSafe {}
impl PanicSafe for .. {}

impl<'a, T> !PanicSafe for &'a mut T {}
impl<'a, T: NoUnsafeCell> PanicSafe for &'a T {}
impl<T> PanicSafe for Mutex<T> {}

pub trait NoUnsafeCell {}
impl NoUnsafeCell for .. {}
impl<T> !NoUnsafeCell for UnsafeCell<T> {}

pub struct AssertPanicSafe<T>(pub T);
impl<T> PanicSafe for AssertPanicSafe<T> {}

impl<T> Deref for AssertPanicSafe<T> {
    type Target = T;
    fn deref(&self) -> &T { &self.0 }
}
impl<T> DerefMut for AssertPanicSafe<T> {
    fn deref_mut(&mut self) -> &mut T { &mut self.0 }
}

Let’s take a look at each of these items in detail:

  • impl PanicSafe for .. {} - this makes this trait a marker trait, implying that the trait is implemented for all types by default, so long as their constituent parts implement the trait.
  • impl<T> !PanicSafe for &mut T {} - this indicates that exception safety needs to be handled when dealing with mutable references. Thinking about the recover function, this means that the data behind the reference could be modified inside the closure, but once recover exits it may have been left in an invalid state.
  • impl<T: NoUnsafeCell> PanicSafe for &T {} - similarly to the above implementation for &mut T, the purpose here is to highlight points where data can be mutated across a recover boundary. If &T does not contain an UnsafeCell, then no mutation should be possible and it is safe to allow.
  • impl<T> PanicSafe for Mutex<T> {} - as mutexes are poisoned by default, they are considered exception safe.
  • pub struct AssertPanicSafe<T>(pub T); - this is the “opt out” structure of exception safety. Wrapping something in this type indicates an assertion that it is exception safe and shouldn’t be warned about when crossing the recover boundary. Otherwise this type simply acts like a T.

Example usage

The only consumer of the PanicSafe bound is the recover function on the closure type parameter, and this ends up meaning that the closure’s environment needs to be exception safe. In terms of error messages, this causes the compiler to emit an error per closed-over variable to indicate whether or not it is exception safe to share across the recover boundary.

It is also a critical design aspect that usage of PanicSafe or AssertPanicSafe does not require unsafe code. As discussed above, panic safety does not directly lead to memory safety problems in otherwise safe code.

In the normal usage of recover, neither PanicSafe nor AssertPanicSafe should be necessary to mention. For example when defining an FFI function:

#[no_mangle]
pub extern fn called_from_c(ptr: *const c_char, num: i32) -> i32 {
    let result = panic::recover(|| {
        let s = unsafe { CStr::from_ptr(ptr) };
        println!("{:?}: {}", s, num); // `CStr` implements `Debug`, not `Display`
    });
    match result {
        Ok(..) => 0,
        Err(..) => 1,
    }
}

Additionally, if FFI functions instead use normal Rust types, AssertPanicSafe still need not be mentioned at all:

#[no_mangle]
pub extern fn called_from_c(ptr: &i32) -> i32 {
    let result = panic::recover(|| {
        println!("{}", *ptr);
    });
    match result {
        Ok(..) => 0,
        Err(..) => 1,
    }
}

If, however, types are coming in which are flagged as not exception safe then the AssertPanicSafe wrapper can be used to leverage recover:

fn foo(data: &RefCell<i32>) {
    panic::recover(|| {
        println!("{}", data.borrow()); //~ ERROR RefCell is not panic safe
    });
}

This can be fixed with a simple assertion that the usage here is indeed exception safe:

fn foo(data: &RefCell<i32>) {
    let data = AssertPanicSafe(data);
    panic::recover(|| {
        println!("{}", data.borrow()); // ok
    });
}

Future extensions

In the future, this RFC proposes adding the following implementation of PanicSafe:

impl<T: Send + 'static> PanicSafe for T {}

This implementation block encodes the “exception safe” boundary of thread::spawn but is unfortunately not allowed today due to coherence rules. If available, however, it would possibly reduce the number of false positives which require using AssertPanicSafe.

Global complexity

Adding a new marker trait is a pretty hefty move for the standard library. The current marker traits, Send and Sync, are well known and are ubiquitous throughout the ecosystem and standard library. Due to the way that these properties are derived, adding a new marker trait can lead to a multiplicative increase in global complexity (as all types must consider the marker trait).

With PanicSafe, however, it is expected that this is not the case. The recover function is not intended to be used commonly outside of FFI or thread pool-like abstractions. Within FFI the PanicSafe trait is typically not mentioned due to most types being relatively simple. Thread pools, on the other hand, will need to mention AssertPanicSafe, but will likely propagate panics to avoid exposing PanicSafe as a bound.

Overall, the expected idiomatic usage of recover should mean that PanicSafe is rarely mentioned, if at all. It is intended that AssertPanicSafe is ideally only necessary where it actually needs to be considered (which idiomatically isn’t too often) and even then it’s lightweight to use.

Will Rust have exceptions?

In a technical sense this RFC is not “adding exceptions to Rust” as they already exist in the form of panics. What this RFC is adding, however, is a construct via which to catch these exceptions within a thread, bringing the standard library closer to the exception support in other languages.

Catching a panic makes it easier to observe broken invariants of data structures shared across the catch_panic boundary, which can possibly increase the likelihood of exception safety issues arising.

The risk of this step is that catching panics becomes an idiomatic way to deal with error-handling, thereby making exception safety much more of a headache than it is today (as it’s more likely that a broken invariant is later witnessed). The catch_panic function is intended to only be used where it’s absolutely necessary, e.g. for FFI boundaries, but how can it be ensured that catch_panic isn’t overused?

There are two key reasons catch_panic likely won’t become idiomatic:

  1. There are already strong and established conventions around error handling, and in particular around the use of panic and Result with stabilized usage of them in the standard library. There is little chance these conventions would change overnight.

  2. There has long been a desire to treat every use of panic! as an abort which is motivated by portability, compile time, binary size, and a number of other factors. Assuming this step is taken, it would be extremely unwise for a library to signal expected errors via panics and rely on consumers using catch_panic to handle them.

For reference, here’s a summary of the conventions around Result and panic, which still hold good after this RFC:

Result vs Panic

There are two primary strategies for signaling that a function can fail in Rust today:

  • Results represent errors/edge-cases that the author of the library knew about, and expects the consumer of the library to handle.

  • panics represent errors that the author of the library did not expect to occur, such as a contract violation, and therefore does not expect the consumer to handle in any particular way.

Another way to put this division is that:

  • Results represent errors that carry additional contextual information. This information allows them to be handled by the caller of the function producing the error, modified with additional contextual information, and eventually converted into an error message fit for a top-level program.

  • panics represent errors that carry no contextual information (except, perhaps, debug information). Because they represent an unexpected error, they cannot be easily handled by the caller of the function or presented to the top-level program (except to say “something unexpected has gone wrong”).

Some pros of Result are that it signals specific edge cases that you as a consumer should think about handling and it allows the caller to decide precisely how to handle the error. A con with Result is that defining errors and writing down Result + try! is not always the most ergonomic.

The pros and cons of panic are essentially the opposite of Result, being easy to use (nothing to write down other than the panic) but difficult to determine when a panic can happen or handle it in a custom fashion, even with catch_panic.

These divisions justify the use of panics for things like out-of-bounds indexing: such an error represents a programming mistake that (1) the author of the library was not aware of, by definition, and (2) cannot be meaningfully handled by the caller.

In terms of heuristics for use, panics should rarely if ever be used to report routine errors, for example through communication with the system or through IO. If a Rust program shells out to rustc, and rustc is not found, it might be tempting to use a panic because the error is unexpected and hard to recover from. A user of the program, however, would benefit from intermediate code adding contextual information about the in-progress operation, and the program could report the error in terms they can understand. While the error is rare, when it happens it is not a programmer error. In short, panics are roughly analogous to an opaque “an unexpected error has occurred” message.

Stabilizing catch_panic does little to change the tradeoffs around Result and panic that led to these conventions.

Drawbacks

A drawback of this RFC is that it can water down Rust’s error handling story. With the addition of a “catch” construct for exceptions, it may be unclear to library authors whether to use panics or Result for their error types. As we discussed above, however, Rust’s design around error handling has always had to deal with these two strategies, and our conventions don’t materially change by stabilizing catch_panic.

Alternatives

One alternative, which is somewhat more of an addition, is to have the standard library entirely abandon all exception safety mitigation tactics. As explained in the motivation section, exception safety will not lead to memory unsafety unless paired with unsafe code, so it is perhaps within the realm of possibility to remove the tactics of poisoning from mutexes and simply require that consumers deal with exception safety 100% of the time.

This alternative is often motivated by saying that there are enough methods to subvert the default mitigation tactics that it’s not worth trying to plug some holes and not others. Upon closer inspection, however, the areas where safe code needs to worry about exception safety are isolated to the single-threaded situations. For example RefCell, destructors, and catch_panic all only expose data possibly broken through a panic in a single thread.

Once a thread boundary is crossed, the only current way to share data mutably is via Mutex or RwLock, both of which are poisoned by default. This sort of sharing is fundamental to threaded code, and poisoning by default allows safe code to freely use many threads without having to consider exception safety across threads (as poisoned data will tear down all connected threads).

This property of multithreaded programming in Rust is seen as strong enough that poisoning should not be removed by default, and in fact a new hypothetical thread::scoped API (a rough counterpart of catch_panic) could also propagate panics by default (like poisoning) with an ability to opt out (like PoisonError).

Unresolved questions

  • Is it worth keeping the 'static and Send bounds as a mitigation measure in practice, even if they aren’t enforceable in theory? That would require thread pools to use unsafe code, but that could be acceptable.

  • Should catch_panic be stabilized within std::thread where it lives today, or somewhere else?

Summary

Revise the Drop Check (dropck) part of Rust’s static analyses in two ways. In the context of this RFC, these revisions are respectively named cannot-assume-parametricity and unguarded-escape-hatch.

  1. cannot-assume-parametricity (CAP): Make dropck analysis stop relying on parametricity of type-parameters.

  2. unguarded-escape-hatch (UGEH): Add an attribute (with some name starting with “unsafe”) that a library designer can attach to a drop implementation that will allow a destructor to side-step the dropck’s constraints (unsafely).

Motivation

Background: Parametricity in dropck

The Drop Check rule (dropck) for Sound Generic Drop relies on a reasoning process that needs to infer that the behavior of a polymorphic function (e.g. fn foo<T>) does not depend on the concrete type instantiations of any of its unbounded type parameters (e.g. T in fn foo<T>), at least beyond the behavior of the destructor (if any) for those type parameters.

This property is a (weakened) form of a property known in academic circles as Parametricity. (See e.g. Reynolds, IFIP 1983, Wadler, FPCA 1989.)

  • Parametricity, in this context, essentially says that the compiler can reason about the body of foo (and the subroutines that foo invokes) without having to think about the particular concrete types that the type parameter T is instantiated with. foo cannot do anything with a t: T except:

    1. move t to some other owner expecting a T or,

    2. drop t, running its destructor and freeing associated resources.

  • For example, this allows the compiler to deduce that even if T is instantiated with a concrete type like &Vec<u32>, the body of foo cannot actually read any u32 data out of the vector. More details about this are available on the Sound Generic Drop RFC.

“Mistakes were made”

The parametricity-based reasoning in the Drop Check analysis (dropck) was clever, but fragile and unproven.

  • Regarding its fragility, it has been shown to have bugs; in particular, parametricity is a necessary but not sufficient condition to justify the inferences that dropck makes.

  • Regarding its unproven nature, dropck violated the heuristic in Rust’s design to not incorporate ideas unless those ideas had already been proven effective elsewhere.

These issues might alone provide motivation for ratcheting back on dropck’s rules in the short term, putting in a more conservative rule in the stable release channel while allowing experimentation with more-aggressive feature-gated rules in the development nightly release channel.

However, there is also a specific reason why we want to ratchet back on the dropck analysis as soon as possible.

Impl specialization is inherently non-parametric

The parametricity requirement in the Drop Check rule over-restricts the design space for future language changes.

In particular, the impl specialization RFC describes a language change that will allow the invocation of a polymorphic function f to end up in different sequences of code based solely on the concrete type of T, even when T has no trait bounds within its declaration in f.

Detailed design

Revise the Drop Check (dropck) part of Rust’s static analyses in two ways. In the context of this RFC, these revisions are respectively named cannot-assume-parametricity (CAP) and unguarded-escape-hatch (UGEH).

Though the revisions are given distinct names, they both fall under the feature gate dropck_parametricity. (Note however that this might be irrelevant to CAP; see CAP stabilization details).

cannot-assume-parametricity

The heart of CAP is this: make dropck analysis stop relying on parametricity of type-parameters.

Changes to the Drop-Check Rule

The Drop-Check Rule (both in its original form and as revised here) dictates when a lifetime 'a must strictly outlive some value v, where v owns data of type D; the rule gives two circumstances where 'a must strictly outlive the scope of v.

  • The first circumstance (D is directly instantiated at 'a) remains unchanged by this RFC.

  • The second circumstance (D has some type parameter with trait-provided methods, i.e. that could be invoked within Drop) is broadened by this RFC to simply say “D has some type parameter.”

That is, under the changes of this RFC, whether the type parameter has a trait-bound is irrelevant to the Drop-Check Rule. The reason is that any type parameter, regardless of whether it has a trait bound or not, may end up participating in impl specialization, and thus could expose an otherwise invisible reference &'a AlreadyDroppedData.

cannot-assume-parametricity is a breaking change, since the language will start assuming that a destructor for a data-type definition such as struct Parametri<C> may read from data held in its C parameter, even though the fn drop formerly appeared to be parametric with respect to C. This will cause rustc to reject code that it had previously accepted (below are some examples that continue to work and some that start being rejected).

CAP stabilization details

cannot-assume-parametricity will be incorporated into the beta and stable Rust channels, to ensure that destructor code in the wild on stable channels stops relying on parametricity as soon as possible. This will enable new language features such as impl specialization.

  • It is not yet clear whether it is feasible to include a warning cycle for CAP.

  • For now, this RFC is proposing to remove the parts of Drop-Check that attempted to prove that the impl<T> Drop was parametric with respect to T. This would mean that there would be no warning cycle; dropck would simply start rejecting more code. There would be no way to opt back into the old dropck rules.

  • (However, during implementation of this change, we should double-check whether a warning-cycle is in fact feasible.)

unguarded-escape-hatch

The heart of unguarded-escape-hatch (UGEH) is this: Provide a new, unsafe (and unstable) attribute-based escape hatch for use in the standard library for cases where Drop Check is too strict.

Why we need an escape hatch

The original motivation for the parametricity special-case in the original Drop-Check rule was due to an observation that collection types such as TypedArena<T> or Vec<T> were often used to contain values that wanted to refer to each other.

An example would be an element type like struct Concrete<'a>(u32, Cell<Option<&'a Concrete<'a>>>);, and then instantiations of TypedArena<Concrete> or Vec<Concrete>. This pattern has been used within rustc, for example, to store elements of a linked structure within an arena.

Without the parametricity special-case, the existence of a destructor on TypedArena<T> or Vec<T> led the Drop-Check analysis to conclude that those destructors might hypothetically read from the references held within T – forcing dropck to reject those destructors.

(Note that Concrete itself has no destructor; if it did, then dropck, both as originally stated and under the changes of this RFC, would force the 'a parameter of any instance to strictly outlive the instance value, thus ruling out cross-references in the same TypedArena or Vec.)

Of course, the whole point of this RFC is that using parametricity as the escape hatch seems like it does not suffice. But we still need some escape hatch.

The new escape hatch: an unsafe attribute

This leads us to the second component of the RFC, unguarded-escape-hatch (UGEH): Add an attribute (with a name starting with “unsafe”) that a library designer can attach to a drop implementation that will allow a destructor to side-step the dropck’s constraints (unsafely).

This RFC proposes the attribute name unsafe_destructor_blind_to_params. This name was specifically chosen to be long and ugly; see UGEH stabilization details for further discussion.

Much like the unsafe_destructor attribute that we had in the past, this attribute relies on the programmer to ensure that the destructor cannot actually be used unsoundly. It states an (unproven) assumption that the given implementation of drop (and all functions that this drop may transitively call) will never read or modify a value of any type parameter, apart from the trivial operations of either dropping the value or moving the value from one location to another.

  • (In particular, it certainly must not dereference any &-reference within such a value, though this RFC adopts a somewhat stronger requirement to encourage the attribute to be used only for the limited case of parametric collection types, where one need not do anything more than move or drop values.)

The above assumption must hold regardless of what impact impl specialization has on the resolution of all function calls.

UGEH stabilization details

The proposed attribute is only a short-term patch to work-around a bug exposed by the combination of two desirable features (namely impl specialization and dropck).

In particular, using the attribute in cases where control-flow in the destructor can reach functions that may be specialized on a type-parameter T may expose the system to use-after-free scenarios or other unsound conditions. This may be a non-trivial thing for the programmer to prove.

  • Short term strategy: The working assumption of this RFC is that the standard library developers will use the proposed attribute in cases where the destructor is parametric with respect to all type parameters, even though the compiler cannot currently prove this to be the case.

    The new attribute will be restricted to non-stable channels, like any other new feature under a feature-gate.

  • Long term strategy: This RFC does not make any formal guarantees about the long-term strategy for including an escape hatch. In particular, this RFC does not propose that we stabilize the proposed attribute

    It may be possible for future language changes to allow us to directly express the necessary parametricity properties. See further discussion in the continue supporting parametricity alternative.

    The suggested attribute name (unsafe_destructor_blind_to_params above) was deliberately selected to be long and ugly, in order to discourage it from being stabilized in the future without at least some significant discussion. (Likewise, the acronym “UGEH” was chosen for its likely pronunciation “ugh”, again a reminder that we do not want to adopt this approach for the long term.)

Examples of code changes under the RFC

This section shows some code examples, starting with code that works today and must continue to work tomorrow, then showing an example of code that will start being rejected, and ending with an example of the UGEH attribute.

Examples of code that must continue to work

Here is some code that works today and must continue to work in the future:

use std::cell::Cell;

struct Concrete<'a>(u32, Cell<Option<&'a Concrete<'a>>>);

fn main() {
    let mut data = Vec::new();
    data.push(Concrete(0, Cell::new(None)));
    data.push(Concrete(0, Cell::new(None)));

    data[0].1.set(Some(&data[1]));
    data[1].1.set(Some(&data[0]));
}

In the above, we are building up a vector, pushing Concrete elements onto it, and then later linking those concrete elements together via optional references held in a cell in each concrete element.

We can even wrap the vector in a struct that holds it. This also must continue to work (and will do so under this RFC); such structural composition is a common idiom in Rust code.

use std::cell::Cell;

struct Concrete<'a>(u32, Cell<Option<&'a Concrete<'a>>>);

struct Foo<T> { data: Vec<T> }

fn main() {
    let mut foo = Foo {  data: Vec::new() };
    foo.data.push(Concrete(0, Cell::new(None)));
    foo.data.push(Concrete(0, Cell::new(None)));

    foo.data[0].1.set(Some(&foo.data[1]));
    foo.data[1].1.set(Some(&foo.data[0]));
}

Examples of code that will start to be rejected

The main change injected by this RFC is this: due to cannot-assume-parametricity, an attempt to add a destructor to the struct Foo above will cause the code above to be rejected, because we will assume that the destructor for Foo may invoke methods on the concrete elements that dereference their links.

Thus, this code will be rejected:

use std::cell::Cell;

struct Concrete<'a>(u32, Cell<Option<&'a Concrete<'a>>>);

struct Foo<T> { data: Vec<T> }

// This is the new `impl Drop`
impl<T> Drop for Foo<T> {
    fn drop(&mut self) { }
}

fn main() {
    let mut foo = Foo {  data: Vec::new() };
    foo.data.push(Concrete(0, Cell::new(None)));
    foo.data.push(Concrete(0, Cell::new(None)));

    foo.data[0].1.set(Some(&foo.data[1]));
    foo.data[1].1.set(Some(&foo.data[0]));
}

NOTE: Based on a preliminary crater run, it seems that mixing together destructors with this sort of cyclic structure is sufficiently rare that no crates on crates.io actually regressed under the new rule: everything that compiled before the change continued to compile after it.

Example of the unguarded-escape-hatch

If the developer of Foo has access to the feature-gated escape-hatch, and is willing to assert that the destructor for Foo does nothing with the links in the data, then the developer can work around the above rejection of the code by adding the corresponding attribute.

#![feature(dropck_parametricity)]
use std::cell::Cell;

struct Concrete<'a>(u32, Cell<Option<&'a Concrete<'a>>>);

struct Foo<T> { data: Vec<T> }

impl<T> Drop for Foo<T> {
    #[unsafe_destructor_blind_to_params] // This is the UGEH attribute
    fn drop(&mut self) { }
}

fn main() {
    let mut foo = Foo {  data: Vec::new() };
    foo.data.push(Concrete(0, Cell::new(None)));
    foo.data.push(Concrete(0, Cell::new(None)));

    foo.data[0].1.set(Some(&foo.data[1]));
    foo.data[1].1.set(Some(&foo.data[0]));
}

Drawbacks

As should be clear from the tone of this RFC, the unguarded-escape-hatch is clearly a hack. It is subtle and unsafe, just as unsafe_destructor was (and for the most part, the whole point of Sound Generic Drop was to remove unsafe_destructor from the language).

  • However, the expectation is that most clients will have no need to ever use the unguarded-escape-hatch.

  • It may suffice to use the escape hatch solely within the collection types of libstd.

  • Otherwise, if clients outside of libstd determine that they do need to be able to write destructors that bypass dropck safely, then we can (and should) investigate one of the sound alternatives, rather than stabilize the unsafe hackish escape hatch.

Alternatives

CAP without UGEH

One might consider adopting cannot-assume-parametricity without unguarded-escape-hatch. However, unless some other sort of escape hatch were added, this path would break much more code.

UGEH for lifetime parameters

Since we’re already being unsafe here, one might consider having the unsafe_destructor_blind_to_params attribute apply to lifetime parameters as well as type parameters.

However, given that the unsafe_destructor_blind_to_params attribute is only intended as a short-term band-aid (see UGEH stabilization details) it seems better to just make it only as broad as it needs to be (and no broader).

“Sort-of Guarded” Escape Hatch

We could add the escape hatch but continue applying the current dropck analysis to it. This would essentially mean that code would have to apply the unsafe attribute to be considered for parametricity, but if there were obvious problems (namely, if the type parameter had a trait bound) then the attempt to opt into parametricity would be ignored and the strict ordering restrictions on the lifetimes would be imposed.

I only mention this because it occurred to me in passing; I do not really think it has much of a benefit. It would potentially lead someone to think that their code has been proven sound (since the dropck would catch some mistakes in programmer reasoning) but the pitfalls with respect to specialization would remain.

Continue Supporting Parametricity

There may be ways to revise the language so that functions can declare that they must be parametric with respect to their type parameters. Here we sketch two potential ideas for how one might do this, mostly to give a hint of why this is not a trivial change to the language.

Neither design is likely to be adopted, at least as described here, because both of them impose significant burdens on implementors of parametric destructors, as we will see.

(Also, if we go down this path, we will need to fix other bugs in the Drop Check rule, where, as previously noted, parametricity is a necessary but insufficient condition for soundness.)

Parametricity via effect-system attributes

One feature of the impl specialization RFC is that all functions that can be specialized must be declared as such, via the default keyword.

This leads us to one way that a function could declare that its body must not be allowed to call into specialized methods: an attribute like #[unspecialized]. The #[unspecialized] attribute, when applied to a function fn foo(), would mean two things:

  • foo is not allowed to call any functions that have the default keyword.

  • foo is only allowed to call functions that are also marked #[unspecialized].

All fn drop methods would be required to be #[unspecialized].

It is the second bullet that makes this an ad-hoc effect system: it provides a recursive property ensuring that during the extent of the call to foo, we will never invoke a function marked as default (and therefore, I think, will never even potentially invoke a method that has been specialized).

It is also this second bullet that represents a significant burden on the destructor implementor. In particular, it immediately rules out using any library routine unless that routine has been marked as #[unspecialized]. The attribute is unlikely to be included on any function unless its developer is writing a destructor that calls it in tandem.

Parametricity via some ?-bound

Another approach starts from another angle: As described earlier, parametricity in dropck is the requirement that fn drop cannot do anything with a t: T (where T is some relevant type parameter) except:

  1. move t to some other owner expecting a T or,

  2. drop t, running its destructor and freeing associated resources.

So, perhaps it would be more natural to express this requirement via a bound.

We would start with the assumption that functions may be non-parametric (and thus their implementations may be specialized to specific types).

But then if you want to declare a function as having a stronger constraint on its behavior (and thus expanding its potential callers to ones that require parametricity), you could add a bound T: ?Special.

The Drop-check rule would treat T: ?Special type-parameters as parametric, and other type-parameters as non-parametric.

The marker trait Special would be an OIBIT that all sized types would get.

Any expression in the context of a type-parameter binding of the form <T: ?Special> would not be allowed to call any default method where T could affect the specialization process.

(The careful reader will probably notice the potential sleight-of-hand here: is this really any different from the effect-system attributes proposed earlier? Perhaps not, though it seems likely that the finer grain parameter-specific treatment proposed here is more expressive, at least in theory.)

Like the previous proposal, this design represents a significant burden on the destructor implementor: again, the T: ?Special bound is unlikely to be included on any function unless its developer is writing a destructor that calls it in tandem.

Unresolved questions

  • What name to use for the attribute? Is unsafe_destructor_blind_to_params sufficiently long and ugly? ;)

  • What is the real long-term plan?

  • Should we consider merging the discussion of alternatives into the impl specialization RFC?

Bibliography

Reynolds

John C. Reynolds. “Types, abstraction and parametric polymorphism”. IFIP 1983 http://www.cse.chalmers.se/edu/year/2010/course/DAT140_Types/Reynolds_typesabpara.pdf

Wadler

Philip Wadler. “Theorems for free!”. FPCA 1989 http://ttic.uchicago.edu/~dreyer/course/papers/wadler.pdf

Summary

Taking a reference into a struct marked repr(packed) should become unsafe, because it can lead to undefined behaviour. repr(packed) structs need to be banned from storing Drop types for this reason.

Motivation

Issue #27060 noticed that it was possible to trigger undefined behaviour in safe code via repr(packed), by creating references &T which don’t satisfy the expected alignment requirements for T.

Concretely, the compiler assumes that any reference (or raw pointer, in fact) will be aligned to at least align_of::<T>(), i.e. the following snippet should run successfully:

let some_reference: &T = /* arbitrary code */;

let actual_address = some_reference as *const _ as usize;
let align = std::mem::align_of::<T>();

assert_eq!(actual_address % align, 0);

However, repr(packed) allows one to violate this, by creating values of arbitrary types that are stored at “random” byte addresses, by removing the padding normally inserted to maintain alignment in structs. E.g. suppose there’s a struct Foo defined like #[repr(packed, C)] struct Foo { x: u8, y: u32 }, and there’s an instance of Foo allocated at address 0x1000: the u32 will be placed at 0x1001, which isn’t 4-byte aligned (the alignment of u32).
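A minimal sketch of that layout, checkable on a current toolchain (std::ptr::addr_of! postdates this RFC; it takes an address without creating the misaligned reference this RFC worries about):

#[repr(C, packed)]
struct Foo {
    x: u8,
    y: u32,
}

fn main() {
    let foo = Foo { x: 0, y: 0 };
    // No padding is inserted: Foo occupies 5 bytes rather than 8.
    assert_eq!(std::mem::size_of::<Foo>(), 5);
    // addr_of! produces a raw pointer without an intermediate reference.
    let y_ptr = std::ptr::addr_of!(foo.y);
    // y sits 1 byte past the start of the struct: misaligned for u32.
    assert_eq!(y_ptr as usize - (&foo as *const Foo as usize), 1);
}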

Issue #27060 has a snippet which crashes at runtime on at least two x86-64 CPUs (the author’s and the one playpen runs on) and almost certainly most other platforms.

#![feature(simd, test)]

extern crate test;

// simd types require high alignment or the CPU faults
#[simd]
#[derive(Debug, Copy, Clone)]
struct f32x4(f32, f32, f32, f32);

#[repr(packed)]
#[derive(Copy, Clone)]
struct Unalign<T>(T);

struct Breakit {
    x: u8,
    y: Unalign<f32x4>
}

fn main() {
    let val = Breakit { x: 0, y: Unalign(f32x4(0.0, 0.0, 0.0, 0.0)) };

    test::black_box(&val);

    println!("before");

    let ok = val.y;
    test::black_box(ok.0);

    println!("middle");

    let bad = val.y.0;
    test::black_box(bad);

    println!("after");
}

On playpen, it prints:

before
middle
playpen: application terminated abnormally with signal 4 (Illegal instruction)

That is, the bad variable is causing the CPU to fault. The let statement is (in pseudo-Rust) behaving like let bad = load_with_alignment(&val.y.0, align_of::<f32x4>());, but the alignment isn’t satisfied. (The ok line is compiled to a movupd instruction, while the bad is compiled to a movapd: u == unaligned, a == aligned.)

(NB. The use of SIMD types in the example is just to be able to demonstrate the problem on x86. That platform is generally fairly relaxed about pointer alignments and so SIMD & its specialised mov instructions are the easiest way to demonstrate the violated assumptions at runtime. Other platforms may fault on other types.)

Being able to assume that accesses are aligned is useful, for performance, and almost all references will be correctly aligned anyway (repr(packed) types and internal references into them are quite rare).

The problems with unaligned accesses can be avoided by ensuring that the accesses are actually aligned (e.g. via runtime checks, or other external constraints the compiler cannot understand directly). For example, consider the following

#[repr(packed, C)]
struct Bar {
    x: u8,
    y: u16,
    z: u8,
    w: u32,
}

Taking a reference to some of those fields may cause undefined behaviour, but not always. It is always correct to take a reference to x or z since u8 has alignment 1. If the struct value itself is 4-byte aligned (which is not guaranteed), w will also be 4-byte aligned since the u8, u16, u8 take up 4 bytes, hence it is correct to take a reference to w in this case (and only that case). Similarly, it is only correct to take a reference to y if the struct is at an odd address, so that the u16 starts at an even one (i.e. is 2-byte aligned).
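A hedged sketch of such a runtime check (again leaning on the later addr_of! macro): hand out a reference to w only after verifying the alignment the type normally guarantees.

#[repr(C, packed)]
struct Bar {
    x: u8,
    y: u16,
    z: u8,
    w: u32,
}

// Returns a reference to `w` only when the check proves it is aligned.
fn w_ref(bar: &Bar) -> Option<&u32> {
    let addr = std::ptr::addr_of!(bar.w) as usize;
    if addr % std::mem::align_of::<u32>() == 0 {
        // Sound only because the alignment was verified just above.
        Some(unsafe { &*(addr as *const u32) })
    } else {
        None
    }
}

fn main() {
    let bar = Bar { x: 0, y: 0, z: 0, w: 0 };
    // Whether this returns Some depends on where `bar` happens to live.
    println!("{:?}", w_ref(&bar).copied());
}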

Detailed design

It is unsafe to take a reference to the field of a repr(packed) struct. It is still possible, but it is up to the programmer to ensure that the alignment requirements are satisfied. Referencing (by-reference, or by-value) a subfield of a struct (including indexing elements of a fixed-length array) stored inside a repr(packed) struct counts as taking a reference to the packed field and hence is unsafe.

It is still legal to manipulate the fields of a packed struct by value, e.g. the following is correct (and not unsafe), no matter the alignment of bar:

let bar: Bar = ...;

let x = bar.y;
bar.w = 10;

It is illegal to store a type T implementing Drop (including a generic type) in a repr(packed) type, since the destructor of T is passed a reference to that T. The crater run (see appendix) found no crate that needs to use repr(packed) to store a Drop type (or a generic type). The generic type rule is conservatively approximated by disallowing generic repr(packed) structs altogether, but this can be relaxed (see Alternatives).

Concretely, this RFC is proposing the introduction of the // errors in the following code.

struct Baz {
    x: u8,
}

#[repr(packed)]
struct Qux<T> { // error: generic repr(packed) struct
    y: Baz,
    z: u8,
    w: String, // error: storing a Drop type in a repr(packed) struct
    t: [u8; 4],
}

let mut qux = Qux { ... };

// all ok:
let y_val = qux.y;
let z_val = qux.z;
let t_val = qux.t;
qux.y = Baz { ... };
qux.z = 10;
qux.t = [0, 1, 2, 3];

// new errors:

let y_ref = &qux.y; // error: taking a reference to a field of a repr(packed) struct is unsafe
let z_ref = &mut qux.z; // ditto
let y_ptr: *const _ = &qux.y; // ditto
let z_ptr: *mut _ = &mut qux.z; // ditto

let x_val = qux.y.x; // error: directly using a subfield of a field of a repr(packed) struct is unsafe
let x_ref = &qux.y.x; // ditto
qux.y.x = 10; // ditto

let t_val = qux.t[0]; // error: directly indexing an array in a field of a repr(packed) struct is unsafe
let t_ref = &qux.t[0]; // ditto
qux.t[0] = 10; // ditto

(NB. the subfield and indexing cases can be resolved by first copying the packed field’s value onto the stack, and then accessing the desired value.)
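A sketch of that workaround (the Packed type here is hypothetical, with the field types made Copy so they can be duplicated rather than moved):

#[derive(Copy, Clone)]
struct Baz {
    x: u8,
}

#[repr(packed)]
struct Packed {
    y: Baz,
    t: [u8; 4],
}

fn main() {
    let p = Packed { y: Baz { x: 7 }, t: [0, 1, 2, 3] };
    let y = p.y;     // by-value copy onto the (aligned) stack: fine
    let x_val = y.x; // ordinary access on the copy
    let t = p.t;
    let t0 = t[0];   // likewise for array indexing
    assert_eq!((x_val, t0), (7, 0));
}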

Staging

This change will first land as warnings indicating that code will be broken, with the warnings switched to the intended errors after one release cycle.

Drawbacks

This will cause some functionality to stop working in possibly-surprising ways (NB. the drawback here is mainly the “possibly-surprising”, since the functionality is broken with general packed types.). For example, #[derive] usually takes references to the fields of structs, and so #[derive(Clone)] will generate errors. However, this use of derive is incorrect in general (no guarantee that the fields are aligned), and, one can easily replace it by:

#[derive(Copy)]
#[repr(packed)]
struct Foo { ... }

impl Clone for Foo { fn clone(&self) -> Foo { *self } }

Similarly, println!("{}", foo.bar) will be an error despite there not being a visible reference (println! takes one internally). However, this can be resolved by, for instance, assigning to a temporary.

Alternatives

  • A short-term solution would be to feature gate repr(packed) while the kinks are worked out of it.
  • Taking an internal reference could be made flat-out illegal, and the times when it is correct simulated by manual raw-pointer manipulation.
  • The rules could be made less conservative in several cases, however the crater run didn’t indicate any need for this:
    • a generic repr(packed) struct can use the generic in ways that avoid problems with Drop, e.g. if the generic is bounded by Copy, or if the type is only used in ways that are Copy such as behind a *const T.
    • using a subfield of a field of a repr(packed) struct by-value could be OK.

Unresolved questions

None.

Appendix

Crater analysis

Crater was run on 2015/07/23 with a patch that feature gated repr(packed).

High-level summary:

  • several unnecessary uses of repr(packed) (patches have been submitted and merged to remove all of these)
  • most necessary ones are to match the declaration of a struct in C
  • many “necessary” uses can be replaced by byte arrays/arrays of smaller types
  • 8 crates are currently on stable themselves (unsure about deps), 4 are already on nightly
    • 1 of the 8, http2parse, is essentially only used by a nightly-only crate (tendril)
    • 4 of the stable and 1 of the nightly crates don’t need repr(packed) at all
(Summary table of the regressed crates — image, nix, tendril, assimp-sys, stemmer, x86, http2parse, nl80211rs, openal, elfloader, x11, kiss3d — classified by whether each is on stable, actually needs repr(packed), and uses it for FFI only.)

More detailed analysis inline with broken crates. (Don’t miss kiss3d in the non-root section.)

Regression report c85ba3e9cb4620c6ec8273a34cce6707e91778cb vs. 7a265c6d1280932ba1b881f31f04b03b20c258e5

  • From: c85ba3e9cb4620c6ec8273a34cce6707e91778cb
  • To: 7a265c6d1280932ba1b881f31f04b03b20c258e5

Coverage

  • 2617 crates tested: 1404 working / 1151 broken / 40 regressed / 0 fixed / 22 unknown.

Regressions

  • There are 11 root regressions
  • There are 40 regressions

Root regressions, sorted by rank:

  • image-0.3.11 (before) (after)

    • use seems entirely unnecessary (no raw bytewise operations on the struct itself)

    On stable.

  • nix-0.3.9 (before) (after)

    On stable.

  • tendril-0.1.2 (before) (after)

    • use 1 not strictly necessary?
    • use 2 required on 64-bit platforms to get size_of::<Header>() == 12 rather than 16.
    • use 3, as above, does some precise tricks with the layout for optimisation.

    Requires nightly.

  • assimp-sys-0.0.3 (before) (after)

    • many uses, required to match C structs (one example). In author’s words:

      [11:36:15] <eljay> huon: well my assimp binding is basically abandoned for now if you are just worried about breaking things, and seems unlikely anyone is using it :P

    On stable.

  • stemmer-0.1.1 (before) (after)

    • use, completely unnecessary

    On stable.

  • x86-0.2.0 (before) (after)

    Requires nightly.

  • http2parse-0.0.3 (before) (after)

    • use, used to get super-fast “parsing” of headers, by transmuting &[u8] to &[Setting].

    On stable, however:

    [11:30:38] <huon> reem: why is https://github.com/reem/rust-http2parse/blob/b363139ac2f81fa25db504a9256face9f8c799b6/src/payload.rs#L208 packed?
    [11:31:59] <reem> huon: I transmute from & [u8] to & [Setting]
    [11:32:35] <reem> So repr packed gets me the layout I need
    [11:32:47] <reem> With no padding between the u8 and u16
    [11:33:11] <reem> and between Settings
    [11:33:17] <huon> ok
    [11:33:22] <huon> (stop doing bad things :P )
    [11:34:00] <huon> (there's some problems with repr(packed) https://github.com/rust-lang/rust/issues/27060 and we may be feature gating it)
    [11:35:02] <huon> reem: wait, aren't there endianness problems?
    [11:36:16] <reem> Ah yes, looks like I forgot to finish the Setting interface
    [11:36:27] <reem> The identifier and value methods take care of converting to types values
    [11:36:39] <reem> The goal is just to avoid copying the whole buffer and requiring an allocation
    [11:37:01] <reem> Right now the whole parser takes like 9 ns to parse a frame
    [11:39:11] <huon> would you be sunk if repr(packed) was feature gated?
    [11:40:17] <huon> or, is maybe something like `struct SettingsRaw { identifier:  [u8; 2], value:  [u8; 4] }` OK (possibly with conversion functions etc.)?
    [11:40:46] <reem> Yea, I could get around it if I needed to
    [11:40:58] <reem> Anyway the primary consumer is transfer and I'm running on nightly there
    [11:41:05] <reem> So it doesn't matter too much
    
  • nl80211rs-0.1.0 (before) (after)

    On stable.

  • openal-0.2.1 (before) (after)

    On stable.

  • elfloader-0.0.1 (before) (after)

    Requires nightly.

  • x11cap-0.1.0 (before) (after)

    • use unnecessary.

    Requires nightly.

Non-root regressions, sorted by rank:

Summary

A Cargo crate’s dependencies are associated with constraints that specify the set of versions of the dependency with which the crate is compatible. These constraints range from accepting exactly one version (=1.2.3), to accepting a range of versions (^1.2.3, ~1.2.3, >= 1.2.3, < 3.0.0), to accepting any version at all (*). This RFC proposes to update crates.io to reject publishes of crates that have compile or build dependencies with a wildcard version constraint.

Motivation

Version constraints are a delicate balancing act between stability and flexibility. On one extreme, one can lock dependencies to an exact version. From one perspective, this is great, since the dependencies a user will consume will be the same that the developers tested against. However, on any nontrivial project, one will inevitably run into conflicts where library A depends on version 1.2.3 of library B, but library C depends on version 1.2.4, at which point the only option is to force the version of library B to one of them and hope everything works.

On the other hand, a wildcard (*) constraint will never conflict with anything! There are other things to worry about here, though. A version constraint is fundamentally an assertion from a library’s author to its users that the library will work with any version of a dependency that matches its constraint. A wildcard constraint is claiming that the library will work with any version of the dependency that has ever been released or will ever be released, forever. This is a somewhat absurd guarantee to make - forever is a long time!

Absurd guarantees on their own are not necessarily sufficient motivation to make a change like this. The real motivation is the effect that these guarantees have on consumers of libraries.

As an example, consider the openssl crate. It is one of the most popular libraries on crates.io, with several hundred downloads every day. 50% of the libraries that depend on it have a wildcard constraint on the version. None of them can build against every version that has ever been released. Indeed, no libraries can, since many of those releases came before Rust 1.0 was released. In addition, almost all of them will fail to compile against version 0.7 of openssl when it is released. When that happens, users of those libraries will be forced to manually override Cargo’s version selection every time it is recalculated. This is not a fun time.

Bad version restrictions are also “viral”. Even if a developer is careful to pick dependencies that have reasonable version restrictions, there could be a wildcard constraint hiding five transitive levels down. Manually searching the entire dependency graph is an exercise in frustration that shouldn’t be necessary.

On the other hand, consider a library that has a version constraint of ^0.6. When openssl 0.7 releases, the library will either continue to work against version 0.7, or it won’t. In the first case, the author can simply extend the constraint to >= 0.6, < 0.8 and consumers can use it with version 0.6 or 0.7 without any trouble. If it does not work against version 0.7, consumers of the library are fine! Their code will continue to work without any manual intervention. The author can update the library to work with version 0.7 and release a new version with a constraint of ^0.7 to support consumers that want to use that newer release.

Making crates.io more picky than Cargo itself is not a new concept; it currently requires several items in published crates that Cargo will not:

  • A valid license
  • A description
  • A list of authors

All of these requirements are in place to make it easier for developers to use the libraries uploaded to crates.io - that’s why crates are published, after all! A restriction on wildcards is another step down that path.

Note that this restriction would only apply to normal compile dependencies and build dependencies, but not to dev dependencies. Dev dependencies are only used when testing a crate, so it doesn’t matter to downstream consumers if they break.

This RFC is not trying to prohibit all constraints that would run into the issues described above. For example, the constraint >= 0.0.0 is exactly equivalent to *. This is for a few reasons:

  • It’s not totally clear how to precisely define “reasonable” constraints. For example, one might want to forbid constraints that allow unreleased major versions. However, some crates provide strong guarantees that any breaks will be followed by one full major version of deprecation. If a library author is sure that their crate doesn’t use any deprecated functionality of that kind of dependency, it’s completely safe and reasonable to explicitly extend the version constraint to include the next unreleased version.
  • Cargo and crates.io are missing tools to deal with overly-restrictive constraints. For example, it’s not currently possible to force Cargo to allow dependency resolution that violates version constraints. Without this kind of support, it is somewhat risky to push too hard towards tight version constraints.
  • Wildcard constraints are popular, at least in part, because they are the path of least resistance when writing a crate. Without wildcard constraints, crate authors will be forced to figure out what kind of constraints make the most sense in their use cases, which may very well be good enough.

Detailed design

The prohibition on wildcard constraints will be rolled out in stages to make sure that crate authors have lead time to figure out their versioning stories.

In the next stable Rust release (1.4), Cargo will issue warnings for all wildcard constraints on build and compile dependencies when publishing, but publishes with those constraints will still succeed. Alongside the next stable release after that (1.5 on December 11th, 2015), crates.io will be updated to reject publishes of crates with those kinds of dependency constraints. Note that the check will happen on the crates.io side rather than on the Cargo side, since Cargo can publish to locations other than crates.io, which may not worry about these restrictions.

Drawbacks

The barrier to entry when publishing a crate will be mildly higher.

Tightening constraints has the potential to cause resolution breakage when no breakage would occur otherwise.

Alternatives

We could continue allowing these kinds of constraints, but complain in a “sufficiently annoying” manner during publishes to discourage their use.

This RFC originally proposed forbidding all constraints that had no upper version bound but has since been pulled back to just * constraints.

Summary

This RFC proposes a policy around the crates under the rust-lang github organization that are not part of the Rust distribution (compiler or standard library). At a high level, it proposes that these crates be:

  • Governed similarly to the standard library;
  • Maintained at a similar level to the standard library, including platform support;
  • Carefully curated for quality.

Motivation

There are three main motivations behind this RFC.

Keeping std small. There is a widespread desire to keep the standard library reasonably small, and for good reason: the stability promises made in std are tied to the versioning of Rust itself, as are updates to it, meaning that the standard library has much less flexibility than other crates enjoy. While we do plan to continue to grow std, and there are legitimate reasons for APIs to live there, we still plan to take a minimalistic approach. See this discussion for more details.

The desire to keep std small is in tension with the desire to provide high-quality libraries that belong to the whole Rust community and cover a wider range of functionality. The poster child here is the regex crate, which provides vital functionality but is not part of the standard library or basic Rust distribution – and which is, in principle, under the control of the whole Rust community.

This RFC resolves the tension between a “batteries included” Rust and a small std by treating rust-lang crates as, in some sense, “the rest of the standard library”. While this doesn’t solve the entire problem of curating the library ecosystem, it offers a big step for some of the most significant/core functionality we want to commit to.

Staging std. For cases where we do want to grow the standard library, we of course want to heavily vet APIs before their stabilization. Historically we’ve done so by landing the APIs directly in std, but marked unstable, relegating their use to nightly Rust. But in many cases, new std APIs can just as well begin their life as external crates, usable on stable Rust, and ultimately stabilized wholesale. The recent std::net RFC is a good example of this phenomenon.

The main challenge to making this kind of “std staging” work is getting sufficient visibility, central management, and community buy-in for the library prior to stabilization. When there is widespread desire to extend std in a certain way, this RFC proposes that the extension can start its life as an external rust-lang crate (ideally usable by stable Rust). It also proposes an eventual migration path into std.

Cleanup. During the stabilization of std, a fair amount of functionality was moved out into external crates hosted under the rust-lang github organization. The quality and future prospects of these crates varies widely, and we would like to begin to organize and clean them up.

Detailed design

The lifecycle of a rust-lang crate

First, two additional github organizations are proposed:

  • rust-lang-nursery
  • rust-lang-deprecated

New crates start their life in a 0.X series that lives in the rust-lang-nursery. Crates in this state do not represent a major commitment from the Rust maintainers; rather, they signal a trial period. A crate enters the nursery when (1) there is already a working body of code and (2) the library subteam approves a petition for inclusion. The petition is informal (not an RFC), and can take the form of a discuss post laying out the motivation and perhaps some high-level design principles, and linking to the working code.

If the library team accepts a crate into the nursery, they are indicating an interest in ultimately advertising the crate as “a core part of Rust”, and in maintaining the crate permanently. During the 0.X series in the nursery, the original crate author maintains control of the crate, approving PRs and so on, but the library subteam and the broader community are expected to participate. As we’ll see below, nursery crates will be advertised (though not in the same way as full rust-lang crates), increasing the chances that the crate is scrutinized before being promoted to the next stage.

Eventually, a nursery crate will either fail (and move to rust-lang-deprecated) or reach a point where a 1.0 release would be appropriate. The failure case will be determined by means of an RFC.

If, on the other hand, a library reaches the 1.0 point, it is ready to be promoted into rust-lang proper. To do so, an RFC must be written outlining the motivation for the crate, the reasons that community ownership is important, and delving into the API design and its rationale. These RFCs are intended to follow similar lines to the pre-1.0 stabilization RFCs for the standard library (such as collections or Duration) – which have been very successful in improving API design prior to stabilization. Once a “1.0 RFC” is approved by the libs team, the crate moves into the rust-lang organization, and is henceforth governed by the whole Rust community. That means in particular that significant changes (certainly those that would require a major version bump, but other substantial PRs as well) are reviewed by the library subteam and may require an RFC. On the other hand, the community has broadly agreed to maintain the library in perpetuity (unless it is later deprecated). And again, as we’ll see below, the promoted crate is very visibly advertised as part of the “core Rust” package.

Promotion to 1.0 requires first-class support on all first-tier platforms, except for platform-specific libraries.

Crates in rust-lang may issue new major versions, just like any other crates, though such changes should go through the RFC process. While the library subteam is responsible for major decisions about the library after 1.0, its original author(s) will of course wield a great deal of influence, and their objections will be given due weight in the consensus process.

Relation to std

In many cases, the above description of the crate lifecycle is complete. But some rust-lang crates are destined for std. Usually this will be clear up front.

When a std-destined crate has reached sufficient maturity, the libs subteam can call a “final comment period” for moving it into std proper. Assuming there are no blocking objections, the code is moved into std, and the original repo is left intact, with the following changes:

  • a minor version bump,
  • conditionally replacing all definitions with pub use from std (which will require the ability to cfg switch on feature/API availability – a highly-desired feature on its own).

By re-routing the library to std when available we provide seamless compatibility between users of the library externally and in std. In particular, traits and types defined in the crate are compatible across either way of importing them.

Deprecation

At some point a library may become stale – either because it failed to make it out of the nursery, or else because it was supplanted by a superior library. Nursery and rust-lang crates can be deprecated only through an RFC. This is expected to be a rare occurrence.

Deprecated crates move to rust-lang-deprecated and are subsequently minimally maintained. Alternatively, if someone volunteers to maintain the crate, ownership can be transferred externally.

Advertising

Part of the reason for having rust-lang crates is to have a clear, short list of libraries that are broadly useful, vetted and maintained. But where should this list appear?

This RFC doesn’t specify the complete details, but proposes a basic direction:

  • The crates in rust-lang should appear in the sidebar in the core rustdocs distributed with Rust, alongside the standard library. (For nightly releases, we should include the nursery crates as well.)

  • The crates should also be published on crates.io, and should somehow be badged. But the design of a badging/curation system for crates.io is out of scope for this RFC.

Plan for existing crates

There are already a number of non-std crates in rust-lang. Below, we give the full list along with recommended actions:

Transfer ownership

Please volunteer if you’re interested in taking one of these on!

  • rlibc
  • semver
  • threadpool

Move to rust-lang-nursery

  • bitflags
  • getopts
  • glob
  • libc
  • log
  • rand (note, @huonw has a major revamp in the works)
  • regex
  • rustc-serialize (but will likely be replaced by serde or other approach eventually)
  • tempdir (destined for std after reworking)
  • uuid

Move to rust-lang-deprecated

  • fourcc: highly niche
  • hexfloat: niche
  • num: this is essentially a dumping ground from 1.0 stabilization; needs a complete re-think.
  • term: API needs total overhaul
  • time: needs total overhaul; destined for std
  • url: replaced by https://github.com/servo/rust-url

Drawbacks

The drawbacks of this RFC are largely social:

  • Emphasizing rust-lang crates may alienate some in the Rust community, since it means that certain libraries obtain a special “blessing”. This is mitigated by the fact that these libraries also become owned by the community at large.

  • On the other hand, requiring that ownership/governance be transferred to the library subteam may be a disincentive for library authors, since they lose unilateral control of their libraries. But this is an inherent aspect of the policy design, and the vastly increased visibility of libraries is likely a strong enough incentive to overcome this downside.

Alternatives

The main alternative would be to not maintain other crates under the rust-lang umbrella, and to offer some other means of curation (the latter of which is needed in any case).

That would be a missed opportunity, however; Rust’s governance and maintenance model has been very successful so far, and given our minimalistic plans for the standard library, it is very appealing to have some other way to apply the full Rust community in taking care of additional crates.

Unresolved questions

Part of the maintenance standard for Rust is the CI infrastructure, including bors/homu. What level of CI should we provide for these crates, and how do we do it?

Summary

Document and expand the open options.

Motivation

The options that can be passed to the OS when opening a file vary between systems. And even if the options seem the same or similar, there may be unexpected corner cases.

This RFC attempts to

  • describe the different corner cases and behaviour of various operating systems.
  • describe the intended behaviour and interaction of Rust’s options.
  • remedy cross-platform inconsistencies.
  • suggest extra options to expose more platform-specific options.

Detailed design

Access modes

Read-only

Open a file for read-only.

Write-only

Open a file for write-only.

If a file already exists, the contents of that file get overwritten, but it is not truncated. Example:

// contents of file before: "aaaaaaaa"
file.write(b"bbbb")
// contents of file after: "bbbbaaaa"

Read-write

This is the simple combination of read-only and write-only.

Append-mode

Append-mode is similar to write-only, but all writes always happen at the end of the file. This mode is especially useful if multiple processes or threads write to a single file, like a log file. The operating system guarantees all writes are atomic: no writes get mangled because another process writes at the same time. No guarantees are made about the order writes end up in the file though.

Note: sadly append-mode is not atomic on NFS filesystems.

One perhaps obvious note when using append-mode: make sure that all data that belongs together is written to the file in one operation. This can be done by concatenating strings before passing them to write(), or using a buffered writer (with a more than adequately sized buffer) and calling flush() when the message is complete.
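A sketch of that advice (the file name is hypothetical, and the ? operator is used where contemporary code would have written try!):

use std::fs::OpenOptions;
use std::io::{BufWriter, Write};

fn log_line(msg: &str) -> std::io::Result<()> {
    let file = OpenOptions::new().append(true).create(true).open("app.log")?;
    // A buffer larger than any single record means each record reaches
    // the OS in one write and cannot be interleaved with other writers.
    let mut w = BufWriter::with_capacity(4096, file);
    writeln!(w, "{}", msg)?;
    w.flush()
}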

Implementation detail: on Windows, opening a file in append-mode sets one flag fewer: the right to change existing data is left out. On Unix, opening a file in append-mode sets one flag extra, which puts the file descriptor in append-mode. You could say that on Windows write is a superset of append, while on Unix append is a superset of write.

Because of this, append is treated as a separate access mode in Rust, and if .append(true) is specified then .write(true) is ignored.

Read-append

Writing to the file works exactly the same as in append-mode.

Reading is more difficult, and may involve a lot of seeking. When the file is opened, the position for reading may be set at the end of the file, so you should first seek to the beginning. Also after every write the position is set to the end of the file. So before writing you should save the current position, and restore it after the write.

try!(file.seek(SeekFrom::Start(0)));             // reading may start at the end: rewind first
try!(file.read(&mut buffer));
let pos = try!(file.seek(SeekFrom::Current(0))); // remember where reading stopped
try!(file.write(b"foo"));                        // the write moves the position to the end
try!(file.seek(SeekFrom::Start(pos)));           // restore the read position
try!(file.read(&mut buffer));

No access mode set

Even if you don’t have read or write permission to a file, it is possible to open it on some systems by opening it with no access mode set (or the equivalent thereof). This is true for Windows, Linux (with the flag O_PATH) and GNU/Hurd.

What can be done with a file opened this way is system-specific and niche. All three operating systems support reading metadata such as the file size and timestamps (on Linux, since version 2.6.39).

On practically all variants of Unix, opening a file without specifying the access mode falls back to opening the file read-only. This is because of the way the access flags were traditionally defined: O_RDONLY = 0, O_WRONLY = 1 and O_RDWR = 2. When no flags are set, the access mode is 0: read-only. But code that relies on this is considered buggy and not portable.

What should Rust do when no access mode is specified? Fall back to read-only, open with the most similar system-specific mode, or always fail to open? This RFC proposes to always fail. This is the conservative choice, and can be changed to open in a system-specific mode if a clear use case arises. Implementing a fallback is not worth it: it is no great effort to set the access mode explicitly.

Windows-specific

.access_mode(FILE_READ_DATA)

On Windows you can specify in detail whether you want read and/or write access to the file’s data, attributes and/or extended attributes. Managing permissions in such detail has proven itself too difficult, and generally not worth it.

In Rust, .read(true) gives you read access to the data, attributes and extended attributes. Similarly, .write(true) gives write access to those three, and the right to append data beyond the current end of the file.

But if you want fine-grained control, with access_mode you have it.

.access_mode() overrides the access mode set with Rust’s cross-platform options. Reasons to do so:

  • it is not possible to un-set the flags set by Rust’s options;
  • otherwise the cross-platform options have to be wrapped with #[cfg(unix)], instead of only having to wrap the Windows-specific option.
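For example, a sketch requesting only the right to read file data (the FILE_READ_DATA value is hard-coded from winnt.h rather than taken from a bindings crate):

#[cfg(windows)]
fn open_data_only(path: &str) -> std::io::Result<std::fs::File> {
    use std::fs::OpenOptions;
    use std::os::windows::fs::OpenOptionsExt;
    const FILE_READ_DATA: u32 = 0x0001; // from winnt.h
    // Request exactly this access right; no generic rights are added.
    OpenOptions::new().access_mode(FILE_READ_DATA).open(path)
}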

As a reference, these are the flags set by Rust’s access modes:

| bit | flag | read | write | read-write | append | read-append |
|----:|------|------|-------|------------|--------|-------------|
| | generic rights | | | | | |
| 31 | GENERIC_READ | set | | set | | set |
| 30 | GENERIC_WRITE | | set | set | | |
| 29 | GENERIC_EXECUTE | | | | | |
| 28 | GENERIC_ALL | | | | | |
| | specific rights | | | | | |
| 0 | FILE_READ_DATA | implied | | implied | | implied |
| 1 | FILE_WRITE_DATA | | implied | implied | | |
| 2 | FILE_APPEND_DATA | | implied | implied | set | set |
| 3 | FILE_READ_EA | implied | | implied | | implied |
| 4 | FILE_WRITE_EA | | implied | implied | set | set |
| 6 | FILE_EXECUTE | | | | | |
| 7 | FILE_READ_ATTRIBUTES | implied | | implied | | implied |
| 8 | FILE_WRITE_ATTRIBUTES | | implied | implied | set | set |
| | standard rights | | | | | |
| 16 | DELETE | | | | | |
| 17 | READ_CONTROL | implied | implied | implied | set | set + implied |
| 18 | WRITE_DAC | | | | | |
| 19 | WRITE_OWNER | | | | | |
| 20 | SYNCHRONIZE | implied | implied | implied | set | set + implied |

The implied flags can be specified explicitly with the constants FILE_GENERIC_READ and FILE_GENERIC_WRITE.

Creation modes

| creation mode | file exists | file does not exist | Unix | Windows |
|---------------|-------------|---------------------|------|---------|
| not set (open existing) | open | fail | | OPEN_EXISTING |
| .create(true) | open | create | O_CREAT | OPEN_ALWAYS |
| .truncate(true) | truncate | fail | O_TRUNC | TRUNCATE_EXISTING |
| .create(true).truncate(true) | truncate | create | O_CREAT + O_TRUNC | CREATE_ALWAYS |
| .create_new(true) | fail | create | O_CREAT + O_EXCL | CREATE_NEW + FILE_FLAG_OPEN_REPARSE_POINT |

Not set (open existing)

Open an existing file. Fails if the file does not exist.

Create

.create(true)

Open an existing file, or create a new file if it does not already exist.

Truncate

.truncate(true)

Open an existing file, and truncate it to zero length. Fails if the file does not exist. Attributes and permissions of the truncated file are preserved.

Note when using the Windows-specific .access_mode(): truncating will only work if the GENERIC_WRITE flag is set. Setting the equivalent individual flags is not enough.

Create and truncate

.create(true).truncate(true)

Open an existing file and truncate it to zero length, or create a new file if it does not already exist.

Note when using the Windows-specific .access_mode(): Contrary to only .truncate(true), with .create(true).truncate(true) Windows can truncate an existing file without requiring any flags to be set.

On Windows the attributes of an existing file can cause .open() to fail. If the existing file has the attribute hidden set, it is necessary to open with FILE_ATTRIBUTE_HIDDEN. Similarly if the existing file has the attribute system set, it is necessary to open with FILE_ATTRIBUTE_SYSTEM. See the Windows-specific .attributes() below on how to set these.

Create_new

.create_new(true)

Create a new file, and fail if it already exists.

On Unix this option started its life as a security measure. If you first check whether a file exists with exists() and then call open(), some other process may have created it in the meantime. .create_new() is an atomic operation that will fail if a file already exists at the location.

.create_new() has a special rule on Unix for dealing with symlinks. If there is a symlink at the final element of its path (e.g. the filename), open will fail. This is to prevent a vulnerability where an unprivileged process could trick a privileged process into following a symlink and overwriting a file the unprivileged process has no access to. See Exploiting symlinks and tmpfiles. On Windows this behaviour is imitated by specifying not only CREATE_NEW but also FILE_FLAG_OPEN_REPARSE_POINT.

Simply put: nothing is allowed to exist on the target location, also no (dangling) symlink.

If .create_new(true) is set, .create() and .truncate() are ignored.
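A sketch of the typical use (the path is hypothetical): atomically claim a location, failing if anything, even a dangling symlink, is already there.

use std::fs::{File, OpenOptions};
use std::io;

fn claim_lockfile(path: &str) -> io::Result<File> {
    // Fails with ErrorKind::AlreadyExists if any file, directory or
    // symlink already occupies `path`; otherwise creates it atomically.
    OpenOptions::new().write(true).create_new(true).open(path)
}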

Unix-specific: Mode

.mode(0o666)

On Unix the new file is created by default with permissions 0o666 minus the system’s umask (see Wikipedia). It is possible to set another mode with this option.

If the file already exists, or if neither .create(true) nor .create_new(true) is specified, .mode() is ignored.

Rust currently does not expose a way to modify the umask.
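A sketch of the option in use (path hypothetical):

#[cfg(unix)]
fn create_private(path: &str) -> std::io::Result<std::fs::File> {
    use std::fs::OpenOptions;
    use std::os::unix::fs::OpenOptionsExt;
    // Ask for 0o600 (owner read/write only); the process umask may
    // still clear bits from this mode.
    OpenOptions::new().write(true).create(true).mode(0o600).open(path)
}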

Windows-specific: Attributes

.attributes(FILE_ATTRIBUTE_READONLY | FILE_ATTRIBUTE_HIDDEN | FILE_ATTRIBUTE_SYSTEM)

Files on Windows can have several attributes, most commonly one or more of the following four: readonly, hidden, system and archive. Most other attributes are properties set by the file system. Of these, only FILE_ATTRIBUTE_ENCRYPTED, FILE_ATTRIBUTE_TEMPORARY and FILE_ATTRIBUTE_OFFLINE can be set when creating a new file; all others are silently ignored.

There is no use in setting the archive attribute, as Windows sets it automatically when the file is newly created or modified. This flag may then be used by backup applications as an indication of which files have changed.

If a new file is created because it does not yet exist and .create(true) or .create_new(true) are specified, the new file is given the attributes declared with .attributes().

If an existing file is opened with .create(true).truncate(true), its existing attributes are preserved and combined with the ones declared with .attributes().

In all other cases the attributes get ignored.

Combination of access modes and creation modes

Some combinations of creation modes and access modes do not make sense.

For example: .create(true) when opening read-only. If the file does not already exist, it is created and you start reading from an empty file. And it is questionable whether you have permission to create a new file if you don’t have write access. A new file is created on all systems I have tested, but there is no documentation that explicitly guarantees this behaviour.

The same is true for .truncate(true) with read and/or append mode. Should an existing file be modified if you don’t have write permission? On Unix it is undefined (see some comments on the OpenBSD mailing list). The behaviour on Windows is inconsistent and depends on whether .create(true) is set.

To give guarantees about cross-platform (and sane) behaviour, Rust should allow only the following combinations of access modes and creation modes:

| creation mode | read | write | read-write | append | read-append |
|---------------|------|-------|------------|--------|-------------|
| not set (open existing) | X | X | X | X | X |
| create | | X | X | X | X |
| truncate | | X | X | | |
| create and truncate | | X | X | | |
| create_new | | X | X | X | X |

It is possible to bypass these restrictions by using system-specific options (as in this case you already have to take care of cross-platform support yourself). On Unix this is done by setting the creation mode using .custom_flags() with O_CREAT, O_TRUNC and/or O_EXCL. On Windows this can be done by manually specifying .access_mode() (see above).

Asynchronous IO

Out of scope.

Other options

Inheritance of file descriptors

Leaking file descriptors to child processes can cause problems and can be a security vulnerability. See this report by Python.

On Windows, child processes do not inherit file descriptors by default (but this can be changed). On Unix they always inherit, unless the close-on-exec flag is set.

The close-on-exec flag can be set atomically when opening the file, or later with fcntl. The O_CLOEXEC flag is part of the relatively new POSIX.1-2008 standard, and all modern versions of Unix support it. The following table lists the operating systems for which we can rely on the flag being supported.

| os            | since version | oldest supported version                 |
|---------------|---------------|------------------------------------------|
| OS X          | 10.6          | 10.7?                                    |
| Linux         | 2.6.23        | 2.6.32 (supported by Rust)               |
| FreeBSD       | 8.3           | 8.4                                      |
| OpenBSD       | 5.0           | 5.7                                      |
| NetBSD        | 6.0           | 5.0                                      |
| Dragonfly BSD | 3.2           | ? (3.2 is not updated since 2012-12-14)  |
| Solaris       | 11            | 10                                       |

This means we can always set the O_CLOEXEC flag, and do an additional fcntl if the OS is NetBSD or Solaris.
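A hedged sketch of that fallback with the libc crate (the path is hypothetical, error handling elided):

extern crate libc;

use std::ffi::CString;

fn main() {
    let path = CString::new("data.txt").unwrap();
    unsafe {
        // Ask for close-on-exec atomically at open time...
        let fd = libc::open(path.as_ptr(), libc::O_RDONLY | libc::O_CLOEXEC);
        if fd >= 0 {
            // ...and set the flag again with fcntl for systems that may
            // silently ignore O_CLOEXEC (NetBSD 5.x, Solaris 10).
            let flags = libc::fcntl(fd, libc::F_GETFD);
            libc::fcntl(fd, libc::F_SETFD, flags | libc::FD_CLOEXEC);
            libc::close(fd);
        }
    }
}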

Custom flags

.custom_flags()

Windows and the various flavours of Unix support flags that are not cross-platform, but that can be useful in some circumstances. On Unix they are passed as the flags argument to open(), on Windows as the dwFlagsAndAttributes parameter.

The cross-platform options of Rust can do magic: they can set any flag necessary to ensure it works as expected. For example, .append(true) on Unix not only sets the flag O_APPEND, but also automatically O_WRONLY or O_RDWR. This special treatment is not available for the custom flags.

Custom flags can only set flags, not remove flags set by Rust’s options.

For the custom flags on Unix, the bits that define the access mode are masked out with O_ACCMODE, to ensure they do not interfere with the access mode set by Rust’s options.
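A minimal sketch with the .custom_flags() extension as later stabilized on Unix (the path is hypothetical):

extern crate libc;

use std::fs::OpenOptions;
use std::os::unix::fs::OpenOptionsExt;

fn main() -> std::io::Result<()> {
    // O_NOFOLLOW makes the open fail if the final path component is a
    // symlink; the access mode itself still comes from .read(true), since
    // access-mode bits in the custom flags are masked out with O_ACCMODE.
    let _f = OpenOptions::new()
        .read(true)
        .custom_flags(libc::O_NOFOLLOW)
        .open("config.toml")?;
    Ok(())
}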

Windows:

| bit | flag                            |
|-----|---------------------------------|
| 31  | FILE_FLAG_WRITE_THROUGH         |
| 30  | FILE_FLAG_OVERLAPPED            |
| 29  | FILE_FLAG_NO_BUFFERING          |
| 28  | FILE_FLAG_RANDOM_ACCESS         |
| 27  | FILE_FLAG_SEQUENTIAL_SCAN       |
| 26  | FILE_FLAG_DELETE_ON_CLOSE       |
| 25  | FILE_FLAG_BACKUP_SEMANTICS      |
| 24  | FILE_FLAG_POSIX_SEMANTICS       |
| 23  | FILE_FLAG_SESSION_AWARE         |
| 21  | FILE_FLAG_OPEN_REPARSE_POINT    |
| 20  | FILE_FLAG_OPEN_NO_RECALL        |
| 19  | FILE_FLAG_FIRST_PIPE_INSTANCE   |
| 18  | FILE_FLAG_OPEN_REQUIRING_OPLOCK |

Unix:

| POSIX       | Linux       | OS X        | FreeBSD     | OpenBSD     | NetBSD      | Dragonfly BSD | Solaris     |
|-------------|-------------|-------------|-------------|-------------|-------------|---------------|-------------|
| O_TRUNC     | O_TRUNC     | O_TRUNC     | O_TRUNC     | O_TRUNC     | O_TRUNC     | O_TRUNC       | O_TRUNC     |
| O_CREAT     | O_CREAT     | O_CREAT     | O_CREAT     | O_CREAT     | O_CREAT     | O_CREAT       | O_CREAT     |
| O_EXCL      | O_EXCL      | O_EXCL      | O_EXCL      | O_EXCL      | O_EXCL      | O_EXCL        | O_EXCL      |
| O_APPEND    | O_APPEND    | O_APPEND    | O_APPEND    | O_APPEND    | O_APPEND    | O_APPEND      | O_APPEND    |
| O_CLOEXEC   | O_CLOEXEC   | O_CLOEXEC   | O_CLOEXEC   | O_CLOEXEC   | O_CLOEXEC   | O_CLOEXEC     | O_CLOEXEC   |
| O_DIRECTORY | O_DIRECTORY | O_DIRECTORY | O_DIRECTORY | O_DIRECTORY | O_DIRECTORY | O_DIRECTORY   | O_DIRECTORY |
| O_NOCTTY    | O_NOCTTY    | O_NOCTTY    |             | O_NOCTTY    | O_NOCTTY    |               | O_NOCTTY    |
| O_NOFOLLOW  | O_NOFOLLOW  | O_NOFOLLOW  | O_NOFOLLOW  | O_NOFOLLOW  | O_NOFOLLOW  | O_NOFOLLOW    | O_NOFOLLOW  |
| O_NONBLOCK  | O_NONBLOCK  | O_NONBLOCK  | O_NONBLOCK  | O_NONBLOCK  | O_NONBLOCK  | O_NONBLOCK    | O_NONBLOCK  |
| O_SYNC      | O_SYNC      | O_SYNC      | O_SYNC      | O_SYNC      | O_SYNC      | O_FSYNC       | O_SYNC      |
| O_DSYNC     | O_DSYNC     | O_DSYNC     |             |             | O_DSYNC     |               | O_DSYNC     |
| O_RSYNC     |             |             |             |             | O_RSYNC     |               | O_RSYNC     |
|             | O_DIRECT    |             | O_DIRECT    |             | O_DIRECT    | O_DIRECT      |             |
|             | O_ASYNC     | O_ASYNC     |             |             |             |               |             |
|             | O_NOATIME   |             |             |             |             |               |             |
|             | O_PATH      |             |             |             |             |               |             |
|             | O_TMPFILE   |             |             |             |             |               |             |
|             |             | O_SHLOCK    | O_SHLOCK    | O_SHLOCK    | O_SHLOCK    | O_SHLOCK      |             |
|             |             | O_EXLOCK    | O_EXLOCK    | O_EXLOCK    | O_EXLOCK    | O_EXLOCK      |             |
|             |             | O_SYMLINK   |             |             |             |               |             |
|             |             | O_EVTONLY   |             |             |             |               |             |
|             |             |             |             |             | O_NOSIGPIPE |               |             |
|             |             |             |             |             | O_ALT_IO    |               |             |
|             |             |             |             |             |             |               | O_NOLINKS   |
|             |             |             |             |             |             |               | O_XATTR     |

Windows-specific flags and attributes

The following parameters of CreateFile2 currently have no equivalent options in Rust to set them:

DWORD                 dwSecurityQosFlags;
LPSECURITY_ATTRIBUTES lpSecurityAttributes;
HANDLE                hTemplateFile;

Changes from current

Access mode

  • Current: .append(true) requires .write(true) on Unix, but not on Windows. New: ignore .write() if .append(true) is specified.
  • Current: when .append(true) is set, it is not possible to modify file attributes on Windows, but it is possible to change the file mode on Unix. New: allow file attributes to be modified on Windows in append-mode.
  • Current: On Windows .read() and .write() set individual bit flags instead of generic flags. New: set generic flags, as recommended by Microsoft, e.g. GENERIC_WRITE instead of FILE_GENERIC_WRITE and GENERIC_READ instead of FILE_GENERIC_READ. Truncate is currently broken on Windows; this fixes it.
  • Current: when no access mode is set, this falls back to opening the file read-only on Unix, and opening with no access permissions on Windows. New: always fail to open if no access mode is set.
  • Rename the Windows-specific .desired_access() to .access_mode().

Creation mode

  • Implement .create_new().
  • Do not allow .truncate(true) if the access mode is read-only and/or append.
  • Do not allow .create(true) or .create_new(true) if the access mode is read-only.
  • Remove the Windows-specific .creation_disposition(). It has no use, because all its options can be set in a cross-platform way.
  • Split the Windows-specific .flags_and_attributes() into .custom_flags() and .attributes(). This is a form of future-proofing, as the new Windows 8 CreateFile2 also splits these attributes. This has the advantage of a clear separation between file attributes, which are somewhat similar to Unix mode bits, and the custom flags that modify the behaviour of the current file handle.

Other options

  • Set the close-on-exec flag atomically on Unix if supported.
  • Implement .custom_flags() on Windows and Unix to pass custom flags to the system.

Drawbacks

This adds a thin layer on top of the raw operating system calls. In this pull request the conclusion was: this seems like a good idea for a “high level” abstraction like OpenOptions.

This adds extra options that many applications can do without (otherwise they would already have been implemented).

Also this RFC is in line with the vision for IO in the IO-OS-redesign:

  • [The APIs] should impose essentially zero cost over the underlying OS services; the core APIs should map down to a single syscall unless more are needed for cross-platform compatibility.
  • The APIs should largely feel like part of “Rust” rather than part of any legacy, and they should enable truly portable code.
  • Coverage. The std APIs should over time strive for full coverage of non-niche, cross-platform capabilities.

Alternatives

The first version of this RFC contained a proposal for options that control caching and file locking. They are out of scope for now, but included here for reference.

Sharing / locking

On Unix it is possible for multiple processes to read and write to the same file at the same time.

When you open a file on Windows, the system by default denies other processes permission to read from, write to, or delete the file. By setting the sharing mode it is possible to allow other processes read, write and/or delete access. For cross-platform consistency, Rust imitates Unix by setting all sharing flags.

Unix has no equivalent to the kind of file locking that Windows has. It has two types of advisory locking, POSIX-style and BSD-style. Advisory means that any process that does not use locking itself can happily ignore the locking of another process. As if that is not bad enough, both have problems that make them close to unusable for modern multi-threaded programs. Linux may in some very rare cases support mandatory file locking, but it is just as broken as the advisory kind.

Windows-specific: Share mode

.share_mode(FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE)

It is possible to set the individual share permissions with .share_mode().

The current philosophy of this function is that others should have no rights, unless explicitly granted. I think a better fit for Rust would be to give all others all rights, unless explicitly denied, e.g.: .share_mode(DENY_READ | DENY_WRITE | DENY_DELETE).
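For comparison, a hedged sketch with the current “no rights unless granted” API (the FILE_SHARE_READ value is taken from the Windows SDK; the file name is hypothetical):

use std::fs::OpenOptions;
use std::os::windows::fs::OpenOptionsExt;

const FILE_SHARE_READ: u32 = 0x1; // value from the Windows SDK

fn main() -> std::io::Result<()> {
    // Allow other processes to read, but deny writing and deleting for as
    // long as we hold the handle open.
    let _f = OpenOptions::new()
        .read(true)
        .share_mode(FILE_SHARE_READ)
        .open("shared.txt")?;
    Ok(())
}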

Controlling caching

When dealing with file systems and hard disks, there are several kinds of caches. Giving hints about them or controlling them may improve performance or data consistency.

  1. read-ahead (performance of reads and overwrites) Instead of requesting only the data necessary for a single read() call from a storage device, an operating system may request more data than necessary to have it already available for the next read.
  2. os cache (performance of reads and overwrites) The os may keep the data of previous reads and writes in memory to increase the performance of future reads and possibly writes.
  3. os staging area (convenience/performance of reads and writes) The size and alignment of data reads and writes to a disk should correspond to sectors on the storage device, usually 512 or 4096 bytes. The os makes sure a regular write() or read() doesn’t have to care about this. For example, a small write (say, 100 bytes) has to rewrite a whole sector. The os often has the surrounding data in its cache and can efficiently combine it to write the whole sector.
  4. delayed writing (performance/correctness of writes) The os may delay writes to improve performance, for example by batching consecutive writes, and scheduling with reads to minimize seeking.
  5. on-disk write cache (performance/correctness of writes) Most hard disks / storage devices have a small RAM cache. It can speed up reads, and writes can return as soon as the data is written to the device’s cache.

Read-ahead hint

.read_ahead_hint(enum ReadAheadHint)

enum ReadAheadHint {
    Default,
    Sequential,
    Random,
}

If you read a file sequentially the read-ahead is beneficial, for completely random access it can become a penalty.

  • Default uses the generally good heuristics of the operating system.
  • Sequential indicates sequential but not necessarily consecutive access. With this, the os may increase the amount of data that is read ahead.
  • Random indicates mainly random access. The os may disable its read-ahead cache.

This option is treated as a hint. It is ignored if the os does not support it, or if the behaviour of the application proves it is set wrong.

Open flags / system calls:

  • Windows: flags FILE_FLAG_SEQUENTIAL_SCAN and FILE_FLAG_RANDOM_ACCESS
  • Linux, FreeBSD, NetBSD: posix_fadvise() with the flags POSIX_FADV_SEQUENTIAL and POSIX_FADV_RANDOM
  • OS X: fcntl() with F_RDAHEAD 0 for random (there is no special mode for sequential).
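For illustration, a hedged sketch of how the Sequential hint maps onto posix_fadvise() on Linux, via the libc crate (the file name is hypothetical):

extern crate libc;

use std::fs::File;
use std::os::unix::io::AsRawFd;

fn main() -> std::io::Result<()> {
    let f = File::open("big.dat")?;
    // Advise the kernel that we will read sequentially; offset 0 and
    // length 0 mean "the whole file".
    unsafe {
        libc::posix_fadvise(f.as_raw_fd(), 0, 0, libc::POSIX_FADV_SEQUENTIAL);
    }
    Ok(())
}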

OS cache

.used_once(true)

When reading many gigabytes of data, a process may push useful data from other processes out of the os cache. To keep the performance of the whole system up, a process can indicate to the os whether data is needed only once, or is not needed anymore. On Linux, FreeBSD and NetBSD this is possible with posix_fadvise() POSIX_FADV_DONTNEED after a read or write with sync (or before close). On FreeBSD and NetBSD it is also possible to specify this up-front with posix_fadvise() POSIX_FADV_NOREUSE, and on OS X with fcntl() F_NOCACHE. Windows does not seem to provide an option for this.

This option may negatively affect the performance of writes smaller than the sector size, as cached data may not be available to the os staging area.

This control over the os cache is the main reason some applications use direct io, despite it being less convenient and disabling other useful caches.

Delayed writing and on-disk write cache

.sync_data(true) and .sync_all(true)

There can be two delays (by the os and by the disk cache) between the moment an application performs a write and the moment the data is written to persistent storage. They increase performance, but also increase the risk of data loss in case of a system crash or power outage.

When dealing with critical data, it may be useful to control these caches to make the chance of data loss smaller. The application should normally do so by calling Rust’s stand-alone functions sync_data() or sync_all() at meaningful points (e.g. when the file is in a consistent state, or a state it can recover from).

However, .sync_data() and .sync_all() may also be given as open options. This guarantees that every write will not return before the data is written to disk. These options improve reliability, as you can never accidentally forget a sync.

Whether performance with these options is worse than with the stand-alone functions is hard to say. With these options the data may have to be synchronised more often. But the stand-alone functions often sync outstanding writes to all files, while the options possibly sync only the current file.

The difference between .sync_all(true) and .sync_data(true) is that .sync_data(true) does not update the less critical metadata, such as the last-modified timestamp (although it will be written eventually).

Open flags:

  • Windows: FILE_FLAG_WRITE_THROUGH for .sync_all()
  • Unix: O_SYNC for .sync_all() and O_DSYNC for .sync_data()

If a system does not support syncing only data, .sync_data() will fall back to syncing both data and metadata. If .sync_all(true) is specified, .sync_data() is ignored.
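Although these options stayed out of scope, a hedged sketch of the closest effect available today on Unix, using the custom flags described earlier (the file name is hypothetical):

extern crate libc;

use std::fs::OpenOptions;
use std::os::unix::fs::OpenOptionsExt;

fn main() -> std::io::Result<()> {
    // O_SYNC: every write returns only after data and metadata have
    // reached the disk, the moral equivalent of .sync_all(true).
    let _journal = OpenOptions::new()
        .append(true)
        .create(true)
        .custom_flags(libc::O_SYNC)
        .open("journal.log")?;
    Ok(())
}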

Direct access / no caching

Most operating systems offer a mode that reads data straight from disk into an application buffer, or writes straight from a buffer to disk. This avoids the small cost of a memory copy. It has the side effect that the data is not available to the os to provide caching. Also, because this mode does not use the os staging area, all reads and writes have to take care of data sizes and alignment themselves.

Overview:

  • os staging area: not used
  • read-ahead: not used
  • os cache: data may be used, but is not added
  • delayed writing: no delay
  • on-disk write cache: maybe

Open flags / system calls:

  • Windows: flag FILE_FLAG_NO_BUFFERING
  • Linux, FreeBSD, NetBSD, Dragonfly BSD: flag O_DIRECT

The other options offer a more fine-grained control over caching, and usually offer better performance or correctness guarantees. This option is sometimes used by applications as a crude way to control (disable) the os cache.

Rust should not currently expose this as an open option, because it should be used with an abstraction / external crate that handles the data size and alignment requirements, if it should be used at all.

Unresolved questions

None.

Summary

Implement .drain(range) and .drain() respectively as appropriate on collections.

Motivation

The drain methods and their draining iterators serve to mass remove elements from a collection, receiving them by value in an iterator, while the collection keeps its allocation intact (if applicable).

The range parameterized variants of drain are a generalization of drain, to affect just a subrange of the collection, for example removing just an index range from a vector.

drain thus serves to consume all or some elements from a collection without consuming the collection itself. The ranged drain allows bulk removal of elements, more efficiently than any other safe API.

Detailed design

  • Implement .drain(a..b) where a and b are indices, for all collections that are sequences.
  • Implement .drain() for other collections. This is just like .drain(..) would be (drain the whole collection).
  • Ranged drain accepts all range types, currently .., a.., ..b, a..b, and drain will accept inclusive end ranges (“closed ranges”) when they are implemented.
  • Drain removes every element in the range.
  • Drain returns an iterator that produces the removed items by value.
  • Drain removes the whole range, regardless if you iterate the draining iterator or not.
  • Drain preserves the collection’s capacity where it is possible.
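A minimal sketch of these rules on Vec (stable std API):

fn main() {
    let mut v = vec![1, 2, 3, 4, 5];
    // Remove the index range 1..3 by value; the Vec keeps its allocation.
    let removed: Vec<i32> = v.drain(1..3).collect();
    assert_eq!(removed, [2, 3]);
    assert_eq!(v, [1, 4, 5]);
    // Dropping the iterator without consuming it still removes the range.
    v.drain(..);
    assert!(v.is_empty());
    assert!(v.capacity() >= 5);
}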

Collections

Vec and String already have ranged drain, so they are complete.

HashMap and HashSet already have .drain(), so they are complete; their elements have no meaningful order.

BinaryHeap already has .drain(), and just like its other iterators, it promises no particular order. So this collection is already complete.

The following collections need updated implementations:

VecDeque should implement .drain(range) for index ranges, just like Vec does.

LinkedList should implement .drain(range) for index ranges. Just like the other sequences, this is a O(n) operation, and LinkedList already has other indexed methods (.split_off()).

BTreeMap and BTreeSet

BTreeMap already has a ranged iterator, .range(a, b), and drain for BTreeMap and BTreeSet should have arguments completely consistent with the range method. This will be addressed separately.

Stabilization

The following can be stabilized as they are:

  • HashMap::drain
  • HashSet::drain
  • BinaryHeap::drain

The following can be stabilized, but their argument’s trait is not stable:

  • Vec::drain
  • String::drain

The following will be heading towards stabilization after changes:

  • VecDeque::drain

Drawbacks

  • Collections disagree on whether they are drained with a range (Vec) or not (HashMap)
  • No trait for the drain method.

Alternatives

  • Use a trait for the drain method and let all collections implement it. This will force all collections to use a single parameter (a range) for the drain method.

  • Provide .splice(range, iterator) for Vec instead of .drain(range):

    fn splice<R, I>(&mut self, range: R, iter: I) -> Splice<T>
        where R: RangeArgument, I: IntoIterator<Item=T>

    If the method .splice() would both return an iterator of the replaced elements and consume an iterator (of arbitrary length) to replace the removed range, then it subsumes drain’s tasks.

  • RFC #574 proposed accepting either a single index (single key for maps) or a range for ranged drain, so an alternative would be to do that. The single index case is however out of place, and writing a range that spans a single index is easy.

  • Use the name .remove_range(a..b) instead of .drain(a..b). Since the method has two simultaneous roles, removing a range and yielding a range as an iterator, either role could guide the name. This alternative name was not very popular with the rust developers I asked (but they are already used to what drain means in rust context).

  • Provide .drain() without arguments and separate range drain into a separate method name, implemented in addition to drain where applicable.

  • Do not support closed ranges in drain.

  • BinaryHeap::drain could drain the heap in sorted order. The primary proposal is arbitrary order, to match preexisting BinaryHeap iterators.

Unresolved questions

  • Concrete shape of the BTreeMap API is not resolved here
  • Will closed ranges be used for the drain API?

Summary

Allow a re-export of a function as entry point main.

Motivation

Functions and re-exports of functions usually behave the same way, but they do not for the program entry point main. This RFC aims to fix this inconsistency.

The above-mentioned inconsistency means that, for example, you currently cannot use a library’s exported function as your main function.

Example:

pub mod foo {
    pub fn bar() {
        println!("Hello world!");
    }
}
use foo::bar as main;

Example 2:

extern crate main_functions;
pub use main_functions::rmdir as main;

See also https://github.com/rust-lang/rust/issues/27640 for the corresponding issue discussion.

The #[main] attribute can also be used to change the entry point of the generated binary. This is largely irrelevant for this RFC as this RFC tries to fix an inconsistency with re-exports and directly defined functions. Nevertheless, it can be pointed out that the #[main] attribute does not cover all the above-mentioned use cases.

Detailed design

Use the symbol main at the top level of a crate that is compiled as a program (--crate-type=bin): instead of explicitly accepting only directly-defined functions, also allow (possibly non-pub) re-exports.

Drawbacks

None.

Alternatives

None.

Unresolved questions

None.

Summary

Preventing overlapping implementations of a trait makes complete sense in the context of determining method dispatch. There must not be ambiguity in what code will actually be run for a given type. However, for marker traits, there are no associated methods for which to indicate ambiguity. There is no harm in a type being marked as Sync for multiple reasons.

Motivation

This is purely to improve the ergonomics of adding/implementing marker traits. While specialization will certainly make all cases not covered today possible, removing the restriction entirely will improve the ergonomics in several edge cases.

Some examples include:

  • the coercible trait design presented in RFC #91;
  • the ExnSafe trait proposed in RFC #1236.

Detailed design

For the purpose of this RFC, the definition of a marker trait is a trait with no associated items. The design here is quite straightforward. The following code fails to compile today:

trait Marker<A> {}

struct GenericThing<A, B> {
    a: A,
    b: B,
}

impl<A, B> Marker<GenericThing<A, B>> for A {}
impl<A, B> Marker<GenericThing<A, B>> for B {}

The two impls are considered overlapping, as there is currently no way to prove that A and B are not the same type. However, in the case of marker traits there is no actual reason they couldn’t overlap, as no code could change behavior based on which impl is used.

For a concrete use case, consider some setup like the following:

trait QuerySource {
    fn select<T, C: Selectable<T, Self>>(&self, columns: C) -> SelectSource<C, Self> {
        ...
    }
}

trait Column<T> {}
trait Table: QuerySource {}
trait Selectable<T, QS: QuerySource>: Column<T> {}

impl<T: Table, C: Column<T>> Selectable<T, T> for C {}

However, when the following becomes introduced:

struct JoinSource<Left, Right> {
    left: Left,
    right: Right,
}

impl<Left, Right> QuerySource for JoinSource<Left, Right> where
    Left: Table + JoinTo<Right>,
    Right: Table,
{
    ...
}

It becomes impossible to satisfy the requirements of select. The following impls are disallowed today:

impl<Left, Right, C> Selectable<Left, JoinSource<Left, Right>> for C where
    Left: Table + JoinTo<Right>,
    Right: Table,
    C: Column<Left>,
{}

impl<Left, Right, C> Selectable<Right, JoinSource<Left, Right>> for C where
    Left: Table + JoinTo<Right>,
    Right: Table,
    C: Column<Right>,
{}

Since Left and Right might be the same type, this causes an overlap. However, there’s also no reason to forbid the overlap. There is no way to work around this today. Even if you write an impl that is more specific about the tables, that would be considered a non-crate local blanket implementation. The only way to write it today is to specify each column individually.

Drawbacks

With this change, adding any methods to an existing marker trait, even defaulted, would be a breaking change. Once specialization lands, this could probably be considered an acceptable breakage.

Alternatives

If the lattice rule for specialization is eventually accepted, there does not appear to be a case that is impossible to write, albeit with some additional boilerplate, as you’ll have to manually specify the empty impl for any overlap that might occur.

Unresolved questions

How can we implement this design? Simply lifting the coherence restrictions is easy enough, but we will encounter some challenges when we come to test whether a given trait impl holds. For example, if we have something like:

impl<T:Send> MarkerTrait for T { }
impl<T:Sync> MarkerTrait for T { }

then a type Foo: MarkerTrait can hold either by Foo: Send or by Foo: Sync. Today, we prefer to break down an obligation like Foo: MarkerTrait into component obligations (e.g., Foo: Send). Due to coherence, there is always one best way to do this (sort of; where clauses complicate matters). That is, except for complications due to type inference, there is a best impl to choose. But under this proposal, there would not be. Experimentation is needed (similar concerns arise with the proposals around specialization, so it may be that progress on that front will answer the questions raised here).

Should we add some explicit way to indicate that this is a marker trait? This would address the drawback that adding items is a backwards incompatible change.

Summary

This RFC proposes to allow library authors to use a #[deprecated] attribute, with optional since = "version" and note = "free text" fields. The compiler can then warn on deprecated items, while rustdoc can document their deprecation accordingly.

Motivation

Library authors want a way to evolve their APIs, which also involves deprecating items. To do this cleanly, they need to document their intentions and give their users enough time to react.

Currently there is no support from the language for this oft-wanted feature (despite a similar feature existing for the sole purpose of evolving the Rust standard library). This RFC aims to rectify that, while giving a pleasant interface to use and maximizing the usefulness of the metadata introduced.

Detailed design

Public API items (plain fns, methods, trait and inherent implementations, as well as const definitions, type definitions, struct fields and enum variants) can be given a #[deprecated] attribute. All possible fields are optional:

  • since is defined to contain the version of the crate at the time of deprecating the item, following the semver scheme. Rustc does not know about versions, thus the content of this field is not checked (but it may be checked by external lints, e.g. rust-clippy).
  • note should contain a human-readable string outlining the reason for deprecating the item and/or what to use instead. While this field is not required, library authors are strongly advised to make use of it. The string is interpreted as plain unformatted text (for now) so that rustdoc can include it in the item’s documentation without messing up the formatting.

On use of a deprecated item, rustc will warn of the deprecation. Note that during Cargo builds, warnings on dependencies get silenced. While this has the upside of keeping things tidy, it has a downside when it comes to deprecation:

Let’s say I have my llogiq crate that depends on foobar, which uses a deprecated item of serde. I will never get the warning about this unless I try to build foobar directly. We may want to create a service like crater to warn on use of deprecated items in library crates; however, this is outside the scope of this RFC.

rustdoc will show deprecation on items, with a [deprecated] box that may optionally show the version and note where available.

The language reference will be extended to describe this feature as outlined in this RFC. Authors shall be advised to leave their users enough time to react before removing a deprecated item.

The internally used feature can either be subsumed by this or possibly renamed to avoid a name clash.

Intended Use

Crate author Anna wants to evolve her crate’s API. She has found that one type, Foo, has a better implementation in the rust-foo crate. Also she has written a frob(Foo) function to replace the earlier Foo::frobnicate(self) method.

So Anna first bumps the version of her crate (because deprecation is always done on a version change) from 0.1.1 to 0.2.1. She also adds the following prefix to the Foo type:

extern crate rust_foo;

#[deprecated(since = "0.2.1",
    note="The rust_foo version is more advanced, and this crate's will likely be discontinued")]
struct Foo { .. }

Users of her crate will see the following once they cargo update and build:

src/foo_use.rs:27:5: 27:8 warning: Foo is marked deprecated as of version 0.2.1
src/foo_use.rs:27:5: 27:8 note: The rust_foo version is more advanced, and this crate's will likely be discontinued

Rust-clippy will likely gain more sophisticated checks for deprecation:

  • future_deprecation will warn on items marked as deprecated with a version higher than the crate’s current one, while current_deprecation will warn only on those items marked as deprecated where the version is equal to or lower than the crate’s one.
  • deprecation_syntax will check that the since field really contains a semver number and not some random string.

Clippy users can then activate the clippy checks and deactivate the standard deprecation checks.

Drawbacks

  • Once the feature is public, we can no longer change its design

Alternatives

  • Do nothing
  • make the since field required and check that it’s a single version
  • require either reason or use to be present
  • reason could include markdown formatting
  • rename the reason field to note to clarify its broader usage. (done!)
  • add a note field and make reason a field with specific meaning, perhaps even predefine a number of valid reason strings, as JEP277 currently does
  • Add a use field containing a plain text of what to use instead
  • Add a use field containing a path to some function, type, etc. to replace the current feature. Currently with the rustc-private feature, people are describing a replacement in the reason field, which is clearly not the original intention of the field
  • Optionally, cargo could offer a new dependency category: “doc-dependencies” which are used to pull in other crates’ documentations to link them (this is obviously not only relevant to deprecation)

Unresolved questions

  • What other restrictions should we introduce now to avoid being bound to a possibly flawed design?
  • Can / Should the std library make use of the #[deprecated] extensions?
  • Bikeshedding: Are the names good enough?

Summary

This RFC proposes several new types and associated APIs for working with times in Rust. The primary new types are Instant, for working with time that is guaranteed to be monotonic, and SystemTime, for working with times across processes on a single system (usually internally represented as a number of seconds since an epoch).

Motivation

The primary motivation of this RFC is to flesh out a larger set of APIs for representing instants in time and durations of time.

For various reasons that this RFC will explore, APIs related to time are fairly error-prone and have a number of caveats that programmers do not expect.

Rust APIs tend to expose more of these kinds of caveats through their APIs, in order to help programmers become aware of and handle edge-cases. At the same time, un-ergonomic APIs can work against that goal.

This RFC attempts to balance the desire to expose common footguns and help programmers handle edge-cases with a desire to avoid creating so many hoops to jump through that the useful caveats get ignored.

At a high level, this RFC covers two concepts related to time:

  • Instants, moments in time
  • Durations, an amount of time between two instants

We would like to be able to do some basic operations with these instants:

  • Compare two instants
  • Add a time period to an instant
  • Subtract a time period from an instant
  • Compare an instant to “now” to discover time elapsed

However, there are a number of problems that arise when trying to define these types and operations.

First of all, with the exception of moments in time created using system APIs that guarantee monotonicity (because they were created within a single process, or since the last boot), moments in time are not monotonic. A simple example of this is that if a program creates two files sequentially, it cannot assume that the creation time of the second file is later than the creation time of the first file.

This is because NTP (the network time protocol) can arbitrarily change the system clock, and can even rewind time. This kind of time travel means that the “system time-line” is not continuous and monotonic, which is something that programmers very often forget when writing code involving machine times.

This design attempts to help programmers avoid some of the most egregious and unexpected consequences of this kind of “time travel”.


Leap seconds, which cannot be predicted, mean that it is impossible to reliably add a number of seconds to a particular moment in time represented as a human date and time (“1 million seconds from 2015-09-20 at midnight”).

They also mean that seemingly simple concepts, like “1 minute”, have caveats depending on exactly how they are used. Caveats related to leap seconds create real-world bugs, because of how unusual leap seconds are, and how unlikely programmers are to consider “12:00:60” as a valid time.

Certain kinds of seemingly simple operations may not make sense in all cases. For example, adding “1 year” to February 29, 2012 would produce February 29, 2013, which is not a valid date. Adding “1 month” to August 31, 2015 would produce September 31, 2015, which is also not a valid date.

Certain human descriptions of durations, like “1 month and 35 days”, do not make sense, and human descriptions like “1 month and 5 days” have an ambiguous meaning when used in operations (do you add 1 month first and then 5 days, or vice versa?).

For these reasons, this RFC does not attempt to define a human duration with fields for years, days or months. Such a duration would be difficult to use in operations without hard-to-remember ordering rules.

For these reasons, this RFC does not propose APIs related to human concepts of dates and times. It is intentionally forwards-compatible with such extensions.


Finally, many APIs that take a Duration can only do something useful with positive values. For example, a timeout API would not know how to wait a negative amount of time before timing out. Even discounting the possibility of coding mistakes, the problem of system clock time travel means that programmers often produce negative durations that they did not expect, and APIs that liberally accept negative durations only propagate the error further.

As a result, this RFC makes a number of simplifying assumptions that can be relaxed over time with additional types or through further RFCs:

It provides convenience methods for constructing Durations from larger units of time (minutes, hours, days), but gives them names like Duration::from_standard_hours. A standard hour is always 3600 seconds, regardless of leap seconds.

It provides APIs that are expected to produce positive Durations, and expects that APIs like timeouts will accept positive Durations (which is currently the case in Rust’s standard library). These APIs help the programmer discover the possibility of system clock time travel, and either handle the error explicitly, or at least avoid propagating the problem into other APIs (by using unwrap).

It separates monotonic time (Instant) from time derived from the system clock (SystemTime), which must account for the possibility of time travel. This allows methods related to monotonic time to be uncaveated, while working with the system clock has more methods that return Results.

This RFC does not attempt to define a type for calendared DateTimes, nor does it directly address time zones.

Proposal

Types

pub struct Instant {
  secs: u64,
  nanos: u32
}

pub struct SystemTime {
  secs: u64,
  nanos: u32
}

pub struct Duration {
  secs: u64,
  nanos: u32
}

Instant

Instant is the simplest of the types representing moments in time. It represents an opaque (non-serializable!) timestamp that is guaranteed to be monotonic when compared to another Instant.

In this context, monotonic means that a timestamp created later in real-world time will always be not less than a timestamp created earlier in real-world time.

The Duration type can be used in conjunction with Instant, and these operations have none of the usual time-related caveats.

  • Add a Duration to an Instant, producing a new Instant
  • Compare two Instants to each other
  • Subtract an Instant from a later Instant, producing a Duration
  • Ask for the amount of time elapsed since an Instant, producing a Duration

Asking for an amount of time elapsed from a given Instant is a very common operation that is guaranteed to produce a positive Duration. Asking for the difference between an earlier and a later Instant also produces a positive Duration when used correctly.

This design does not assume that negative Durations are never useful, but rather that the most common uses of Duration do not have a meaningful use for negative values. Rather than require each API that takes a Duration to produce an Err (or panic!) when receiving a negative value, this design optimizes for the broadly useful positive Duration.

impl Instant {
  /// Returns an instant corresponding to "now".
  pub fn now() -> Instant;

  /// Panics if `earlier` is later than &self.
  /// Because Instant is monotonic, the only way for `earlier` to be
  /// a later time is a bug in your code.
  pub fn duration_from_earlier(&self, earlier: Instant) -> Duration;

  /// Panics if self is later than the current time (can happen if an Instant
  /// is produced synthetically)
  pub fn elapsed(&self) -> Duration;
}

impl Add<Duration> for Instant {
  type Output = Instant;
}

impl Sub<Duration> for Instant {
  type Output = Instant;
}

impl PartialEq for Instant;
impl Eq for Instant;
impl PartialOrd for Instant;
impl Ord for Instant;
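A minimal usage sketch; note that in the std API as eventually stabilized, duration_from_earlier became duration_since:

use std::time::{Duration, Instant};

fn main() {
    // Monotonic timing: elapsed() can never go backwards, so no Result.
    let start = Instant::now();
    let mut sum: u64 = 0;
    for i in 0..1_000_000u64 {
        sum += i;
    }
    let elapsed: Duration = start.elapsed();
    println!("summed to {} in {:?}", sum, elapsed);
}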

For convenience, several new constructors are added to Duration. Because any unit greater than seconds has caveats related to leap seconds, all of the constructors take “standard” units. For example a “standard minute” is 60 seconds, while a “standard hour” is 3600 seconds.

The “standard” terminology comes from JodaTime.

impl Duration {
  /// a standard minute is 60 seconds
  /// panics if the number of minutes is larger than u64 seconds
  pub fn from_standard_minutes(minutes: u64) -> Duration;

  /// a standard hour is 60 standard minutes
  /// panics if the number of hours is larger than u64 seconds
  pub fn from_standard_hours(hours: u64) -> Duration;

  /// a standard day is 24 standard hours
  /// panics if the number of days is larger than u64 seconds
  pub fn from_standard_days(days: u64) -> Duration;
}
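For concreteness, the arithmetic behind these units, written against the constructor std eventually stabilized instead of the proposed names (Duration::from_secs):

use std::time::Duration;

fn main() {
    // A standard day is always 24 * 60 * 60 = 86_400 standard seconds,
    // regardless of leap seconds.
    let day = Duration::from_secs(24 * 60 * 60);
    assert_eq!(day, Duration::from_secs(86_400));
}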

SystemTime

This type should not be used for in-process timestamps, like those used in benchmarks.

A SystemTime represents a time stored on the local machine derived from the system clock (in UTC). For example, it is used to represent mtime on the file system.

The most important caveat of SystemTime is that it is not monotonic. This means that you can save a file to the file system, then save another file to the file system, and the second file can have an mtime earlier than the first.

This means that an operation that happens after another operation in real time may have an earlier SystemTime!

In practice, most programmers do not think about this kind of “time travel” with the system clock, leading to strange bugs once the mistaken assumption propagates through the system.

This design attempts to help the programmer catch the most egregious of these kinds of mistakes (unexpected travel back in time) before the mistake propagates.

impl SystemTime {
  /// Returns the system time corresponding to "now".
  pub fn now() -> SystemTime;

  /// Returns an `Err` if `earlier` is later
  pub fn duration_from_earlier(&self, earlier: SystemTime) -> Result<Duration, SystemTimeError>;

  /// Returns an `Err` if &self is later than the current system time.
  pub fn elapsed(&self) -> Result<Duration, SystemTimeError>;
}

impl Add<Duration> for SystemTime {
  type Output = SystemTime;
}

impl Sub<Duration> for SystemTime {
  type Output = SystemTime;
}

// An anchor which can be used to generate new SystemTime instances from a known
// Duration or convert a SystemTime to a Duration which can later then be used
// again to recreate the SystemTime.
//
// Defined to be "1970-01-01 00:00:00 UTC" on all systems.
const UNIX_EPOCH: SystemTime = ...;

// Note that none of these operations actually imply that the underlying system
// operation that produced these SystemTimes happened at the same time
// (for Eq) or before/after (for Ord) than the other system operation.
impl PartialEq for SystemTime;
impl Eq for SystemTime;
impl PartialOrd for SystemTime;
impl Ord for SystemTime;

impl SystemTimeError {
    /// A SystemTimeError originates from attempting to subtract two SystemTime
    /// instances, a and b. If a < b then an error is returned, and the duration
    /// returned represents (b - a).
    pub fn duration(&self) -> Duration;
}
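A hedged sketch against the API as eventually stabilized (there, duration_from_earlier became duration_since):

use std::time::{SystemTime, UNIX_EPOCH};

fn main() {
    // The system clock may have jumped backwards, so the subtraction
    // returns a Result rather than a plain Duration.
    match SystemTime::now().duration_since(UNIX_EPOCH) {
        Ok(d) => println!("{} seconds since the Unix epoch", d.as_secs()),
        Err(e) => println!("clock is before the epoch by {:?}", e.duration()),
    }
}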

The main difference from the design of Instant is that it is impossible to know for sure that a SystemTime is in the past, even if the operation that produced it happened in the past (in real time).


Illustrative Example:

If a program requests a SystemTime that represents the mtime of a given file, then writes a new file and requests its SystemTime, it may expect the second SystemTime to be after the first.

Using duration_from_earlier will remind the programmer that “time travel” is possible, and make it easy to handle that case. As always, the programmer can use .unwrap() in the prototype stage to avoid having to handle the edge-case yet, while retaining a reminder that the edge-case is possible.

Drawbacks

This RFC defines two new types for describing times, and posits a third type to complete the picture. At first glance, having three different APIs for working with times may seem overly complex.

However, there are significant differences between times that only go forward and times that can go forward or backward. There are also significant differences between times represented as a number since an epoch and time represented in human terms.

As a result, this RFC chose to make these differences explicit, allowing ergonomic, uncaveated use of monotonic time, and a small speedbump when working with times that can move both forward and backward.

Alternatives

One alternative design would be to attempt to have a single unified time type. The rationale for not doing so is explained under Drawbacks.

Another possible alternative is to allow free math between instants, rather than providing operations for comparing later instants to earlier ones.

In practice, the vast majority of APIs taking a Duration expect a positive-only Duration, and therefore code that subtracts a time from another time will usually want a positive Duration.

The problem is especially acute when working with SystemTime, where it is possible for a question like: “how much time has elapsed since I created this file” to return a negative Duration!

This RFC attempts to catch mistakes related to negative Durations at the point where they are produced, rather than requiring all APIs that take a Duration to guard against negative values.

Because Ord is implemented on SystemTime and Instant, it is possible to compare two arbitrary times to each other first, and then use duration_from_earlier reliably to get a positive Duration.

Unresolved Questions

This RFC leaves types related to human representations of dates and times to a future proposal.

Summary

Promote the libc crate from the nursery into the rust-lang organization after applying changes such as:

  • Remove the internal organization of the crate in favor of just one flat namespace at the top of the crate.
  • Set up a large number of CI builders to verify FFI bindings across many platforms in an automatic fashion.
  • Define the scope of libc in terms of bindings it will provide for each platform.

Motivation

The current libc crate is a bit of a mess unfortunately, having long since departed from its original organization and scope of definition. As more platforms have been added over time as well as more APIs in general, the internal as well as external facing organization has become a bit muddled. Some specific concerns related to organization are:

  • There is a vast amount of duplication between platforms with some common definitions. For example all BSD-like platforms end up defining a similar set of networking struct constants with the same definitions, but duplicated in many locations.
  • Some subset of libc is reexported at the top level via globs, but not all of libc is reexported in this fashion.
  • When adding new APIs it’s unclear which module they should be placed into. It’s not always the case that the API being added conforms to one of the existing standards for which a module exists, and it’s not always easy to consult the standard itself to see whether the API is in it.
  • Adding a new platform to liblibc largely entails just copying a huge amount of code from some previously similar platform and placing it at a new location in the file.

Additionally, on the technical and tooling side of things some concerns are:

  • None of the FFI bindings in this module are verified in terms of testing. This means that they are neither automatically generated nor verified, and it’s highly likely that there are a good number of mistakes throughout.
  • It’s very difficult to explore the documentation for libc on different platforms, but this is often one of the more important libraries to have documentation for across all platforms.

The purpose of this RFC is to largely propose a reorganization of the libc crate, along with tweaks to some of the mundane details such as internal organization, CI automation, how new additions are accepted, etc. These changes should all help push libc to a more robust position where it can be well trusted across all platforms both now and into the future!

Detailed design

All design can be previewed as part of an in progress fork available on GitHub. Additionally, all mentions of the libc crate in this RFC refer to the external copy on crates.io, not the in-tree one in the rust-lang/rust repository. No changes are being proposed (e.g. to stabilize) the in-tree copy.

What is this crate?

The primary purpose of this crate is to provide all of the definitions necessary to easily interoperate with C code (or “C-like” code) on each of the platforms that Rust supports. This includes type definitions (e.g. c_int), constants (e.g. EINVAL) as well as function headers (e.g. malloc).

One question that typically comes up with this sort of purpose is whether the crate is “cross platform” in the sense that it basically just works across the platforms it supports. The libc crate, however, is not intended to be cross platform but rather the opposite, an exact binding to the platform in question. In essence, the libc crate is targeted as “replacement for #include in Rust” for traditional system header files, but it makes no effort to be portable by tweaking type definitions and signatures.

The Home of libc

Currently this crate resides inside the main rust repo of the rust-lang organization, but this unfortunately somewhat hinders its development, as it takes a while to land PRs and the crate isn’t as quick to release as external repositories. As a result, this RFC proposes having the crate reside in its own repository in the rust-lang organization, so additions can be made through PRs (which are tested much more quickly).

The main repository will have a submodule pointing at the external repository to continue building libstd.

Public API

The libc crate will hide all internal organization of the crate from users of the crate. All items will be reexported at the top level as part of a flat namespace. This brings with it a number of benefits:

  • The internal structure can evolve over time to better fit new platforms while being backwards compatible.
  • This design matches what one would expect from C, where there’s only a flat namespace available.
  • Finding an API is quite easy as the answer is “it’s always at the root”.

A downside of this approach, however, is that the public API of libc will be platform-specific (e.g. the set of symbols it exposes is different across platforms), which isn’t seen very commonly throughout the rest of the Rust ecosystem today. This can be mitigated, however, by clearly indicating that this is a platform specific library in the sense that it matches what you’d get if you were writing C code across multiple platforms.

The API itself will include any number of definitions typically found in C header files such as:

  • C types, e.g. typedefs, primitive types, structs, etc.
  • C constants, e.g. #define directives
  • C statics
  • C functions (their headers)
  • C macros (exported as #[inline] functions in Rust)

As a technical detail, all struct types exposed in libc will be guaranteed to implement the Copy and Clone traits. There will be an optional feature of the library to implement Debug for all structs, but it will be turned off by default.
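A minimal sketch of what the flat namespace looks like to a user, with types, constants and function bindings all reachable at the crate root:

extern crate libc;

use std::ffi::CString;

fn main() {
    let s = CString::new("hello").unwrap();
    // c_char and strlen both live directly under libc::.
    let n = unsafe { libc::strlen(s.as_ptr() as *const libc::c_char) };
    assert_eq!(n, 5);
}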

Changes from today

The in progress implementation of this RFC has a number of API changes and breakages from today’s libc crate. Almost all of them are minor and targeted at making bindings more correct in terms of faithfully representing the underlying platforms.

There is, however, one large notable change from today’s crate. The size_t, ssize_t, ptrdiff_t, intptr_t, and uintptr_t types are all defined in terms of isize and usize instead of known sizes. Brought up by @briansmith in #28096, this helps decrease the number of casts necessary in normal code and matches the existing definitions on all platforms that libc supports today. In the future, if a platform is added where these type definitions are not correct, new ones will simply be available for that target platform (and casts will be necessary if targeting it).

Note that part of this change depends upon removing the compiler’s lint-by-default about isize and usize being used in FFI definitions. This lint is mostly a holdover from when the types were named int and uint and it was easy to confuse them with C’s int and unsigned int types.

The final change to the libc crate will be to bump its version to 1.0.0, signifying that breakage has happened (a bump from 0.1.x) as well as having a future-stable interface until 2.0.0.

Scope of libc

The name “libc” is a little nebulous as to what it means across platforms. It is clear, however, that this library must have a well defined scope up to which it can expand to ensure that it doesn’t start pulling in dozens of runtime dependencies to bind all the system APIs that are found.

Unfortunately, however, this library also can’t be “just libc” in the sense of “just libc.so on Linux,” for example, as this would omit common APIs like pthreads and would also mean that pthreads would be included on platforms like MUSL (where it is literally inside libc.a). Additionally, the purpose of libc isn’t to provide a cross platform API, so there isn’t necessarily one true definition in terms of sets of symbols that libc will export.

In order to have a well defined scope while satisfying these constraints, this RFC proposes that this crate will have a scope that is defined separately for each platform that it targets. The proposals are:

  • Linux (and other unix-like platforms) - the libc, libm, librt, libdl, libutil, and libpthread libraries. Additional platforms can include libraries whose symbols are found in these libraries on Linux as well.
  • OSX - the common library to link to on this platform is libSystem, but this transitively brings in quite a few dependencies, so this crate will refine what it depends upon from libSystem a little further, specifically: libsystem_c, libsystem_m, libsystem_pthread, libsystem_malloc and libdyld.
  • Windows - the VS CRT libraries. This library is currently intended to be distinct from the winapi crate as well as bindings to common system DLLs found on Windows, so the current scope of libc will be pared back to just what the CRT contains. This notably means that a large amount of the current contents will be removed on Windows.

New platforms added to libc can decide the set of libraries libc will link to and bind at that time.

Internal structure

The primary change being made is that the crate will no longer be one large file sprinkled with #[cfg] annotations. Instead, the crate will be split into a tree of modules, and all modules will reexport the entire contents of their children. Unlike most libraries, however, most modules in libc will be hidden via #[cfg] at compile time. Each platform supported by libc will correspond to a path from a leaf module to the root, picking up more definitions, types, and constants as the tree is traversed upwards.

This organization provides a simple method of deduplication between platforms. For example libc::unix contains functions found across all unix platforms whereas libc::unix::bsd is a refinement saying that the APIs within are common to only BSD-like platforms (these may or may not be present on non-BSD platforms as well). The benefits of this structure are:

  • For any particular platform, it’s easy in the source to look up what its value is (simply trace the path from the leaf to the root, aka the filesystem structure, and the value can be found).
  • When adding an API it’s easy to know where the API should be added because each node in the module hierarchy corresponds clearly to some subset of platforms.
  • Adding new platforms should be a relatively simple and confined operation. New leaves of the hierarchy would be created and some definitions upwards may be pushed to lower levels if APIs need to be changed or aren’t present on the new platform. It should be easy to audit, however, that a new platform doesn’t tamper with older ones.
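A hypothetical sketch of this layout (the module names follow the RFC, but the cfg conditions are illustrative rather than the real libc source):

// lib.rs: every parent reexports the entire contents of its children,
// and cfg hides the branches that do not apply to the current target.
#[cfg(unix)]
pub use self::unix::*;

#[cfg(unix)]
mod unix {
    // definitions common to every unix platform go here

    #[cfg(any(target_os = "macos", target_os = "freebsd"))]
    pub use self::bsd::*;

    #[cfg(any(target_os = "macos", target_os = "freebsd"))]
    mod bsd {
        // definitions common only to BSD-like platforms go here
    }
}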

Testing

The current set of bindings in the libc crate suffer a drawback in that they are not verified. This is often a pain point for new platforms, where it’s easy to forget to update a constant here or there when copying from an existing platform. This lack of testing leads to problems like a wrong definition of ioctl, which in turn leads to backwards-compatibility problems when the API is fixed.

In order to solve this problem altogether, the libc crate will be enhanced with the ability to automatically test the FFI bindings it contains. As this crate will begin to live in rust-lang instead of the rust repo itself, this means it can leverage external CI systems like Travis CI and AppVeyor to perform these tasks.

The current implementation of the binding testing verifies attributes such as type size/alignment, struct field offset, struct field types, constant values, function definitions, etc. Over time it can be enhanced with more metrics and properties to test.

In theory adding a new platform to libc will be blocked until automation can be set up to ensure that the bindings are correct, but it is unfortunately not easy to add this form of automation for all platforms, so this will not be a requirement (beyond “tier 1 platforms”). There is currently automation for the following targets, however, through Travis and AppVeyor:

  • {i686,x86_64}-pc-windows-{msvc,gnu}
  • {i686,x86_64,mips,aarch64}-unknown-linux-gnu
  • x86_64-unknown-linux-musl
  • arm-unknown-linux-gnueabihf
  • arm-linux-androideabi
  • {i686,x86_64}-apple-{darwin,ios}

Drawbacks

Loss of module organization

The loss of an internal organization structure can be seen as a drawback of this design. While perhaps not precisely true today, the principle of the structure was that it is easy to constrain yourself to a particular C standard or subset of C, in theory writing “more portable programs by default” by only using the contents of the respective module. Unfortunately, in practice this does not seem to be much in use, and it’s also not clear whether this can be expressed simply through headers in libc. For example, many platforms have slight tweaks to common structures, definitions, or types in terms of signedness or value, so even if you were restricted to a particular subset it’s not clear that a program would automatically be more portable.

That being said, it would still be useful to have these abstractions to some degree, but the flip side is that it’s easy to build this sort of layer on top of libc as designed here externally on crates.io. For example extern crate posix could just depend on libc and reexport all the contents for the POSIX standard, perhaps with tweaked signatures here and there to work better across platforms.

Loss of Windows bindings

By only exposing the CRT functions on Windows, the contents of libc will be quite trimmed down, which means that crates accessing similar functions like send or connect will be required to link to at least two libraries.

This is also a bit of a maintenance burden on the standard library itself, as it means that all the bindings it uses must move to src/libstd/sys/windows/c.rs in the immediate future.

Alternatives

  • Instead of only exporting a flat namespace the libc crate could optionally also do what it does today with respect to reexporting modules corresponding to various C standards. The downside to this, unfortunately, is that it’s unclear how much portability using these standards actually buys you.

  • The crate could be split up into multiple crates which correspond exactly to system libraries, but this has the downside that using common functions available on both OSX and Linux would require at least two extern crate directives and dependencies.

Unresolved questions

  • The only platforms without automation currently are the BSD-like platforms (e.g. FreeBSD, OpenBSD, Bitrig, DragonFly, etc), but if it were possible to set up automation for these then it would be plausible to actually require automation for any new platform. Is it possible to do this?

  • What is the relation between std::os::*::raw and libc? Given that the standard library will probably always depend on an in-tree copy of the libc crate, should libc define its own types in this case, have the standard library reexport them, and then have the out-of-tree libc reexport from the standard library?

  • Should Windows be supported to a greater degree in libc? Should this crate and winapi have a closer relationship?

Summary

Enable the compiler to cache incremental workproducts.

Motivation

The goal of incremental compilation is, naturally, to improve build times when making small edits. Any reader who has never felt the need for such a feature is strongly encouraged to attempt hacking on the compiler or servo sometime (naturally, all readers are so encouraged, regardless of their opinion on the need for incremental compilation).

Basic usage

The basic usage will be that one enables incremental compilation using a compiler flag like -C incremental-compilation=TMPDIR. The TMPDIR directory is intended to be an empty directory that the compiler can use to store intermediate by-products; the compiler will automatically “GC” this directory, deleting older files that are no longer relevant and creating new ones.

High-level design

The high-level idea is that we will track the following intermediate workproducts for every function (and, indeed, for other kinds of items as well, but functions are easiest to describe):

  • External signature
    • For a function, this would include the types of its arguments, where-clauses declared on the function, and so forth.
  • MIR
    • The MIR represents the type-checked statements in the body, in simplified forms. It is described by RFC #1211. As the MIR is not fully implemented, this is a non-trivial dependency. We could instead use the existing annotated HIR, however that would require a larger effort in terms of porting and adapting data structures to an incremental setting. Using the MIR simplifies things in this respect.
  • Object files
    • This represents the final result of running LLVM. It may be that the best strategy is to “cache” compiled code in the form of an rlib that is progressively patched, or it may be easier to store individual .o files that must be relinked (anyone who has worked in a substantial C++ project can attest, however, that linking can take a non-trivial amount of time).

Of course, the key to any incremental design is to determine what must be changed. This can be encoded in a dependency graph. This graph connects the various bits of the HIR to the external products (signatures, MIR, and object files). It is of the utmost importance that this dependency graph is complete: if edges are missing, the result will be obscure errors where changes are not fully propagated, yielding inexplicable behavior at runtime. This RFC proposes an automatic scheme based on encapsulation.

Interaction with lints and compiler plugins

Although rustc does not yet support compiler plugins through a stable interface, we have long planned to allow for custom lints, syntax extensions, and other sorts of plugins. It would be nice therefore to be able to accommodate such plugins in the design, so that their inputs can be tracked and accounted for as well.

Interaction with optimization

It is important to clarify, though, that this design does not attempt to support full optimization during incremental compilation; indeed the two are somewhat at odds with one another, as full optimization may perform inlining and inter-function analysis, which can cause small edits in one function to affect the generated code of another. This situation is further exacerbated by the fact that LLVM does not provide any way to track these sorts of dependencies (e.g., one cannot even determine what inlining took place, though @dotdash suggested a clever trick of using LLVM lifetime hints). Strategies for handling this are discussed in the Optimization section below.

Detailed design

We begin with a high-level execution plan, followed by sections that explore aspects of the plan in more detail. The high-level summary includes links to each of the other sections.

High-level execution plan

Regardless of whether it is invoked in incremental compilation mode or not, the compiler will always parse and macro expand the entire crate, resulting in a HIR tree. Once we have a complete HIR tree, and if we are invoked in incremental compilation mode, the compiler will then try to determine which parts of the crate have changed since the last execution. For each item, we compute a (mostly) stable id based primarily on the item’s name and containing module. We then compute a hash of its contents and compare that hash against the hash the item had in the previous compilation (if any).

Once we know which items have changed, we consult a dependency graph to tell us which artifacts are still usable. These artifacts can take the form of serialized MIR graphs, LLVM IR, compiled object code, and so forth. The dependency graph tells us which bits of AST contributed to each artifact. It is constructed by dynamically monitoring what the compiler accesses during execution.

Finally, we can begin execution. The compiler is currently structured in a series of passes, each of which walks the entire AST. We do not need to change this structure to enable incremental compilation. Instead, we continue to do every pass as normal, but when we come to an item for which we have a pre-existing artifact (for example, if we are type-checking a fn that has not changed since the last execution), we can simply skip over that fn instead. Similar strategies can be used to enable lazy or parallel compilation at later times. (Eventually, though, it might be nice to restructure the compiler so that it operates in more of a demand driven style, rather than a series of sweeping passes.)
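
A minimal sketch of this “skip if clean” pass structure (all names here are illustrative, not actual compiler APIs):

struct Item { def_id: u64 }

struct DepCache { /* persisted dependency graph + artifacts */ }

impl DepCache {
    /// True if the cached result for this item is still valid.
    fn is_clean(&self, _def_id: u64) -> bool { unimplemented!() }
}

fn type_check_item(_item: &Item) { /* the usual per-item work */ }

/// The pass still walks every item, but skips those with valid artifacts.
fn type_check_pass(items: &[Item], cache: &DepCache) {
    for item in items {
        if cache.is_clean(item.def_id) {
            continue; // reuse the previous result
        }
        type_check_item(item);
    }
}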

When we come to the final LLVM stages, we must separate the functions into distinct “codegen units” for the purpose of LLVM code generation. This will build on the existing “codegen-units” used for parallel code generation. LLVM may perform inlining or interprocedural analysis within a unit, but not across units, which limits the amount of reoptimization needed when one of those functions changes.

Finally, the RFC closes with a discussion of testing strategies we can use to help avoid bugs due to incremental compilation.

Staging

One important question is how to stage the incremental compilation work. That is, it’d be nice to start seeing some benefit as soon as possible. One possible plan is as follows:

  1. Implement stable def-ids (in progress, nearly complete).
  2. Implement the dependency graph and tracking system (started).
  3. Experiment with distinct modularization schemes to find the one which gives the best fragmentation with minimal performance impact. Or, at least, implement something finer-grained than today’s codegen-units.
  4. Persist compiled object code only.
  5. Persist intermediate MIR and generated LLVM as well.

The most notable staging point here is that we can begin by just saving object code, and then gradually add more artifacts that get saved. The effect of saving fewer things (such as only saving object code) will simply be to make incremental compilation somewhat less effective, since we will be forced to re-type-check and re-trans functions where we might have gotten away with only generating new object code. However, this is expected to be a second order effect overall, particularly since LLVM optimization time can be a very large portion of compilation.

Handling DefIds

In order to correlate artifacts between compilations, we need some stable way to name items across compilations (and across crates). The compiler currently uses something called a DefId to identify each item. However, these ids today are based on a node-id, which is just an index into the HIR and hence will change whenever anything preceding it in the HIR changes. We need to make the DefId for an item independent of changes to other items.

Conceptually, the idea is to change DefId into the pair of a crate and a path:

DEF_ID = (CRATE, PATH)
CRATE = <crate identifier>
PATH = PATH_ELEM | PATH :: PATH_ELEM
PATH_ELEM = (PATH_ELEM_DATA, <disambiguating integer>)
PATH_ELEM_DATA = Crate(ID)
               | Mod(ID)
               | Item(ID)
               | TypeParameter(ID)
               | LifetimeParameter(ID)
               | Member(ID)
               | Impl
               | ...

However, rather than actually store the path in the compiler, we will instead intern the paths in the CStore, and the DefId will simply store an integer. So effectively the node field of DefId, which currently indexes into the HIR of the appropriate crate, becomes an index into the crate’s list of paths.
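
In code, the refactored DefId might look roughly like the following sketch (field and variant names are illustrative):

// A `DefId` no longer indexes into the HIR; it names an interned path.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct DefId {
    krate: u32,      // which crate
    path_index: u32, // index into that crate's interned list of paths
}

struct PathElem {
    data: PathElemData,
    disambiguator: u32, // distinguishes same-named siblings (see below)
}

enum PathElemData {
    Crate(String),
    Mod(String),
    Item(String),
    TypeParameter(String),
    LifetimeParameter(String),
    Member(String),
    Impl,
}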

For the most part, these paths match up with users’ intuitions. So a struct Foo declared in a module bar would just have a path like bar::Foo. However, the paths are also able to express things for which there is no syntax, such as an item declared within a function body.

Disambiguation

For the most part, paths should naturally be unique. However, there are some cases where a single parent may have multiple children with the same path. One case would be erroneous programs, where there are (e.g.) two structs declared with the same name in the same module. Another is that some items, such as impls, do not have a name, and hence we cannot easily distinguish them. Finally, it is possible to declare multiple functions with the same name within function bodies:

fn foo() {
    {
        fn bar() { }
    }

    {
        fn bar() { }
    }
}

All of these cases are handled by a simple disambiguation mechanism. The idea is that we will assign a path to each item as we traverse the HIR. If we find that a single parent has two children with the same name, such as two impls, then we simply assign them unique integers in the order that they appear in the program text. For example, the following program would use the paths shown (I’ve elided the disambiguating integer except where it is relevant):

mod foo {               // Path: <root>::foo
    pub struct Type { } // Path: <root>::foo::Type
    impl Type {         // Path: <root>::foo::(<impl>,0)
        fn bar() {..}   // Path: <root>::foo::(<impl>,0)::bar
    }
    impl Type { }       // Path: <root>::foo::(<impl>,1)
}

Note that the impls were arbitrarily assigned indices based on the order in which they appear. This does mean that reordering impls may cause spurious recompilations. We can try to mitigate this somewhat by making the path entry for an impl include some sort of hash for its header or its contents, but that will be something we can add later.

Implementation note: Refactoring DefIds in this way is a large task. I’ve made several attempts at doing it, but my latest branch appears to be working out (it is not yet complete). As a side benefit, I’ve uncovered a few fishy cases where we were using the node id from external crates to index into the local crate’s HIR map, which is certainly incorrect. –nmatsakis

Identifying and tracking dependencies

Core idea: a fine-grained dependency graph

Naturally any form of incremental compilation requires a detailed understanding of how each work item is dependent on other work items. This is most readily visualized as a dependency graph; the finer-grained the nodes and edges in this graph, the better. For example, consider a function foo that calls a function bar:

fn foo() {
    ...
    bar();
    ...
}

Now imagine that the body (but not the external signature) of bar changes. Do we need to type-check foo again? Of course not: foo only cares about the signature of bar, not its body. For the compiler to understand this, though, we’ll need to create distinct graph nodes for the signature and body of each function.

(Note that our policy of making “external signatures” fully explicit is helpful here. If we supported, e.g., return type inference, then it would be harder to know whether a change to bar means foo must be recompiled.)

Categories of nodes

This section gives a kind of “first draft” of the set of graph nodes/edges that we will use. It is expected that the full set of nodes/edges will evolve in the course of implementation (and of course over time as well). In particular, some parts of the graph as presented here are intentionally quite coarse, and we envision that the graph will gradually become more fine-grained.

The nodes fall into the following categories:

  • HIR nodes. Represent some portion of the input HIR. For example, the body of a fn would be a HIR node. These are the inputs to the entire compilation process.
    • Examples:
      • SIG(X) would represent the signature of some fn item X that the user wrote (i.e., the names of the types, where-clauses, etc)
      • BODY(X) would be the body of some fn item X
      • and so forth
  • Metadata nodes. These represent portions of the metadata from another crate. Each piece of metadata will include a hash of its contents. When we need information about an external item, we load that info out of the metadata and add it into the IR nodes below; this can be represented in the graph using edges. This means that incremental compilation can also work across crates.
  • IR nodes. Represent some portion of the computed IR. For example, the MIR representation of a fn body, or the ty representation of a fn signature. These also frequently correspond to a single entry in one of the various compiler hashmaps. These are the outputs (and intermediate steps) of the compilation process.
    • Examples:
      • ITEM_TYPE(X) – entry in the obscurely named tcache table for X (what is returned by the rather-more-clearly-named lookup_item_type)
      • PREDICATES(X) – entry in the predicates table
      • ADT(X) – ADT node for a struct (this may want to be more fine-grained, particularly to cover the ivars)
      • MIR(X) – the MIR for the item X
      • LLVM(X) – the LLVM IR for the item X
      • OBJECT(X) – the object code generated by compiling some item X; the precise way that this is saved will depend on whether we use .o files that are linked together, or if we attempt to amend the shared library in place.
  • Procedure nodes. These represent various passes performed by the compiler. For example, the act of type checking a fn body, or the act of constructing MIR for a fn body. These are the “glue” nodes that wind up reading the inputs and creating the outputs, and hence which ultimately tie the graph together.
    • Examples:
      • COLLECT(X) – the collect code executing on item X
      • WFCHECK(X) – the wfcheck code executing on item X
      • BORROWCK(X) – the borrowck code executing on item X

To see how this all fits together, let’s consider the graph for a simple example:

fn foo() {
    bar();
}

fn bar() {
}

This might generate a graph like the following (the following sections will describe how this graph is constructed). Note that this is not a complete graph, it only shows the data needed to produce MIR(foo).

BODY(foo) ----------------------------> TYPECK(foo) --> MIR(foo)
                                          ^ ^ ^ ^         |
SIG(foo) ----> COLLECT(foo)               | | | |         |
                 |                        | | | |         v
                 +--> ITEM_TYPE(foo) -----+ | | |      LLVM(foo)
                 +--> PREDICATES(foo) ------+ | |         |
                                              | |         |
SIG(bar) ----> COLLECT(bar)                   | |         v
                 |                            | |     OBJECT(foo)
                 +--> ITEM_TYPE(bar) ---------+ |
                 +--> PREDICATES(bar) ----------+

As you can see, this graph indicates that if the signature of either function changes, we will need to rebuild the MIR for foo. But there is no path from the body of bar to the MIR for foo, so changes there need not trigger a rebuild (we are assuming here that bar is not inlined into foo; see the section on optimizations for more details on how to handle those sorts of dependencies).

Building the graph

It is very important that the dependency graph contain all edges. If any edges are missing, we will get inconsistent builds, where something that should have been rebuilt was not. Hand-coding a graph like this, therefore, is probably not the best choice – we might get it right at first, but it’s easy for such a setup to fall out of sync as the code is edited. (For example, if a new table is added, or a function starts reading data that it didn’t before.)

Another consideration is compiler plugins. At present, of course, we don’t have a stable API for such plugins, but eventually we’d like to support a rich family of them, and they may want to participate in the incremental compilation system as well. So we need to have an idea of what data a plugin accesses and modifies, and for what purpose.

The basic strategy then is to build the graph dynamically with an API that looks something like this:

  • push_procedure(procedure_node)
  • pop_procedure(procedure_node)
  • read_from(data_node)
  • write_to(data_node)

Here, the procedure_node arguments are one of the procedure labels above (like COLLECT(X)), and the data_node arguments are either HIR or IR nodes (e.g., SIG(X), MIR(X)).

The idea is that we maintain for each thread a stack of active procedures. When push_procedure is called, a new entry is pushed onto that stack, and when pop_procedure is called, an entry is popped. When read_from(D) is called, we add an edge from D to the top of the stack (it is an error if the stack is empty). Similarly, write_to(D) adds an edge from the top of the stack to D.
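
Sketched in code, the tracking API might look like the following (all types and names here are illustrative, not the actual compiler’s):

#[derive(Clone, Copy, Debug)]
enum DataNode { Sig(u32), Body(u32), ItemType(u32), Mir(u32) }

#[derive(Clone, Copy, Debug)]
enum ProcNode { Collect(u32), Typeck(u32), Borrowck(u32) }

#[derive(Clone, Copy, Debug)]
enum Node { Data(DataNode), Proc(ProcNode) }

struct DepGraph {
    stack: Vec<ProcNode>,     // currently active procedures
    edges: Vec<(Node, Node)>, // directed (from, to) edges
}

impl DepGraph {
    fn push_procedure(&mut self, p: ProcNode) { self.stack.push(p); }
    fn pop_procedure(&mut self) { self.stack.pop(); }

    /// The active procedure read `d`: edge from `d` to the stack top.
    fn read_from(&mut self, d: DataNode) {
        let top = *self.stack.last().expect("read_from with no active procedure");
        self.edges.push((Node::Data(d), Node::Proc(top)));
    }

    /// The active procedure wrote `d`: edge from the stack top to `d`.
    fn write_to(&mut self, d: DataNode) {
        let top = *self.stack.last().expect("write_to with no active procedure");
        self.edges.push((Node::Proc(top), Node::Data(d)));
    }
}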

Naturally it is easy to misuse the above methods: one might forget to push/pop a procedure at the right time, or fail to invoke read/write. There are a number of refactorings we can do on the compiler to make this scheme more robust.

Procedures

Most of the compiler passes operate on one item at a time. Nonetheless, they are largely encoded using the standard visitor, which walks all HIR nodes. We can refactor most of them to instead use an outer visitor, which walks items, and an inner visitor, which walks a particular item. (Many passes, such as borrowck, already work this way.) This outer visitor will be parameterized with the label for the pass, and will automatically push/pop procedure nodes as appropriate. This means that as long as you base your pass on the generic framework, you don’t really have to worry.

While I described the general case of a stack of procedure nodes, it may be desirable to try to maintain the invariant that there is only ever one procedure node on the stack at a time. Otherwise, failing to push/pop a procedure at the right time could result in edges being added to the wrong procedure. It is likely possible to refactor things to maintain this invariant, but that has to be determined as we go.

IR nodes

Adding edges to the IR nodes that represent the compiler’s intermediate byproducts can be done by leveraging privacy. The idea is to enforce the use of accessors to the maps and so forth, rather than allowing direct access. These accessors will call the read_from and write_to methods as appropriate to add edges to/from the current active procedure.
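
For example, reusing the illustrative DepGraph and DataNode types from the sketch above, a tracked table might look like this:

use std::collections::HashMap;

/// The inner map is private, so every access must go through the
/// accessors, which record the appropriate dependency edges.
struct TrackedMap<V> {
    map: HashMap<u32, V>, // keyed by item id
}

impl<V> TrackedMap<V> {
    fn get(&self, graph: &mut DepGraph, id: u32) -> Option<&V> {
        graph.read_from(DataNode::ItemType(id)); // reader depends on entry
        self.map.get(&id)
    }

    fn insert(&mut self, graph: &mut DepGraph, id: u32, v: V) {
        graph.write_to(DataNode::ItemType(id)); // active procedure produced it
        self.map.insert(id, v);
    }
}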

HIR nodes

HIR nodes are a bit trickier to encapsulate. After all, the HIR map itself gives access to the root of the tree, which in turn gives access to everything else – and encapsulation is harder to enforce here.

Some experimentation will be required here, but the rough plan is to:

  1. Leverage the HIR: move away from storing the HIR as one large tree, and instead have a tree of items, with each item containing only its own content.
    • This way, giving access to the HIR node for an item doesn’t implicitly give access to all of its subitems.
    • Ideally this would match precisely the HIR nodes we set up, which means that e.g. a function would have a subtree corresponding to its signature, and a separate subtree corresponding to its body.
    • We can still register the lexical nesting of items by linking “indirectly” via a DefId.
  2. Annotate the HIR map accessor methods so that they add appropriate read/write edges.

This will integrate with the “default visitor” described under procedure nodes. This visitor can hand off just an opaque id for each item, requiring the pass itself to go through the map to fetch the actual HIR, thus triggering a read edge (we might also bake this behavior into the visitor for convenience).

Persisting the graph

Once we’ve built the graph, we have to persist it, along with some associated information. The idea is that the compiler, when invoked, will be supplied with a directory. It will store temporary files in there. We could also consider extending the design to support use by multiple simultaneous compiler invocations, which could mean incremental compilation results even across branches, much like ccache (but this may require tweaks to the GC strategy).

Once we get to the point of persisting the graph, we don’t need the full details of the graph. The procedure nodes, in particular, can be removed: they exist only to create links between the other nodes. To remove them, we first compute the transitive reachability relationship and then drop the procedure nodes out of the graph, leaving only the HIR nodes (inputs) and IR nodes (outputs). (In fact, we only care about the IR nodes that we intend to persist, which may be only a subset of the IR nodes, so we can also drop those that we do not plan to persist.)

For each HIR node, we will hash the HIR and store that alongside the node. This indicates precisely the state of the node at the time. Note that we only need to hash the HIR itself; contextual information (like use statements) that are needed to interpret the text will be part of a separate HIR node, and there should be edges from that node to the relevant compiler data structures (such as the name resolution tables).

For each IR node, we will serialize the relevant information from the table and store it. The following data will need to be serialized:

  • Types, regions, and predicates
  • ADT definitions
  • MIR definitions
  • Identifiers
  • Spans

This list was gathered primarily by spelunking through the compiler. It is probably somewhat incomplete. The appendix below lists an exhaustive exploration.

Reusing and garbage collecting artifacts

The general procedure when the compiler starts up in incremental mode will be to parse and macro expand the input, create the corresponding set of HIR nodes, and compute their hashes. We can then load the previous dependency graph and reconcile it against the current state:

  • If the dep graph contains a HIR node that is no longer present in the source, that node is queued for deletion.
  • If the same HIR node exists in both the dep graph and the input, but the hash has changed, that node is queued for deletion.
  • If there is a HIR node that exists only in the input, it is added to the dep graph with no dependencies.

We then delete the transitive closure of nodes queued for deletion (that is, all the HIR nodes that have changed or been removed, and all nodes reachable from those HIR nodes). As part of the deletion process, we remove whatever on disk artifact that may have existed.
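
A sketch of this reconciliation step, assuming illustrative types where each HIR node is keyed by a stable id and a content hash:

use std::collections::HashMap;

type HirId = u64;

/// Compare the previous compilation's hashes against the current ones
/// and return the HIR nodes that must be queued for deletion.
fn queue_deletions(
    prev: &HashMap<HirId, u64>,
    current: &HashMap<HirId, u64>,
) -> Vec<HirId> {
    let mut queued = Vec::new();
    for (id, old_hash) in prev {
        match current.get(id) {
            None => queued.push(*id),                     // node was removed
            Some(h) if h != old_hash => queued.push(*id), // node changed
            _ => {}                                       // unchanged: keep
        }
    }
    // The caller then deletes everything reachable from `queued` in the
    // dependency graph, along with the corresponding on-disk artifacts.
    queued
}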

Handling spans

There are times when the precise span of an item is a significant part of its metadata. For example, debuginfo needs to identify line numbers and so forth. However, editing one fn will affect the line numbers for all subsequent fns in the same file, and it’d be best if we can avoid recompiling all of them. Our plan is to phase span support in incrementally:

  1. Initially, the AST hash will include the filename/line/column, which does mean that later fns in the same file will have to be recompiled (somewhat unnecessarily).
  2. Eventually, it would be better to encode spans by identifying a particular AST node (relative to the root of the item). Since we are hashing the structure of the AST, we know the AST from the previous and current compilation will match, and thus we can compute the current span by finding the corresponding AST node and loading its span. This will require some refactoring and work however.

Optimization and codegen units

There is an inherent tension between incremental compilation and full optimization. Full optimization may perform inlining and inter-function analysis, which can cause small edits in one function to affect the generated code of another. This situation is further exacerbated by the fact that LLVM does not provide any means to track when one function was inlined into another, or when some sort of interprocedural analysis took place (to the best of our knowledge, at least).

This RFC proposes a simple mechanism for permitting aggressive optimization, such as inlining, while also supporting reasonable incremental compilation. The idea is to create codegen units that compartmentalize closely related functions (for example, on a module boundary). This means that those compartmentalized functions may analyze one another, while treating functions from other compartments as opaque entities. This means that when a function in compartment X changes, we know that functions from other compartments are unaffected and their object code can be reused. Moreover, while the other functions in compartment X must be re-optimized, we can still reuse the existing LLVM IR. (These are the same codegen units as we use for parallel codegen, but set up differently.)

In terms of the dependency graph, we would create one IR node representing the codegen unit. This would have the object code as an associated artifact. We would also have edges from each component of the codegen unit. As today, generic or inlined functions would not belong to any codegen unit, but rather would be instantiated anew into each codegen unit in which they are (transitively) referenced.

There is an analogy here with C++, which naturally faces the same problems. In that setting, templates and inlineable functions are often placed into header files. Editing those header files naturally triggers more recompilation. The compiler could employ a similar strategy by replicating things that look like good candidates for inlining into each module; call graphs and profiling information may be a good input for such heuristics.

Testing strategy

If we are not careful, incremental compilation has the potential to produce an infinite stream of irreproducible bug reports, so it’s worth considering how we can best test this code.

Regression tests

The first and most obvious piece of infrastructure is something for reliable regression testing. The plan is simply to have a series of sources and patches. The source will have each patch applied in sequence, rebuilding (incrementally) at each point. We can then check that (a) we only rebuilt what we expected to rebuild and (b) compare the result with the result of a fresh build from scratch. This allows us to build up tests for specific scenarios or bug reports, but doesn’t help with finding bugs in the first place.

Replaying crates.io versions and git history

The next step is to search across crates.io for consecutive releases. For a given package, we can checkout version X.Y and then version X.(Y+1) and check that incrementally building from one to the other is successful and that all tests still yield the same results as before (pass or fail).

A similar search can be performed across git history, where we identify pairs of consecutive commits. This has the advantage of being more fine-grained, but the disadvantage of being a MUCH larger search space.

Fuzzing

The problem with replaying crates.io versions and even git commits is that they are probably much larger changes than the typical recompile. Another option is to use fuzzing, making “innocuous” changes that should trigger a recompile. Fuzzing is made easier here because we have an oracle – we can check that the results of recompiling incrementally match the results of compiling from scratch. It is also not necessary that the edits produce valid Rust code; in particular, we want to test that the proper errors are reported when code is invalid. @nrc also suggested a clever hybrid, where we use git commits as a source for the fuzzer’s edits, gradually building up the commit.

Drawbacks

The primary drawback is that incremental compilation may introduce a new vector for bugs. The design mitigates this concern by attempting to make the construction of the dependency graph as automated as possible. We also describe automated testing strategies.

Alternatives

This design is an evolution from RFC 594.

Unresolved questions

None.

  • Feature Name: intrinsic-semantics
  • Start Date: 2015-09-29
  • RFC PR: rust-lang/rfcs#1300
  • Rust Issue: N/A

Summary

Define the general semantics of intrinsic functions. This does not define the semantics of the individual intrinsics; instead it defines the semantics around intrinsic functions in general.

Motivation

Intrinsics are currently poorly-specified in terms of how they function. This means they are a cause of ICEs and general confusion. The poor specification of them also means discussion affecting intrinsics gets mired in opinions about what intrinsics should be like and how they should act or be implemented.

Detailed design

Intrinsics are currently implemented by generating the code for the intrinsic at the call site. This allows for intrinsics to be implemented much more efficiently in many cases. For example, transmute is able to evaluate the input expression directly into the storage for the result, removing a potential copy. This is the main idea of intrinsics, a way to generate code that is otherwise inexpressible in Rust.

Keeping this in-place behaviour is desirable, so this RFC proposes that intrinsics should only be usable as functions when called. This is not a change from the current behaviour, as you already cannot use intrinsics as function pointers. Using an intrinsic in any way other than directly calling it should be considered an error.

Intrinsics should continue to be defined and declared the same way. The rust-intrinsic and platform-intrinsic ABIs indicate that the function is an intrinsic function.
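
To illustrate the rule, something like the following sketch applies on a nightly compiler of this era; the exact feature gates and diagnostics may differ:

#![feature(intrinsics)]

extern "rust-intrinsic" {
    fn size_of<T>() -> usize;
}

fn main() {
    // OK: a direct call to the intrinsic.
    let n = unsafe { size_of::<u32>() };
    println!("{}", n);

    // ERROR: intrinsics cannot be used as function values, so the
    // following coercion to a function pointer is rejected:
    // let f: unsafe fn() -> usize = size_of::<u32>;
}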

Drawbacks

  • Fewer bikesheds to paint.
  • Doesn’t allow intrinsics to be used as regular functions. (Note that this is not something we have evidence to suggest is a desired property, as it is currently the case anyway)

Alternatives

  • Allow coercion to regular functions and generate wrappers. This is similar to how we handle named tuple constructors. Doing this undermines the idea of intrinsics as a way of getting the compiler to generate specific code at the call-site however.
  • Do nothing.

Unresolved questions

None.

Summary

Add some additional utility methods to OsString and OsStr.

Motivation

OsString and OsStr are extremely bare at the moment; some utilities would make them easier to work with. The given set of utilities is taken from String and doesn’t add any additional restrictions to the implementation.

I don’t think any of the proposed methods are controversial.

Detailed design

Add the following methods to OsString:

/// Creates a new `OsString` with the given capacity. The string will be able
/// to hold exactly `capacity` bytes without reallocating. If `capacity` is 0,
/// the string will not allocate.
///
/// See main `OsString` documentation for information about encoding.
fn with_capacity(capacity: usize) -> OsString;

/// Truncates `self` to zero length.
fn clear(&mut self);

/// Returns the number of bytes this `OsString` can hold without reallocating.
///
/// See `OsString` introduction for information about encoding.
fn capacity(&self) -> usize;

/// Reserves capacity for at least `additional` more bytes to be inserted in the
/// given `OsString`. The collection may reserve more space to avoid frequent
/// reallocations.
fn reserve(&mut self, additional: usize);

/// Reserves the minimum capacity for exactly `additional` more bytes to be
/// inserted in the given `OsString`. Does nothing if the capacity is already
/// sufficient.
///
/// Note that the allocator may give the collection more space than it
/// requests. Therefore capacity can not be relied upon to be precisely
/// minimal. Prefer reserve if future insertions are expected.
fn reserve_exact(&mut self, additional: usize);

Add the following methods to OsStr:

/// Checks whether `self` is empty.
fn is_empty(&self) -> bool;

/// Returns the number of bytes in this string.
///
/// See `OsStr` introduction for information about encoding.
fn len(&self) -> usize;
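
Taken together, these methods would support patterns like the following sketch, assuming the methods above have landed:

use std::ffi::OsString;

fn main() {
    let mut buf = OsString::with_capacity(10);
    assert!(buf.capacity() >= 10);

    buf.push("hello");
    assert!(!buf.as_os_str().is_empty());
    assert_eq!(buf.as_os_str().len(), 5); // bytes in the internal encoding

    buf.clear();
    assert!(buf.as_os_str().is_empty());

    buf.reserve(100); // grow ahead of further pushes
    assert!(buf.capacity() >= 100);
}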

Drawbacks

The meaning of len() might be a bit confusing because it’s the size of the internal representation on Windows, which isn’t otherwise visible to the user.

Alternatives

None.

Unresolved questions

None.

Summary

This RFC describes the Rust Language Server (RLS). This is a program designed to service IDEs and other tools. It offers a new access point to compilation and APIs for getting information about a program. The RLS can be thought of as an alternate compiler, but internally will use the existing compiler.

Using the RLS offers very low latency compilation. This allows for an IDE to present information based on compilation to the user as quickly as possible.

Requirements

To be concrete about the requirements for the RLS, it should enable the following actions:

  • show compilation errors and warnings, updated as the user types,
  • code completion as the user types,
  • highlight all references to an item,
  • find all references to an item,
  • jump to definition.

These requirements will be covered in more detail in later sections.

History note

This RFC started as a more wide-ranging RFC. Some of the details have been scaled back to allow for more focused and incremental development.

Parts of the RFC dealing with robust compilation have been removed - work here is ongoing and mostly doesn’t require an RFC.

The RLS was earlier referred to as the oracle.

Motivation

Modern IDEs are large and complex pieces of software; creating a new one from scratch for Rust would be impractical. Therefore we need to work with existing IDEs (such as Eclipse, IntelliJ, and Visual Studio) to provide functionality. These IDEs provide excellent editor and project management support out of the box, but know nothing about the Rust language. This information must come from the compiler.

An important aspect of IDE support is that response times must be extremely quick. Users expect some feedback as they type. Running normal compilation of an entire project is far too slow. Furthermore, as the user is typing, the program will not be a valid, complete Rust program.

We expect that an IDE may have its own lexer and parser. This is necessary for the IDE to quickly give parse errors as the user types. Editors are free to rely on the compiler’s parsing if they prefer (the compiler will do its own parsing in any case). Further information (name resolution, type information, etc.) will be provided by the RLS.

Requirements

We stated some requirements in the summary, here we’ll cover more detail and the workflow between IDE and RLS.

The RLS should be safe to use in the face of concurrent actions. For example, multiple requests for compilation could occur, with later requests occurring before earlier requests have finished. There could be multiple clients making requests to the RLS, some of which may mutate its data. The RLS should provide reliable and consistent responses. However, it is not expected that clients are totally isolated; e.g., if client 1 updates the program and client 2 then requests information about the program, client 2’s response will reflect the changes made by client 1, even if these are not otherwise known to client 2.

Show compilation errors and warnings, updated as the user types

The IDE will request compilation of the in-memory program. The RLS will compile the program and asynchronously supply the IDE with errors and warnings.

Code completion as the user types

The IDE will request compilation of the in-memory program and request code-completion options for the cursor position. The RLS will compile the program. As soon as it has enough information for code-completion it will return options to the IDE.

  • The RLS should return code-completion options asynchronously to the IDE. Alternatively, the RLS could block the IDE’s request for options.
  • The RLS should not filter the code-completion options. For example, if the user types foo.ba where foo has available fields bar and qux, it should return both these fields, not just bar. The IDE can perform its own filtering since it might want to perform spell checking, etc. Put another way, the RLS is not a code completion tool, but supplies the low-level data that a code completion tool uses to provide suggestions.

Highlight all references to an item

The IDE requests all references in the same file based on a position in the file. The RLS returns a list of spans.

Find all references to an item

The IDE requests all references based on a position in the file. The RLS returns a list of spans.

Jump to definition

The IDE requests the definition of an item based on a position in a file. The RLS returns a list of spans (a list is necessary since, for example, a dynamically dispatched trait method could be defined in multiple places).

Detailed design

Architecture

The basic requirements for the architecture of the RLS are that it should be:

  • reusable by different clients (IDEs, tools, …),
  • fast (we must provide semantic information about a program as the user types),
  • able to handle multi-crate programs,
  • consistent (it should handle multiple, potentially mutating, concurrent requests).

The RLS will be a long-running daemon process. Communication between the RLS and an IDE will be via IPC calls; tools (for example, Racer) will also be able to use the RLS as an in-process library. The RLS will include the compiler as a library.

The RLS has three main components - the compiler, a database, and a work queue.

The RLS accepts two kinds of requests - compilation requests and queries. It will also push data to registered programs (generally triggered by compilation completing). Essentially, all communication with the RLS is asynchronous (when used as an in-process library, the client will be able to use synchronous function calls too).

The work queue is used to sequentialise requests and ensure consistency of responses. Both compilation requests and queries are stored in the queue. Some compilation requests can cause earlier compilation requests to be canceled. Queries blocked on the earlier compilation then become blocked on the new request.
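
A sketch of the queue’s vocabulary (illustrative types only, not a proposed API):

use std::path::PathBuf;

enum Request {
    /// Compile a crate, including any unsaved in-memory edits.
    Compile { build_cmd: Vec<String>, dirty_files: Vec<(PathBuf, String)> },
    /// Answered from the database once the relevant compilation completes.
    Query(Query),
}

enum Query {
    Completions { file: PathBuf, line: u32, column: u32 },
    ReferencesInFile { file: PathBuf, line: u32, column: u32 },
    AllReferences { file: PathBuf, line: u32, column: u32 },
    Definition { file: PathBuf, line: u32, column: u32 },
}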

In the future, we should move queries ahead of compilation requests where possible.

When compilation completes, the database is updated (see below for more details). All queries are answered from the database. The database has data for the whole project, not just one crate. This also means we don’t need to keep the compiler’s data in memory.

Compilation

The RLS is somewhat parametric in its compilation model. Theoretically, it could run a full compile on the requested crate, however this would be too slow in practice.

The general procedure is that the IDE (or other client) requests that the RLS compile a crate. It is up to the IDE to interact with Cargo (or some other build system) in order to produce the correct build command and to ensure that any dependencies are built.

Initially, the RLS will do a standard incremental compile on the specified crate. See RFC PR 1298 for more details on incremental compilation.

The crate being compiled should include any modifications made in the client and not yet committed to a file (e.g., changes the IDE has in memory). The client should pass such changes to the RLS along with the compilation request.

I see two ways to improve compilation times: lazy compilation and keeping the compiler in memory. We might also experiment with having the IDE specify which parts of the program have changed, rather than having the compiler compute this.

Lazy compilation

With lazy compilation the IDE requests that a specific item is compiled, rather than the whole program. The compiler compiles this item, compiling other items only as necessary to compile the requested item.

Lazy compilation should also be incremental - an item is only compiled if required and if it has changed.

Obviously, we could miss some errors with pure lazy compilation. To address this the RLS schedules both a lazy and a full (but still incremental) compilation. The advantage of this approach is that many queries scheduled after compilation can be performed after the lazy compilation, but before the full compilation.

Keeping the compiler in memory

There are still overheads with the incremental compilation approach. We must start up the compiler, initialising its data structures; we must parse the whole crate; and we must read the incremental compilation data and metadata from disk.

If we can keep the compiler in memory, we avoid these costs.

However, this would require some significant refactoring of the compiler. There is currently no way to invalidate data the compiler has already computed. It also becomes difficult to cancel compilation: if we receive two compile requests in rapid succession, we may wish to cancel the first compilation before it finishes, since it will be wasted work. This is currently easy - the compilation process is killed and all data released. However, if we want to keep the compiler in memory we must invalidate some data and ensure the compiler is in a consistent state.

Compilation output

Once compilation is finished, the RLS’s database must be updated. Errors and warnings produced by the compiler are stored in the database. Information from name resolution and type checking is stored in the database (exactly which information will grow with time). The analysis information will be provided by the save-analysis API.

The compiler will also provide data on which (old) code has been invalidated. Any information (including errors) in the database concerning this code is removed before the new data is inserted.

Multiple crates

The RLS does not track dependencies, nor much crate information. However, it will be asked to compile many crates and it will keep track of which crate each piece of data belongs to. It will also keep track of which crates belong to a single program and will not share data between programs, even if the same crate is shared. This helps avoid versioning issues.

Versioning

The RLS will be released using the same train model as Rust. A version of the RLS is pinned to a specific version of Rust. If users want to operate with multiple versions, they will need multiple versions of the RLS (I hope we can extend multirust/rustup.rs to handle the RLS as well as Rust).

Drawbacks

It’s a lot of work. But better we do it once than each IDE doing it themselves, or having sub-standard IDE support.

Alternatives

The big design choice here is using a database rather than the compiler’s data structures. The primary motivation for this is the ‘find all references’ requirement. References could be in multiple crates, so we would need to reload incremental compilation data (which must include the serialised MIR, or something equivalent) for all crates, then search this data for matching identifiers. Assuming the serialisation format is not too complex, this should be possible in a reasonable amount of time. Since identifiers might be in function bodies, we can’t rely on metadata.

This is a reasonable alternative, and may be simpler than the database approach. However, it is not planned to output this data in the near future (the initial plan for incremental compilation is to not store the information required to re-check function bodies). This approach might be too slow for very large projects, we might wish to do searches in the future that cannot be answered without doing the equivalent of a database join, and the database simplifies questions about concurrent accesses.

We could provide the RLS only as a library, rather than providing an API via IPC. An IPC interface allows a single instance of the RLS to service multiple programs, is language-agnostic, and allows for easy asynchronous-ness between the RLS and its clients. It also provides isolation - a panic in the RLS will not cause the IDE to crash, nor can a long-running operation delay the IDE. Most of these advantages could be captured using threads. However, the cost of implementing an IPC interface is fairly low and means less effort for clients, so it seems worthwhile to provide.

Extending this idea, we could do less than the RLS - provide a high-level library API for the Rust compiler and let other projects do the rest. In particular, Racer does an excellent job at providing the information the RLS would provide without much information from the compiler. This is certainly less work for the compiler team and more flexible for clients. On the other hand, it means more work for clients and possible fragmentation. Duplicated effort means that different clients will not benefit from each other’s innovations.

The RLS could do more - actually perform some of the processing tasks usually done by IDEs (such as editing source code) or other tools (refactoring, reformatting, etc.).

Unresolved questions

A problem is that Visual Studio uses UTF16 while Rust uses UTF8, there is (I understand) no efficient way to convert between byte counts in these systems. I’m not sure how to address this. It might require the RLS to be able to operate in UTF16 mode. This is only a problem with byte offsets in spans, not with row/column data (the RLS will supply both). It may be possible for Visual Studio to just use the row/column data, or convert inefficiently to UTF16. I guess the question comes down to should this conversion be done in the RLS or the client. I think we should start assuming the client, and perhaps adjust course later.

What kind of IPC protocol to use? HTTP is popular and simple to deal with. It’s platform-independent and used in many similar pieces of software. On the other hand it is heavyweight and requires pulling in large libraries, and requires some attention to security issues. Alternatives are some kind of custom protocol, or using a solution like Thrift. My preference is for HTTP, since it has been proven in similar situations.

Summary

Refine the unguarded-escape-hatch from RFC 1238 (nonparametric dropck) so that instead of a single attribute side-stepping all dropck constraints for a type’s destructor, we instead have a more focused system that specifies exactly which type and/or lifetime parameters the destructor is guaranteed not to access.

Specifically, this RFC proposes adding the capability to attach attributes to the binding sites for generic parameters (i.e. lifetime and type parameters). Atop that capability, this RFC proposes adding a #[may_dangle] attribute that indicates that a given lifetime or type holds data that must not be accessed during the dynamic extent of that drop invocation.

As a side-effect, enable adding attributes to the formal declarations of generic type and lifetime parameters.

The proposal in this RFC is intended as a temporary solution (along the lines of #[fundamental]) and will not be stabilized as-is. Instead, we anticipate a more comprehensive approach to be proposed in a follow-up RFC.

Motivation

The unguarded escape hatch (UGEH) from RFC 1238 is a blunt instrument: when you use unsafe_destructor_blind_to_params, it is asserting that your destructor does not access borrowed data whose type includes any lifetime or type parameter of the type.

For example, the current destructor for RawVec<T> (in liballoc/) looks like this:

impl<T> Drop for RawVec<T> {
    #[unsafe_destructor_blind_to_params]
    /// Frees the memory owned by the RawVec *without* trying to Drop its contents.
    fn drop(&mut self) {
        [... free memory using global system allocator ...]
    }
}

The above is sound today, because the above destructor does not call any methods that can access borrowed data in the values of type T, and so we do not need to enforce the drop-ordering constraints imposed when you leave out the unsafe_destructor_blind_to_params attribute.

While the above attribute suffices for many use cases today, it is not fine-grained enough for other cases of interest. In particular, it cannot express that the destructor will not access borrowed data behind a subset of the type parameters.

Here are two concrete examples of where the need for this arises:

Example: CheckedHashMap

The original Sound Generic Drop proposal (RFC 769) had an appendix with an example of a CheckedHashMap<K, V> type that called the hashcode method for all of the keys in the map in its destructor. This is clearly a type where we cannot claim that we do not access borrowed data potentially hidden behind K, so it would be unsound to use the blunt unsafe_destructor_blind_to_params attribute on this type.

However, the values of the V parameter to CheckedHashMap are, in all likelihood, not accessed by the CheckedHashMap destructor. If that is the case, then it should be sound to instantiate V with a type that contains references to other parts of the map (e.g., references to the keys or to other values in the map). However, we cannot express this today: There is no way to say that the CheckedHashMap will not access borrowed data that is behind just V.

Example: Vec<T, A:Allocator=DefaultAllocator>

The Rust developers have been talking for a long time about adding an Allocator trait that would allow users to override the allocator used for the backing storage of collection types like Vec and HashMap.

For example, we would like to generalize the RawVec given above as follows:

#[unsafe_no_drop_flag]
pub struct RawVec<T, A:Allocator=DefaultAllocator> {
    ptr: Unique<T>,
    cap: usize,
    alloc: A,
}

impl<T, A:Allocator> Drop for RawVec<T, A> {
    #[should_we_put_ugeh_attribute_here_or_not(???)]
    /// Frees the memory owned by the RawVec *without* trying to Drop its contents.
    fn drop(&mut self) {
        [... free memory using self.alloc ...]
    }
}

However, we cannot soundly add an allocator parameter to a collection that today uses the unsafe_destructor_blind_to_params UGEH attribute in the destructor that deallocates, because that blunt instrument would allow someone to write this:

// (`ArenaAllocator`, when dropped, automatically frees its allocated blocks)

// (Usual pattern for assigning same extent to `v` and `a`.)
let (v, a): (Vec<Stuff, &ArenaAllocator>, ArenaAllocator);

a = ArenaAllocator::new();
v = Vec::with_allocator(&a);

... v.push(stuff) ...

// at end of scope, `a` may be dropped before `v`, invalidating
// soundness of subsequent invocation of destructor for `v` (because
// that would try to free buffer of `v` via `v.buf.alloc` (== `&a`)).

The only way today to disallow the above unsound code would be to remove unsafe_destructor_blind_to_params from RawVec/ Vec, which would break other code (for example, code using Vec as the backing storage for cyclic graph structures).

Detailed design

First off: The proposal in this RFC is intended as a temporary solution (along the lines of #[fundamental]) and will not be stabilized as-is. Instead, we anticipate a more comprehensive approach to be proposed in a follow-up RFC.

Having said that, here is the proposed short-term solution:

  1. Add the ability to attach attributes to syntax that binds formal lifetime or type parameters. For the purposes of this RFC, the only place in the syntax that requires such attributes are impl blocks, as in impl<T> Drop for Type<T> { ... }

  2. Add a new fine-grained attribute, may_dangle, which is attached to the binding sites for lifetime or type parameters on a Drop implementation. This RFC will sometimes call this attribute the “eyepatch”, since it does not make dropck totally blind; just blind on one “side”.

  3. Add a new requirement that any Drop implementation that uses the #[may_dangle] attribute must be declared as an unsafe impl. This reflects the fact that such Drop implementations have an additional constraint on their behavior (namely that they cannot access certain kinds of data) that will not be verified by the compiler and thus must be verified by the programmer.

  4. Remove unsafe_destructor_blind_to_params, since all uses of it should be expressible via #[may_dangle].

Attributes on lifetime or type parameters

This is a simple extension to the syntax.

It is guarded by the feature gate generic_param_attrs.

Constructions like the following will now become legal.

Example of eyepatch attribute on a single type parameter:

unsafe impl<'a, #[may_dangle] X, Y> Drop for Foo<'a, X, Y> {
    ...
}

Example of eyepatch attribute on a lifetime parameter:

unsafe impl<#[may_dangle] 'a, X, Y> Drop for Bar<'a, X, Y> {
    ...
}

Example of eyepatch attribute on multiple parameters:

unsafe impl<#[may_dangle] 'a, X, #[may_dangle] Y> Drop for Baz<'a, X, Y> {
    ...
}

These attributes are only written next to the formal binding sites for the generic parameters. The usage sites, points which refer back to the parameters, continue to disallow the use of attributes.

So while this is legal syntax:

unsafe impl<'a, #[may_dangle] X, Y> Drop for Foo<'a, X, Y> {
    ...
}

the following would be illegal syntax (at least for now):

unsafe impl<'a, X, Y> Drop for Foo<'a, #[may_dangle] X, Y> {
    ...
}

The “eyepatch” attribute

Add a new attribute, #[may_dangle] (the “eyepatch”).

It is guarded by the feature gate dropck_eyepatch.

The eyepatch is similar to unsafe_destructor_blind_to_params: it is part of the Drop implementation, and it is meant to assert that a destructor is guaranteed not to access certain kinds of data accessible via self.

The main difference is that the eyepatch is applied to a single generic parameter: #[may_dangle] ARG. This specifies exactly what the destructor is blind to (i.e., what will dropck treat as inaccessible from the destructor for this type).

There are two things one can supply as the ARG for a given eyepatch: one of the type parameters for the type, or one of the lifetime parameters for the type.

When used on a type, e.g. #[may_dangle] T, the programmer is asserting that the only uses of values of that type will be to move or drop them. Thus, no fields will be accessed nor methods called on values of such a type (apart from any access performed by the destructor for the type when the values are dropped). This ensures that no dangling references (such as when T is instantiated with &'a u32) are ever accessed in the scenario where 'a has the same lifetime as the value being currently destroyed (and thus the precise order of destruction between the two is unknown to the compiler).

When used on a lifetime, e.g. #[may_dangle] 'a, the programmer is asserting that no data behind a reference of lifetime 'a will be accessed by the destructor. Thus, no fields will be accessed nor methods called on values of type &'a Struct, ensuring that again no dangling references are ever accessed by the destructor.

Require unsafe on Drop implementations using the eyepatch

The final detail is to add an additional check to the compiler to ensure that any use of #[may_dangle] on a Drop implementation imposes a requirement that that implementation block use unsafe impl.

This reflects the fact that use of #[may_dangle] is a programmer-provided assertion about the behavior of the Drop implementation that must be validated manually by the programmer. It is analogous to other uses of unsafe impl (apart from the fact that the Drop trait itself is not an unsafe trait).

Examples adapted from the Rustonomicon

So, adapting some examples from the Rustonomicon Drop Check chapter, we would be able to write the following.

Example of eyepatch on a lifetime parameter:

struct InspectorA<'a>(&'a u8, &'static str);

unsafe impl<#[may_dangle] 'a> Drop for InspectorA<'a> {
    fn drop(&mut self) {
        println!("InspectorA(_, {}) knows when *not* to inspect.", self.1);
    }
}

Example of eyepatch on a type parameter:

use std::fmt;

struct InspectorB<T: fmt::Display>(T, &'static str);

unsafe impl<#[may_dangle] T: fmt::Display> Drop for InspectorB<T> {
    fn drop(&mut self) {
        println!("InspectorB(_, {}) knows when *not* to inspect.", self.1);
    }
}

Both of the above examples behave much the same as if we had used the old unsafe_destructor_blind_to_params UGEH attribute.

Example: RawVec

To generalize RawVec from the motivation with an Allocator correctly (that is, soundly and without breaking existing code), we would now write:

unsafe impl<#[may_dangle] T, A: Allocator> Drop for RawVec<T, A> {
    /// Frees the memory owned by the RawVec *without* trying to Drop its contents.
    fn drop(&mut self) {
        [... free memory using self.alloc ...]
    }
}

The use of #[may_dangle] T here asserts that even though the destructor may access borrowed data through A (and thus dropck must impose drop-ordering constraints for lifetimes occurring in the type of A), the developer is guaranteeing that no access to borrowed data will occur via the type T.

The latter is not expressible today even with unsafe_destructor_blind_to_params; there is no way to say that a type will not access T in its destructor while also ensuring the proper drop-ordering relationship between RawVec<T, A> and A.
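To make the contrast concrete, here is a hedged sketch (the TraceAlloc type is hypothetical, standing in for any allocator that borrows data) of the situation where dropck must keep imposing ordering constraints through A even while T may dangle:

use std::cell::RefCell;

// Hypothetical allocator handle that logs into borrowed storage.
struct TraceAlloc<'log> {
    log: &'log RefCell<Vec<String>>,
}

// In RawVec<T, TraceAlloc<'log>>, the destructor calls into
// `self.alloc`, which may touch `*log`; dropck must therefore require
// that 'log strictly outlive the RawVec. Values of type T, by
// contrast, are only moved or dropped, which is exactly what
// `#[may_dangle] T` asserts.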

Example: Multiple Lifetimes

The InspectorA example above carried a &'static str that was always safe to access from the destructor.

If we wanted to generalize this type a bit, we might write:

struct InspectorC<'a,'b,'c>(&'a str, &'b str, &'c str);

unsafe impl<#[may_dangle] 'a, 'b, #[may_dangle] 'c> Drop for InspectorC<'a,'b,'c> {
    fn drop(&mut self) {
        println!("InspectorA(_, {}, _) knows when *not* to inspect.", self.1);
    }
}

This type, like InspectorA, is careful to only access the &str that it holds in its destructor; but now the borrowed string slice does not have 'static lifetime, so we must make sure that we do not claim that we are blind to its lifetime ('b).

(This example also illustrates that one can attach multiple instances of the eyepatch attribute to a destructor, each with a distinct input for its ARG.)

Given the definition above, this code will compile and run properly:

fn this_will_work() {
    let b; // ensure that `b` strictly outlives `i`.
    let (i,a,c);
    a = format!("a");
    b = format!("b");
    c = format!("c");
    i = InspectorC(&a, &b, &c);
}

while this code will be rejected by the compiler:

fn this_will_not_work() {
    let (a,c);
    let (i,b); // OOPS: `b` not guaranteed to survive for `i`'s destructor.
    a = format!("a");
    b = format!("b");
    c = format!("c");
    i = InspectorC(&a, &b, &c);
}

Semantics

How does this work, you might ask?

The idea is actually simple: the dropck rule stays mostly the same, except for a small twist.

The Drop-Check rule at this point essentially says:

if the type of v owns data of type D, where

(1.) the impl Drop for D is either type-parametric, or lifetime-parametric over 'a, and

(2.) the structure of D can reach a reference of type &'a _,

then 'a must strictly outlive the scope of v

The main change we want to make is to the second condition. Instead of just saying “the structure of D can reach a reference of type &'a _”, we want first to replace eyepatched lifetimes and types within D with 'static and (), respectively. Call this revised type patched(D).

Then the new condition is:

(2.) the structure of patched(D) can reach a reference of type &'a _,

Everything else is the same.

In particular, the patching substitution is only applied with respect to a particular destructor. Just because Vec<T> is blind to T does not mean that we will ignore the actual type instantiated at T in terms of drop-ordering constraints.

For example, in Vec<InspectorC<'a,'name,'c>>, even though Vec itself is blind to the whole type InspectorC<'a, 'name, 'c> when we are considering the impl Drop for Vec, we still honor the constraint that 'name must strictly outlive the Vec (because we continue to consider all D that is data owned by a value v, including when D == InspectorC<'a,'name,'c>).
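Spelled out for that example (a worked instance of the revised rule, considering D = InspectorC<'a, 'name, 'c> with the eyepatched impl Drop for InspectorC given earlier):

// D          = InspectorC<'a, 'name, 'c>
// patched(D) = InspectorC<'static, 'name, 'static>
//              ('a and 'c are eyepatched; 'name, in the 'b position, is not)
//
// patched(D) can still reach a reference of type &'name _, so 'name
// must strictly outlive the scope of v; no constraint is imposed on
// behalf of the eyepatched 'a and 'c.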

Prototype

pnkfelix has implemented a proof-of-concept implementation of the #[may_dangle] attribute. It uses the substitution machinery we already have in the compiler to express the semantics above.

Limitations of prototype (not part of design)

Here we note a few limitations of the current prototype. These limitations are not being proposed as part of the specification of the feature.

  • The compiler does not yet enforce (or even allow) the use of unsafe impl for Drop implementations that use the #[may_dangle] attribute.

Fixing the above limitations should just be a matter of engineering, not a fundamental hurdle to overcome in the feature’s design in the context of the language.

Drawbacks

Ugliness

This attribute, like the original unsafe_destructor_blind_to_params UGEH attribute, is ugly.

Unchecked assertions boo

It would be nicer to actually change the language in a way that would let us check the assertions being made by the programmer, rather than trusting them. (pnkfelix has some thoughts on this, which are mostly reflected in what he wrote in the RFC 1238 alternatives.)

Alternatives

Note: The alternatives section for this RFC is particularly note-worthy because the ideas here may serve as the basis for a more comprehensive long-term approach.

Make dropck “see again” via (focused) where-clauses

The idea is that we keep the UGEH attribute, blunt hammer that it is. You first opt out of the dropck ordering constraints via that, and then you add back in ordering constraints via where clauses.

(The ordering constraints in question would normally be implied by the dropck analysis; the point is that UGEH is opting out of that analysis, and so we are now adding them back in.)

Here is the allocator example expressed in this fashion:

impl<T, A:Allocator> Drop for RawVec<T, A> {
    #[unsafe_destructor_blind_to_params]
    /// Frees the memory owned by the RawVec *without* trying to Drop its contents.
    fn drop<'s>(&'s mut self) where A: 's {
    //                        ~~~~~~~~~~~
    //                             |
    //                             |
    // This constraint (that `A` outlives `'s`), and other conditions
    // relating `'s` and `Self` are normally implied by Rust's type
    // system, but `unsafe_destructor_blind_to_params` opts out of
    // enforcing them. This `where`-clause is opting back into *just*
    // the `A:'s` again.
    //
    // Note we are *still* opting out of `T: 's` via
    // `unsafe_destructor_blind_to_params`, and thus our overall
    // goal (of not breaking code that relies on `T` not having to
    // survive the destructor call) is accomplished.

        [... free memory using self.alloc ...]
    }
}

This approach, if we can make it work, seems fine to me. It certainly avoids a number of problems that the eyepatch attribute has.

Advantages of fn-drop-with-where-clauses:

  • Because the eyepatch attribute is limited to type and lifetime parameters, this approach is more expressive: it would allow one to put type projections into the constraints.

Drawbacks of fn-drop-with-where-clauses:

  • It's not 100% clear what our implementation strategy will be for it, while the eyepatch attribute already has a prototype.

    I actually do not give this drawback much weight; resolving it may be merely a matter of trying: e.g., build up the set of where-clauses when we construct the ADT's representation, and then have dropck instantiate and insert them as needed.

  • It might have the wrong ergonomics for developers: it seems bad to have the blunt hammer introduce all sorts of potential unsoundness, and then rely on the developer to keep the set of where-clauses on the fn drop up to date.

    This would be a pretty bad drawback, if the language and compiler were to stagnate. But my intention/goal is to eventually put in a sound compiler analysis. In other words, in the future, I will be more concerned about the ergonomics of the code that uses the sound analysis. I will not be concerned about “gotcha’s” associated with the UGEH escape hatch.

(The most important thing I want to convey is that I believe that both the eyepatch attribute and fn-drop-with-where-clauses are capable of resolving the real issues that I face today, and I would be happy for either proposal to be accepted.)

Wait for proper parametricity

As alluded to in the drawbacks, in principle we could provide similar expressiveness to that offered by the eyepatch (which is acting as a fine-grained escape hatch from dropck) by instead offering some language extension where the compiler would actually analyze the code based on programmer annotations indicating which types and lifetimes are not used by a function.

I am of two minds on this (but both favor this RFC rather than waiting for a sound compiler analysis):

  1. We will always need an escape hatch. The programmer will always need a way to assert something that she knows to be true, even if the compiler cannot prove it. (A simple example: Calling a third-party API that has not yet added the necessary annotations.)

    This RFC is proposing that we keep an escape hatch, but we make it more expressive.

  2. If we eventually do have a sound compiler analysis, I see the compiler changes and library annotations suggested by this RFC as being in line with what that compiler analysis would end up using anyway. In other words: Assume we did add some way for the programmer to write that T is parametric (e.g. T: ?Special in the RFC 1238 alternatives). Even then, we would still need the compiler changes suggested by this RFC, and at that point hopefully the task would be for the programmer to mechanically replace occurrences of #[may_dangle] T with T: ?Special (and then see if the library builds).

    In other words, I see the form suggested by this RFC as being a step towards a proper analysis, in the sense that it is getting programmers used to thinking about the individual parameters and their relationship with the container, rather than just reasoning about the container on its own without any consideration of each type/lifetime parameter.

Do nothing

If we do nothing, then we cannot add Vec<T, A:Allocator> soundly.

Unresolved questions

Is the definition of the drop-check rule sound with this patched(D) variant? (We have not proven any previous variation of the rule sound; I think it would be an interesting student project though.)

Summary

When a thread panics in Rust, the unwinding runtime currently prints a message to standard error containing the panic argument as well as the filename and line number corresponding to the location from which the panic originated. This RFC proposes a mechanism to allow user code to replace this logic with custom handlers that will run before unwinding begins.

Motivation

The default behavior is not always ideal for all programs:

  • Programs with command line interfaces do not want their output polluted by random panic messages.
  • Programs using a logging framework may want panic messages to be routed into that system so that they can be processed like other events.
  • Programs with graphical user interfaces may not have standard error attached at all and want to be notified of thread panics to potentially display an internal error dialog to the user.

The standard library previously supported (in unstable code) the registration of a set of panic handlers. This API had several issues:

  • The system supported a fixed but unspecified number of handlers, and a handler could never be unregistered once added.
  • The callbacks were raw function pointers rather than closures.
  • Handlers would be invoked on nested panics, which would result in a stack overflow if a handler itself panicked.
  • The callbacks were specified to take the panic message, file name and line number directly. This would prevent us from adding more functionality in the future, such as access to backtrace information. In addition, the presence of file names and line numbers for all panics causes some amount of binary bloat and we may want to add some avenue to allow for the omission of those values in the future.

Detailed design

A new module, std::panic, will be created with a panic handling API:

/// Unregisters the current panic handler, returning it.
///
/// If no custom handler is registered, the default handler will be returned.
///
/// # Panics
///
/// Panics if called from a panicking thread. Note that this will be a nested
/// panic and therefore abort the process.
pub fn take_handler() -> Box<Fn(&PanicInfo) + 'static + Sync + Send> { ... }

/// Registers a custom panic handler, replacing any that was previously
/// registered.
///
/// # Panics
///
/// Panics if called from a panicking thread. Note that this will be a nested
/// panic and therefore abort the process.
pub fn set_handler<F>(handler: F) where F: Fn(&PanicInfo) + 'static + Sync + Send { ... }

/// A struct providing information about a panic.
pub struct PanicInfo { ... }

impl PanicInfo {
    /// Returns the payload associated with the panic.
    ///
    /// This will commonly, but not always, be a `&'static str` or `String`.
    pub fn payload(&self) -> &(Any + Send) { ... }

    /// Returns information about the location from which the panic originated,
    /// if available.
    pub fn location(&self) -> Option<Location> { ... }
}

/// A struct containing information about the location of a panic.
pub struct Location<'a> { ... }

impl<'a> Location<'a> {
    /// Returns the name of the source file from which the panic originated.
    pub fn file(&self) -> &str { ... }

    /// Returns the line number from which the panic originated.
    pub fn line(&self) -> u32 { ... }
}

When a panic occurs, but before unwinding begins, the runtime will call the registered panic handler. After the handler returns, the runtime will then unwind the thread. If a thread panics while panicking (a “double panic”), the panic handler will not be invoked and the process will abort. Note that the thread is considered to be panicking while the panic handler is running, so a panic originating from the panic handler will result in a double panic.

The take_handler method exists to allow for handlers to “chain” by closing over the previous handler and calling into it:

let old_handler = panic::take_handler();
panic::set_handler(move |info| {
    println!("uh oh!");
    old_handler(info);
});

This is obviously a racy operation, but as a single global resource, the global panic handler should only be adjusted by applications rather than libraries, most likely early in the startup process.

The implementation of set_handler and take_handler will have to be carefully synchronized to ensure that a handler is not replaced while executing in another thread. This can be accomplished in a manner similar to that used by the log crate. take_handler and set_handler will wait until no other threads are currently running the panic handler, at which point they will atomically swap the handler out as appropriate.

Note that location will always return Some in the current implementation. It returns an Option to hedge against possible future changes to the panic system that would allow a crate to be compiled with location metadata removed to minimize binary size.
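As a usage sketch, a handler that forwards panics to some logging framework might look like the following (written against the API proposed above; the log_line sink is hypothetical):

use std::panic;

// Hypothetical sink standing in for a real logging framework.
fn log_line(line: &str) { let _ = line; }

fn install_logging_handler() {
    panic::set_handler(|info| {
        // The payload is commonly, but not always, a &'static str or String.
        let msg = if let Some(s) = info.payload().downcast_ref::<&'static str>() {
            s.to_string()
        } else if let Some(s) = info.payload().downcast_ref::<String>() {
            s.clone()
        } else {
            String::from("Box<Any>")
        };
        match info.location() {
            Some(loc) => log_line(&format!("panic at {}:{}: {}",
                                           loc.file(), loc.line(), msg)),
            None => log_line(&format!("panic: {}", msg)),
        }
    });
}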

Prior Art

C++ has a std::set_terminate function which registers a handler for uncaught exceptions, returning the old one. The handler takes no arguments.

Python passes uncaught exceptions to the global handler sys.excepthook which can be set by user code.

In Java, uncaught exceptions can be handled by handlers registered on an individual Thread, by the Thread's ThreadGroup, and by a handler registered globally. The handlers are provided with the Throwable that triggered the handler.

Drawbacks

The more infrastructure we add to interact with panics, the more attractive it becomes to use them as a more normal part of control flow.

Alternatives

Panic handlers could be run after a panicking thread has unwound rather than before. This is perhaps a more intuitive arrangement, and allows catch_panic to prevent panic handlers from running. However, running handlers before unwinding allows them access to more context, for example, the ability to take a stack trace.

PanicInfo::location could be split into PanicInfo::file and PanicInfo::line to cut down on the API size, though that would require handlers to deal with weird cases like a line number but no file being available.

RFC 1100 proposed an API based around thread-local handlers. While there are reasonable use cases for the registration of custom handlers on a per-thread basis, most of the common uses for custom handlers want to have a single set of behavior cover all threads in the process. Being forced to remember to register a handler in every thread spawned in a program is tedious and error prone, and not even possible in many cases for threads spawned in libraries the author has no control over.

While out of scope for this RFC, a future extension could add thread-local handlers on top of the global one proposed here in a straightforward manner.

The implementation could be simplified by altering the API to store, and take_handler to return, an Arc<Fn(&PanicInfo) + 'static + Sync + Send> or a bare function pointer. This seems like a somewhat weirder API, however, and the implementation proposed above should not end up complex enough to justify the change.

Unresolved questions

None at the moment.

Summary

The grammar of the Rust language should not be defined by the rustc implementation. We have a formal grammar at src/grammar, which is to be used as the canonical and formal representation of the Rust language.

Motivation

In many RFCs proposing syntactic changes (#1228, #1219 and #1192 being some of the more recently merged) the changes are described rather informally and are hard both to implement and to discuss, which leads to discussions containing a lot of guess-work.

Making src/grammar the canonical grammar, and requiring that syntactic changes be described in terms of changes to the formal grammar, should greatly simplify both the discussion and the implementation of RFCs. Using a formal grammar also allows us to discover and rule out various issues with grammar changes (e.g. ambiguities) during the design phase, rather than during the implementation phase or, even worse, after stabilisation.

Detailed design

Sadly, the grammar in question is not quite equivalent to the implementation in rustc yet. We cannot possibly hope to catch all the quirks in the rustc parser implementation; therefore something else needs to be done.

This RFC proposes the following approach to making src/grammar the canonical Rust language grammar:

  1. Fix the already known discrepancies between implementation and src/grammar;
  2. Make src/grammar a semi-canonical grammar;
  3. After a period of time transition src/grammar to a fully-canonical grammar.

Semi-canonical grammar

Once all known discrepancies between src/grammar and the rustc parser implementation are resolved, src/grammar enters the state of being the semi-canonical grammar of the Rust language.

Semi-canonical means that all new development involving syntax changes is made and discussed in terms of changes to src/grammar, and that src/grammar is in general regarded as the canonical grammar, except when new discrepancies are discovered. These discrepancies must be swiftly resolved, but the resolution will depend on what kind of discrepancy it is:

  1. For syntax changes/additions introduced after src/grammar gained the semi-canonical state, the src/grammar is canonical;
  2. For syntax that was present before src/grammar gained the semi-canonical state, in most cases the implementation is canonical.

This process is sure to become ambiguous over time as syntax is increasingly adjusted (it is harder to “blame” syntax changes than syntax additions), so the resolution of discrepancies will also come to depend more on decisions from the Rust team.

Fully-canonical grammar

After some time passes, src/grammar will transition to the state of fully canonical grammar. After src/grammar transitions into this state, for any discovered discrepancies the rustc parser implementation must be adjusted to match the src/grammar, unless decided otherwise by the RFC process.

RFC process changes for syntactic changes and additions

Once src/grammar enters the semi-canonical state, all RFCs must describe syntax additions and changes in terms of the formal src/grammar. Discussions about these changes are also expected (though not guaranteed) to become more formal and easier to follow.

Drawbacks

This RFC introduces a period of ambiguity during which neither the implementation nor src/grammar is a truly canonical representation of the Rust language. This will become less of an issue over time as discrepancies are resolved, but it's an issue nevertheless.

Alternatives

One alternative would be to skip the semi-canonical state and make src/grammar the fully-canonical grammar of the Rust language at some arbitrary point in the future.

Another alternative is to simply abandon the idea of having a formal grammar be the canonical grammar of the Rust language.

Unresolved questions

How much time should pass between src/grammar becoming semi-canonical and fully-canonical?

Summary

Extend the existing #[repr] attribute on structs with an align = "N" option to specify a custom alignment for struct types.

Motivation

The alignment of a type is normally not something the programmer worries about, as the compiler will “do the right thing” and pick an appropriate alignment for general use cases. There are situations, however, where a nonstandard alignment may be desired when operating with foreign systems. For example, these sorts of situations tend to require, or be much easier with, a custom alignment:

  • Hardware can often have obscure requirements such as “this structure is aligned to 32 bytes” when it in fact is only composed of 4-byte values. While this can typically be manually calculated and managed, it’s often also useful to express this as a property of a type to get the compiler to do a little extra work instead.
  • C compilers like gcc and clang offer the ability to specify a custom alignment for structures, and Rust can much more easily interoperate with these types if Rust can also mirror the request for a custom alignment (e.g. passing a structure to C correctly is much easier).
  • Custom alignment can often be used for various tricks here and there and is often convenient as a “let's play around with an implementation” tool. For example, it can be used to statically allocate page tables in a kernel, or to easily create an at-least-cache-line-sized structure for concurrent programming.

Currently these sorts of situations are possible in Rust but aren't necessarily the most ergonomic, as programmers must manually manage alignment. The purpose of this RFC is to provide a lightweight annotation to alter the compiler-inferred alignment of a structure to enable these situations much more easily.

Detailed design

The #[repr] attribute on structs will be extended to include a form such as:

#[repr(align = "16")]
struct MoreAligned(i32);

This structure will have an alignment of 16 bytes (as returned by mem::align_of), and in this case its size will also be 16 bytes.

Syntactically, the repr meta list will be extended to accept a meta item name/value pair with the name “align” and the value as a string which can be parsed as a u64. The restrictions on where this attribute can be placed along with the accepted values are:

  • Custom alignment can only be specified on struct declarations for now. Specifying a different alignment on perhaps enum or type definitions should be a backwards-compatible extension.
  • Alignment values must be a power of two.

Multiple #[repr(align = "..")] directives are accepted on a struct declaration, and the actual alignment of the structure will be the maximum of all align directives and the natural alignment of the struct itself.

Semantically, it will be guaranteed (modulo unsafe code) that custom alignment will always be respected. If a pointer to a non-aligned structure exists and is used, then undefined behavior may result. Local variables, objects in arrays, statics, etc., will all respect the custom alignment specified for a type.

For now, it will be illegal for any #[repr(packed)] struct to transitively contain a struct with #[repr(align)]. Specifically, the two attributes cannot be applied to the same struct, and a #[repr(packed)] struct cannot transitively contain another struct with #[repr(align)]. The flip side, including a #[repr(packed)] structure inside a #[repr(align)] one, will be allowed, as sketched below. The behavior of MSVC and gcc differ in how these properties interact, and for now we'll just yield an error while we get experience with the two attributes.
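A sketch of those rules (assuming the attribute as proposed; type names are illustrative only):

#[repr(packed)]
struct Packed(u8, u32);

// Allowed: a packed struct inside an aligned one.
#[repr(align = "16")]
struct AlignedHolder(Packed);

// Rejected: both attributes applied to the same struct.
// #[repr(packed, align = "16")]
// struct Both(u32);

// Rejected: a packed struct transitively containing an aligned one.
// #[repr(packed)]
// struct PackedHolder(AlignedHolder);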

Some examples of #[repr(align)] are:

// Raising alignment
#[repr(align = "16")]
struct Align16(i32);

assert_eq!(mem::align_of::<Align16>(), 16);
assert_eq!(mem::size_of::<Align16>(), 16);

// Lowering has no effect
#[repr(align = "1")]
struct Align1(i32);

assert_eq!(mem::align_of::<Align1>(), 4);
assert_eq!(mem::size_of::<Align1>(), 4);

// Multiple attributes take the max
#[repr(align = "8", align = "4")]
#[repr(align = "16")]
struct AlignMany(i32);

assert_eq!(mem::align_of::<AlignMany>(), 16);
assert_eq!(mem::size_of::<AlignMany>(), 16);

// Raising alignment need not alter size.
#[repr(align = "8")]
struct Align8Many {
    a: i32,
    b: i32,
    c: i32,
    d: u8,
}

assert_eq!(mem::align_of::<Align8Many>(), 8);
assert_eq!(mem::size_of::<Align8Many>(), 16);

Drawbacks

Specifying a custom alignment via a literal integer value isn't always easy. It may require usage of #[cfg_attr] in some situations, and it may otherwise be much more convenient to name a different type instead. Working with a raw integer, however, should provide the building block for other abstractions and should be maximally flexible. It also allows for a relatively straightforward implementation and understanding of the attribute.

This also currently does not allow specifying a custom alignment for a struct field (as C compilers also allow) without using a newtype structure, as sketched below. Currently #[repr] is not recognized on fields, but it would be a backwards-compatible extension to start reading it there.
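For reference, the newtype workaround mentioned above looks something like this sketch (names are illustrative only, assuming the attribute as proposed):

// The field itself cannot carry #[repr(align)], but its type can.
#[repr(align = "64")]
struct CacheLineAligned(u8);

struct PerCoreCounter {
    count: CacheLineAligned, // 64-byte aligned via the newtype
}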

Alternatives

Instead of using the #[repr] attribute as the “house” for the custom alignment, there could instead be a new #[align = "..."] attribute. This is perhaps more extensible to alignment in other locations such as a local variable (with attributes on expressions), a struct field (where #[repr] is more of an “outer attribute”), or enum variants perhaps.

Unresolved questions

  • It is likely best to simply match the semantics of C/C++ with regard to custom alignment, but has it been verified that this RFC matches the behavior of standard C compilers?

Summary

Add two methods to the std::os::unix::process::CommandExt trait to provide more control over how processes are spawned on Unix, specifically:

fn exec(&mut self) -> io::Error;
fn before_exec<F>(&mut self, f: F) -> &mut Self
    where F: FnOnce() -> io::Result<()> + Send + Sync + 'static;

Motivation

Although the standard library’s implementation of spawning processes on Unix is relatively complex, it unfortunately doesn’t provide the same flexibility as calling fork and exec manually. For example, these sorts of use cases are not possible with the Command API:

  • The exec function cannot be called without fork. Calling exec directly, without forking, is often useful on Unix to avoid spawning a new process, or to improve debuggability when the pre-exec code is some form of shim.
  • Arbitrary code cannot be run between the fork and the exec. For example, some proposed extensions to the standard library deal with the controlling tty or with session leaders. In theory any sort of arbitrary code can be run between these two syscalls, and it may not always be the case that the standard library can provide a suitable abstraction.

Note that neither of these pieces of functionality are possible on Windows as there is no equivalent of the fork or exec syscalls in the standard APIs, so these are specifically proposed as methods on the Unix extension trait.

Detailed design

The following two methods will be added to the std::os::unix::process::CommandExt trait:

/// Performs all the required setup by this `Command`, followed by calling the
/// `execvp` syscall.
///
/// On success this function will not return, and otherwise it will return an
/// error indicating why the exec (or another part of the setup of the
/// `Command`) failed.
///
/// Note that the process may be in a "broken state" if this function returns in
/// error. For example the working directory, environment variables, signal
/// handling settings, various user/group information, or aspects of stdio
/// file descriptors may have changed. If a "transactional spawn" is required to
/// gracefully handle errors it is recommended to use the cross-platform `spawn`
/// instead.
fn exec(&mut self) -> io::Error;

/// Schedules a closure to be run just before the `exec` function is invoked.
///
/// This closure will be run in the context of the child process after the
/// `fork` and other aspects such as the stdio file descriptors and working
/// directory have successfully been changed. Note that this is often a very
/// constrained environment where normal operations like `malloc` or acquiring a
/// mutex are not guaranteed to work (due to other threads perhaps still running
/// when the `fork` was run).
///
/// The closure is allowed to return an I/O error whose OS error code will be
/// communicated back to the parent and returned as an error from when the spawn
/// was requested.
///
/// Multiple closures can be registered and they will be called in order of
/// their registration. If a closure returns `Err` then no further closures will
/// be called and the spawn operation will immediately return with a failure.
fn before_exec<F>(&mut self, f: F) -> &mut Self
    where F: FnOnce() -> io::Result<()> + Send + Sync + 'static;

The exec function is relatively straightforward: it is essentially the entire spawn operation minus the fork. The stdio handles will be inherited by default if not otherwise configured. Note that a configuration of piped will likely just end up with a broken half of a pipe on one of the file descriptors.

The before_exec function has extra-restrictive bounds to preserve the same qualities that the Command type has (notably Send, Sync, and 'static). This also happens after all other configuration has happened to ensure that libraries can take advantage of the other operations on Command without having to reimplement them manually in some circumstances.
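As a usage sketch (written against the signatures above, and assuming the libc crate for the raw setsid call):

extern crate libc;

use std::io;
use std::process::Command;
use std::os::unix::process::CommandExt;

fn exec_in_new_session(cmd: &mut Command) -> io::Error {
    cmd.before_exec(|| {
        // Runs in the child between fork and exec; this environment is
        // constrained, so stay async-signal-safe (no allocation, no locks).
        if unsafe { libc::setsid() } == -1 {
            return Err(io::Error::last_os_error());
        }
        Ok(())
    })
    .exec() // on success this never returns
}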

Drawbacks

This change could be a breaking change to Command, as it will no longer implement all marker traits by default (due to it containing closure trait objects). While the common marker traits are handled here, it's possible that there are some traits in use in the wild which this could break.

Much of the functionality which may initially get funneled through before_exec may actually be best implemented as functions in the standard library itself. It’s likely that many operations are well known across unixes and aren’t niche enough to stay outside the standard library.

Alternatives

Instead of souping up Command, the type could provide accessors to all of the configuration that it contains. This would enable this sort of functionality to be built on crates.io first, instead of requiring it to be built into the standard library from the start. Note that it may want to end up in the standard library regardless, however.

Unresolved questions

  • Is it appropriate to run callbacks just before the exec? Should they instead be run before any standard configuration like stdio has run?
  • Is it possible to provide “transactional semantics” to the exec function such that it is safe to recover from? Perhaps it’s worthwhile to provide partial transactional semantics in the form of “this can be recovered from so long as all stdio is inherited”.

Summary

Improve the target-specific dependency experience in Cargo by leveraging the same #[cfg] syntax that Rust has.

Motivation

Currently in Cargo it's relatively painful to list target-specific dependencies. This can only be done by listing out the entire target string, as opposed to using the more convenient #[cfg] annotations that Rust source code has access to. Consequently a Windows-specific dependency ends up having to be defined for four triples: {i686,x86_64}-pc-windows-{gnu,msvc}, and this is unfortunately not forwards-compatible either!

As a result most crates end up unconditionally depending on target-specific dependencies and rely on those crates themselves having the relevant #[cfg] so that they only compile on the right platforms. This experience leads to excessive downloads, excessive compilations, and overall “unclean” methods of having a platform-specific dependency.

This RFC proposes leveraging the same familiar syntax used in Rust itself to define these dependencies.

Detailed design

The target-specific dependency syntax in Cargo will be expanded to include not only full target strings but also #[cfg] expressions:

[target."cfg(windows)".dependencies]
winapi = "0.2"

[target."cfg(unix)".dependencies]
unix-socket = "0.4"

[target.'cfg(target_os = "macos")'.dependencies]
core-foundation = "0.2"

Specifically, the “target” listed here is considered special if it starts with the string “cfg(” and ends with “)”. If this is not true then Cargo will continue to treat it as an opaque string and pass it to the compiler via --target (Cargo’s current behavior).

Cargo will implement its own parser for the cfg syntax; it will not rely on the compiler itself. The grammar, however, will be the same as the compiler's for now:

cfg := "cfg(" meta-item * ")"
meta-item := ident |
             ident "=" string |
             ident "(" meta-item * ")"

Like Rust, Cargo will implement the any, all, and not operators for the ident(list) syntax. The last missing piece is simply understanding what ident and ident = "string" values are defined for a particular target. To learn this information Cargo will query the compiler via a new command line flag:

$ rustc --print cfg
unix
target_os="apple"
target_pointer_width="64"
...

$ rustc --print cfg --target i686-pc-windows-msvc
windows
target_os="windows"
target_pointer_width="32"
...

The --print cfg command line flag will print out all built-in #[cfg] directives defined by the compiler onto standard output. Each cfg will be printed on its own line to allow external parsing. Cargo will use this to call the compiler once (or twice if an explicit target is requested) when resolution starts, and it will use these key/value pairs to execute the cfg queries in the dependency graph being constructed.
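A rough sketch (not Cargo's actual implementation) of evaluating a parsed cfg expression against those key/value pairs:

use std::collections::HashSet;

enum Cfg {
    Name(String),          // e.g. `unix`
    Pair(String, String),  // e.g. `target_os = "windows"`
    Any(Vec<Cfg>),
    All(Vec<Cfg>),
    Not(Box<Cfg>),
}

// e.g. `cfg(all(unix, not(target_os = "macos")))` parses to
// All([Name("unix"), Not(Pair("target_os", "macos"))]).
fn eval(cfg: &Cfg,
        names: &HashSet<String>,
        pairs: &HashSet<(String, String)>) -> bool {
    match *cfg {
        Cfg::Name(ref n) => names.contains(n),
        Cfg::Pair(ref k, ref v) => pairs.contains(&(k.clone(), v.clone())),
        Cfg::Any(ref cs) => cs.iter().any(|c| eval(c, names, pairs)),
        Cfg::All(ref cs) => cs.iter().all(|c| eval(c, names, pairs)),
        Cfg::Not(ref c) => !eval(c, names, pairs),
    }
}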

Drawbacks

This is not a forwards-compatible extension to Cargo, so this will break compatibility with older Cargo versions. If a crate is published with a Cargo that supports this cfg syntax, it will not be buildable by a Cargo that does not understand the cfg syntax. The registry itself is prepared to handle this sort of situation as the “target” string is just opaque, however.

This can be perhaps mitigated via a number of strategies:

  1. Have crates.io reject the cfg syntax until the implementation has landed on stable Cargo for at least one full cycle. Applications, path dependencies, and git dependencies would still be able to use this syntax, but crates.io wouldn’t be able to leverage it immediately.
  2. Crates on crates.io wishing for compatibility could simply hold off on using this syntax until this implementation has landed in stable Cargo for at least a full cycle. This would mean that everyone could use it immediately, but “big crates” would be advised to hold off for compatibility for a while.
  3. Have crates.io rewrite dependencies as they’re published. If you publish a crate with a cfg(windows) dependency then crates.io could expand this to all known triples which match cfg(windows) when storing the metadata internally. This would mean that crates using cfg syntax would continue to be compatible with older versions of Cargo so long as they were only used as a crates.io dependency.

For ease of implementation this RFC would recommend strategy (1) to help ease this into the ecosystem without too much pain in terms of compatibility or implementation.

Alternatives

Instead of using Rust’s #[cfg] syntax, Cargo could support other options such as patterns over the target string. For example it could accept something along the lines of:

[target."*-pc-windows-*".dependencies]
winapi = "0.2"

[target."*-apple-*".dependencies]
core-foundation = "0.2"

While certainly more flexible than today’s implementation, it unfortunately is relatively error prone and doesn’t cover all the use cases one may want:

  • Matching against a string isn’t necessarily guaranteed to be robust moving forward into the future.
  • This doesn’t support negation and other operators, e.g. all(unix, not(osx)).
  • This doesn’t support meta-families like cfg(unix).

Another possible alternative would be to have Cargo supply pre-defined families such as windows and unix in addition to the above pattern matching, but this eventually just moves toward the territory of what #[cfg] already provides, without ever quite getting there.

Unresolved questions

  • This is not the only known change to Cargo that is not forwards-compatible, so it may be best to lump them all together into one Cargo release instead of spreading them out over time. Should this RFC be blocked on those other ideas, though? (Note that they have not yet been formed into RFCs.)

Summary

Add a standard allocator interface and support for user-defined allocators, with the following goals:

  1. Allow libraries (in libstd and elsewhere) to be generic with respect to the particular allocator, to support distinct, stateful, per-container allocators.

  2. Require clients to supply metadata (such as block size and alignment) at the allocation and deallocation sites, to ensure hot-paths are as efficient as possible.

  3. Provide high-level abstraction over the layout of an object in memory.

Regarding GC: We plan to allow future allocators to integrate themselves with a standardized reflective GC interface, but leave specification of such integration for a later RFC. (The design describes a way to add such a feature in the future while ensuring that clients do not accidentally opt-in and risk unsound behavior.)

Motivation

As noted in RFC PR 39 (and reiterated in RFC PR 244), modern general purpose allocators are good, but due to the design tradeoffs they must make, cannot be optimal in all contexts. (It is worthwhile to also read discussion of this claim in papers such as Reconsidering Custom Malloc.)

Therefore, the standard library should allow clients to plug in their own allocator for managing memory.

Allocators are used in C++ system programming

The typical reasons given for use of custom allocators in C++ are among the following:

  1. Speed: A custom allocator can be tailored to the particular memory usage profiles of one client. This can yield advantages such as:

    • A bump-pointer based allocator, when available, is faster than calling malloc.

    • Adding memory padding can reduce/eliminate false sharing of cache lines.

  2. Stability: By segregating different sub-allocators and imposing hard memory limits upon them, one has a better chance of handling out-of-memory conditions.

    If everything comes from a single global heap, it becomes much harder to handle out-of-memory conditions because by the time the handler runs, it is almost certainly going to be unable to allocate any memory for its own work.

  3. Instrumentation and debugging: One can swap in a custom allocator that collects data such as number of allocations, or time for requests to be serviced.

Allocators should feel “rustic”

In addition, for Rust we want an allocator API design that leverages the core type machinery and language idioms (e.g. using Result to propagate dynamic error conditions), and provides premade functions for common patterns for allocator clients (such as allocating either single instances of a type, or arrays of some types of dynamically-determined length).

Garbage Collection integration

Finally, we want our allocator design to allow for a garbage collection (GC) interface to be added in the future.

At the very least, we do not want to accidentally disallow GC by choosing an allocator API that is fundamentally incompatible with it.

(However, this RFC does not actually propose a concrete solution for how to integrate allocators with GC.)

Detailed design

The Allocator trait at a glance

The source code for the Allocator trait prototype is provided in an appendix. But since that section is long, here we summarize the high-level points of the Allocator API.

(See also the walk thru section, which actually links to individual sections of code.)

  • Basic implementation of the trait requires just two methods (alloc and dealloc). You can get an initial implementation off the ground with relatively little effort.

  • All methods that can fail to satisfy a request return a Result (rather than building in an assumption that they panic or abort).

    • Furthermore, allocator implementations are discouraged from directly panicking or aborting on out-of-memory (OOM) during calls to allocation methods; instead, clients that do wish to report that OOM occurred via a particular allocator can do so via the Allocator::oom() method.

    • OOM is not the only type of error that may occur in general; allocators can inject more specific error types to indicate why an allocation failed.

  • The metadata for any allocation is captured in a Layout abstraction. This type carries (at minimum) the size and alignment requirements for a memory request.

    • The Layout type provides a large family of functional construction methods for building up the description of how memory is laid out (see the sketch after this list).

      • Any sized type T can be mapped to its Layout, via Layout::new::<T>(),

      • Heterogeneous structures: e.g. layout1.extend(layout2),

      • Homogeneous array types: layout.repeat(n) (for n: usize),

      • There are packed and unpacked variants for the latter two methods.

    • Helper Allocator methods like fn alloc_one and fn alloc_array allow client code to interact with an allocator without ever directly constructing a Layout.

  • Once an Allocator implementor has the fn alloc and fn dealloc methods working, it can provide overrides of the other methods, providing hooks that take advantage of specific details of how your allocator is working underneath the hood.

    • In particular, the interface provides a few ways to let clients potentially reuse excess memory associated with a block

    • fn realloc is a common pattern (where the client hopes that the method will reuse the original memory when satisfying the realloc request).

    • fn alloc_excess and fn usable_size provide an alternative pattern, where your allocator tells the client about the excess memory provided to satisfy a request, and the client can directly expand into that excess memory, without doing round-trip requests through the allocator itself.
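As promised, a sketch of building up a composite layout with those constructors: describing a Header followed by n elements. (Header and Elem are illustrative types; the Option-returning signatures of repeat and extend are approximated from the prototype in the appendix.)

struct Header { len: usize }
struct Elem { val: u64 }

// Assumes the prototype's `Layout` (see the appendix) is in scope.
fn header_then_array(n: usize) -> Option<(Layout, usize)> {
    let header = Layout::new::<Header>();
    let (elems, _elem_stride) = Layout::new::<Elem>().repeat(n)?;
    // `extend` pads `header` to `elems`' alignment and returns the
    // combined layout plus the offset at which the array begins.
    header.extend(elems)
}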

Semantics of allocators and their memory blocks

In general, an allocator provides access to a memory pool that owns some amount of backing storage. The pool carves off chunks of that storage and hands them out, via the allocator, as individual blocks of memory to service client requests. (A “client” here is usually some container library, like Vec or HashMap, that has been suitably parameterized so that it has an A:Allocator type parameter.)

So, an interaction between a program, a collection library, and an allocator might look like this:

If you cannot see the SVG linked here, try the ASCII art version in the appendix. Also, if you have suggestions for changes to the SVG, feel free to write them as a comment in that appendix (but be sure to make clear that you are suggesting a change to the SVG).

In general, an allocator might be the backing memory pool itself; or an allocator might merely be a handle that references the memory pool. In the former case, when the allocator goes out of scope or is otherwise dropped, the memory pool is dropped as well; in the latter case, dropping the allocator has no effect on the memory pool.

  • One allocator that acts as a handle is the global heap allocator, whose associated pool is the low-level #[allocator] crate.

  • Another allocator that acts as a handle is a &'a Pool, where Pool is some structure implementing a sharable backing store. The big example section shows an instance of this.

  • An allocator that is its own memory pool would be a type analogous to Pool that implements the Allocator interface directly, rather than via &'a Pool.

  • A case in the middle of the two extremes might be something like an allocator of the form Rc<RefCell<Pool>>. This reflects shared ownership between a collection of allocator handles: dropping one handle will not drop the pool as long as at least one other handle remains, but dropping the last handle will drop the pool itself.

    FIXME: RefCell<Pool> is not going to work with the allocator API envisaged here; see comment from gankro. We will need to address this (perhaps just by pointing out that it is illegal and suggesting a standard pattern to work around it) before this RFC can be accepted.

A client that is generic over all possible A:Allocator instances cannot know which of the above cases it falls in. This has consequences in terms of the restrictions that must be met by client code interfacing with an allocator, which we discuss in a later section on lifetimes.

Example Usage

Let's jump into a demo. Here is a (super-dumb) bump-allocator that uses the Allocator trait.

Implementing the Allocator trait

First, the bump-allocator definition itself: each such allocator will have its own name (for error reports from OOM), start and limit pointers (ptr and end, respectively) to the backing storage it is allocating into, as well as the byte alignment (align) of that storage, and an avail: AtomicPtr<u8> for the cursor tracking how much we have allocated from the backing storage. (The avail field is an atomic because eventually we want to try sharing this demo allocator across scoped threads.)

#[derive(Debug)]
pub struct DumbBumpPool {
    name: &'static str,
    ptr: *mut u8,
    end: *mut u8,
    avail: AtomicPtr<u8>,
    align: usize,
}

The initial implementation is pretty straightforward: just immediately allocate the whole pool's backing storage.

(If we wanted to be really clever we might layer this type on top of another allocator. For this demo I want to try to minimize cleverness, so we will use heap::allocate to grab the backing storage instead of taking an Allocator of our own.)

impl DumbBumpPool {
    pub fn new(name: &'static str,
               size_in_bytes: usize,
               start_align: usize) -> DumbBumpPool {
        unsafe {
            let ptr = heap::allocate(size_in_bytes, start_align);
            if ptr.is_null() { panic!("allocation failed."); }
            let end = ptr.offset(size_in_bytes as isize);
            DumbBumpPool {
                name: name,
                ptr: ptr, end: end, avail: AtomicPtr::new(ptr),
                align: start_align
            }
        }
    }
}

Since clients are not allowed to have blocks that outlive their associated allocator (see the lifetimes section), it is sound for us to always drop the backing storage for an allocator when the allocator itself is dropped (regardless of what sequence of alloc/dealloc interactions occurred with the allocator’s clients).

impl Drop for DumbBumpPool {
    fn drop(&mut self) {
        unsafe {
            let size = self.end as usize - self.ptr as usize;
            heap::deallocate(self.ptr, size, self.align);
        }
    }
}

Here are some other design choices of note:

  • Our Bump Allocator is going to use a most simple-minded deallocation policy: calls to fn dealloc are no-ops. Instead, every request takes up fresh space in the backing storage, until the pool is exhausted. (This was one reason I use the word “Dumb” in its name.)

  • Since we want to be able to share the bump-allocator amongst multiple (lifetime-scoped) threads, we will implement the Allocator interface as a handle pointing to the pool; in this case, a simple reference.

  • Since the whole point of this particular bump-allocator is to be shared across threads (otherwise there would be no need to use AtomicPtr for the avail field), we will want to implement the (unsafe) Sync trait for it (doing so signals that it is safe to send &DumbBumpPool to other threads).

Here is that impl Sync.

/// Note of course that this impl implies we must review all other
/// code for DumbBumpPool even more carefully.
unsafe impl Sync for DumbBumpPool { }

Here is the demo implementation of Allocator for the type.

unsafe impl<'a> Allocator for &'a DumbBumpPool {
    unsafe fn alloc(&mut self, layout: alloc::Layout) -> Result<Address, AllocErr> {
        let align = layout.align();
        let size = layout.size();

        let mut curr_addr = self.avail.load(Ordering::Relaxed);
        loop {
            let curr = curr_addr as usize;
            let (sum, oflo) = curr.overflowing_add(align - 1);
            let curr_aligned = sum & !(align - 1);
            // saturating_sub avoids underflow when the aligned cursor
            // has already passed the end of the pool.
            let remaining = (self.end as usize).saturating_sub(curr_aligned);
            if oflo || remaining < size {
                return Err(AllocErr::Exhausted { request: layout.clone() });
            }

            let curr_aligned = curr_aligned as *mut u8;
            let new_curr = curr_aligned.offset(size as isize);

            let attempt = self.avail.compare_and_swap(curr_addr, new_curr, Ordering::Relaxed);
            // If the allocation attempt hits interference ...
            if curr_addr != attempt {
                curr_addr = attempt;
                continue; // .. then try again
            } else {
                println!("alloc finis ok: 0x{:x} size: {}", curr_aligned as usize, size);
                return Ok(curr_aligned);
            }
        }
    }

    unsafe fn dealloc(&mut self, _ptr: Address, _layout: alloc::Layout) {
        // this bump-allocator just no-op's on dealloc
    }

    fn oom(&mut self, err: AllocErr) -> ! {
        let remaining = self.end as usize - self.avail.load(Ordering::Relaxed) as usize;
        panic!("exhausted memory in {} on request {:?} with avail: {}; self: {:?}",
               self.name, err, remaining, self);
    }

}

(Niko Matsakis has pointed out that this particular allocator might avoid interference errors by using fetch-and-add rather than compare-and-swap. The devil’s in the details as to how one might accomplish that while still properly adjusting for alignment; in any case, the overall point still holds in cases outside of this specific demo.)
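For the record, here is a hedged sketch of that fetch-and-add variant, assuming the pool's cursor were changed to a hypothetical avail: AtomicUsize. It over-reserves by align - 1 bytes so that the claimed region always contains a suitably aligned block, trading some waste per request (and a leaked reservation on failure) for the absence of a retry loop:

unsafe fn alloc(&mut self, layout: alloc::Layout) -> Result<Address, AllocErr> {
    let (size, align) = (layout.size(), layout.align());
    // Reserve enough that any alignment adjustment still fits.
    let want = match size.checked_add(align - 1) {
        Some(w) => w,
        None => return Err(AllocErr::Exhausted { request: layout.clone() }),
    };
    let start = self.avail.fetch_add(want, Ordering::Relaxed);
    if start.checked_add(want).map_or(true, |end| end > self.end as usize) {
        // The cursor stays bumped here; a real allocator would need
        // to cope with (or undo) that.
        return Err(AllocErr::Exhausted { request: layout.clone() });
    }
    let aligned = (start + align - 1) & !(align - 1);
    Ok(aligned as *mut u8)
}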

And that is it; we are done with our allocator implementation.

Using an A:Allocator from the client side

We assume that Vec has been extended with a new_in method that takes an allocator argument that it uses to satisfy its allocation requests.

fn demo_alloc<A1:Allocator, A2:Allocator, F:Fn()>(a1:A1, a2: A2, print_state: F) {
    let mut v1 = Vec::new_in(a1);
    let mut v2 = Vec::new_in(a2);
    println!("demo_alloc, v1; {:?} v2: {:?}", v1, v2);
    for i in 0..10 {
        v1.push(i as u64 * 1000);
        v2.push(i as u8);
        v2.push(i as u8);
    }
    println!("demo_alloc, v1; {:?} v2: {:?}", v1, v2);
    print_state();
    for i in 10..100 {
        v1.push(i as u64 * 1000);
        v2.push(i as u8);
        v2.push(i as u8);
    }
    println!("demo_alloc, v1.len: {} v2.len: {}", v1.len(), v2.len());
    print_state();
    for i in 100..1000 {
        v1.push(i as u64 * 1000);
        v2.push(i as u8);
        v2.push(i as u8);
    }
    println!("demo_alloc, v1.len: {} v2.len: {}", v1.len(), v2.len());
    print_state();
}

fn main() {
    use std::thread::catch_panic;

    if let Err(panicked) = catch_panic(|| {
        let alloc = DumbBumpPool::new("demo-bump", 4096, 1);
        demo_alloc(&alloc, &alloc, || println!("alloc: {:?}", alloc));
    }) {
        match panicked.downcast_ref::<String>() {
            Some(msg) => {
                println!("DumbBumpPool panicked: {}", msg);
            }
            None => {
                println!("DumbBumpPool panicked");
            }
        }
    }

    // // The below will be (rightly) rejected by compiler when
    // // all pieces are properly in place: It is not valid to
    // // have the vector outlive the borrowed allocator it is
    // // referencing.
    //
    // let v = {
    //     let alloc = DumbBumpPool::new("demo2", 4096, 1);
    //     let mut v = Vec::new_in(&alloc);
    //     for i in 1..4 { v.push(i); }
    //     v
    // };

    let alloc = DumbBumpPool::new("demo-bump", 4096, 1);
    for i in 0..100 {
        let r = ::std::thread::scoped(|| {
            let mut v = Vec::new_in(&alloc);
            for j in 0..10 {
                v.push(j);
            }
        });
    }

    println!("got here");
}

And that’s all to the demo, folks.

What about standard library containers?

The intention of this RFC is that the Rust standard library will be extended with parametric allocator support: Vec, HashMap, etc. should all eventually be extended with the ability to use an alternative allocator for their backing storage.

However, this RFC does not prescribe when or how this should happen.

Under the design of this RFC, allocator parameters are specified via a generic type parameter on the container type. This strongly implies that Vec<T> and HashMap<K, V> will need to be extended with an allocator type parameter, i.e. Vec<T, A: Allocator> and HashMap<K, V, A: Allocator>, along the lines of the sketch below.
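In sketch form (placeholder definitions stand in for the real trait and for RawVec; the HeapAllocator default name is hypothetical, and defaults are discussed just below):

trait Allocator { /* fn alloc, fn dealloc, ... */ }
struct HeapAllocator;
impl Allocator for HeapAllocator {}

struct RawVec<T, A: Allocator> { ptr: *mut T, cap: usize, alloc: A }

pub struct Vec<T, A: Allocator = HeapAllocator> {
    buf: RawVec<T, A>,
    len: usize,
}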

There are two reasons why such extension is left to later work, after this RFC.

Default type parameter fallback

On its own, such a change would be backwards incompatible (i.e. a huge breaking change), and would also be just plain inconvenient for typical use cases. Therefore, the newly added type parameters will almost certainly require defaults: Vec<T, A: Allocator = HeapAllocator> and HashMap<K, V, A: Allocator = HeapAllocator>.

Default type parameters themselves, in the context of type definitions, are a stable part of the Rust language.

However, the exact semantics of how default type parameters interact with inference is still being worked out (in part because allocators are a motivating use case), as one can see by reading the following:

  • RFC 213, “Finalize defaulted type parameters”: https://github.com/rust-lang/rfcs/blob/master/text/0213-defaulted-type-params.md

  • Tracking Issue for RFC 213: Default Type Parameter Fallback: https://github.com/rust-lang/rust/issues/27336

  • Feature gate defaulted type parameters appearing outside of types: https://github.com/rust-lang/rust/pull/30724

Fully general container integration needs Dropck Eyepatch

The previous problem was largely one of programmer ergonomics. However, there is also a subtle soundness issue that arises due to a current implementation artifact.

Standard library types like Vec<T> and HashMap<K,V> allow instantiating the generic parameters T, K, V with types holding lifetimes that do not strictly outlive that of the container itself. (I will refer to such instantiations of Vec and HashMap as “same-lifetime instances” as a shorthand in this discussion.)

Same-lifetime instance support is currently implemented for Vec and HashMap via an unstable attribute that is too coarse-grained. Therefore, we cannot soundly add the allocator parameter to Vec and HashMap while also continuing to allow same-lifetime instances without first addressing this overly coarse attribute. I have an open RFC to address this, the “Dropck Eyepatch” RFC; that RFC explains in more detail why this problem arises, using allocators as a specific motivating use case.

  • Concrete code illustrating this exact example (part of Dropck Eyepatch RFC): https://github.com/pnkfelix/rfcs/blob/dropck-eyepatch/text/0000-dropck-param-eyepatch.md#example-vect-aallocatordefaultallocator

  • Nonparametric dropck RFC https://github.com/rust-lang/rfcs/blob/master/text/1238-nonparametric-dropck.md

Standard library containers conclusion

Rather than wait for the above issues to be resolved, this RFC proposes that we at least stabilize the Allocator trait interface; then we will at least have a starting point upon which to prototype standard library integration.

Allocators and lifetimes

As mentioned above, allocators provide access to a memory pool. An allocator can be the pool (in the sense that the allocator owns the backing storage that represents the memory blocks it hands out), or an allocator can just be a handle that points at the pool.

Some pools have indefinite extent. An example of this is the global heap allocator, requesting memory directly from the low-level #[allocator] crate. Clients of an allocator with such a pool need not think about how long the allocator lives; instead, they can just freely allocate blocks, use them at will, and deallocate them at arbitrary points in the future. Memory blocks that come from such a pool will leak if they are not explicitly deallocated.

Other pools have limited extent: they are created, they build up infrastructure to manage their blocks of memory, and at some point, such pools are torn down. Memory blocks from such a pool may or may not be returned to the operating system during that tearing down.

There is an immediate question for clients of an allocator with the latter kind of pool (i.e. one of limited extent): should the client spend time deallocating such blocks, and if so, when?

Again, note:

  • generic clients (i.e. that accept any A:Allocator) cannot know what kind of pool they have, or how that pool relates to the allocator they are given,

  • dropping the client’s allocator may or may not imply the dropping of the pool itself!

That is, code written to a specific Allocator implementation may be able to make assumptions about the relationship between the memory blocks and the allocator(s), but the generic code we expect the standard library to provide cannot make such assumptions.

To satisfy the above scenarios in a sane, consistent, general fashion, the Allocator trait assumes/requires all of the following conditions. (Note: this list of conditions uses the phrases “should”, “must”, and “must not” in a formal manner, in the style of IETF RFC 2119.)

  1. (for allocator impls and clients): in the absence of other information (e.g. specific allocator implementations), all blocks from a given pool have lifetime equivalent to the lifetime of the pool.

    This implies that if a client is going to read from, write to, or otherwise manipulate a memory block, the client must do so before its associated pool is torn down.

    (It also implies the converse: if a client can prove that the pool for an allocator is still alive, then it can continue to work with a memory block from that allocator even after the allocator is dropped.)

  2. (for allocator impls): an allocator must not outlive its associated pool.

    All clients can assume this in their code.

    (This constraint provides generic clients the preconditions they need to satisfy the first condition. In particular, even though clients do not generally know what kind of pool is associated with their allocator, they can conservatively assume that all blocks will live at least as long as the allocator itself.)

  3. (for allocator impls and clients): all clients of an allocator should eventually call the dealloc method on every block they want freed (otherwise, memory may leak).

    However, allocator implementations must remain sound even if this condition is not met: If dealloc is not invoked for all blocks and this condition is somehow detected, then an allocator can panic (or otherwise signal failure), but that sole violation must not cause undefined behavior.

    (This constraint is to encourage generic client authors to write code that will not leak memory when instantiated with allocators of indefinite extent, such as the global heap allocator.)

  4. (for allocator impls): moving an allocator value must not invalidate its outstanding memory blocks.

    All clients can assume this in their code.

    So if a client allocates a block from an allocator (call it a1) and then a1 moves to a new place (e.g. via let a2 = a1;), then it remains sound for the client to deallocate that block via a2.

    Note that this implies that it is not sound to implement an allocator that embeds its own pool structurally inline.

    E.g. this is not a legal allocator:

    struct MegaEmbedded { pool: [u8; 1024*1024], cursor: usize, ... }
    impl Allocator for MegaEmbedded { ... } // INVALID IMPL

    Such an impl is simply unreasonable (at least if one intends to satisfy requests by returning pointers into self.pool).

    Note that an allocator that owns its pool indirectly (i.e. does not have the pool’s state embedded in the allocator) is fine:

    struct MegaIndirect { pool: *mut [u8; 1024*1024], cursor: usize, ... }
    impl Allocator for MegaIndirect { ... } // OKAY

    (I originally claimed that impl Allocator for &mut MegaEmbedded would also be a legal example of an allocator that is an indirect handle to an unembedded pool, but others pointed out that handing out the addresses pointing into that embedded pool could end up violating our aliasing rules for &mut. I obviously did not expect that outcome; I would be curious to see what the actual design space is here.)

  5. (for allocator impls and clients) if an allocator is cloneable, the client can assume that all clones are interchangeably compatible in terms of their memory blocks: if allocator a2 is a clone of a1, then one can allocate a block from a1 and return it to a2, or vice versa, or use a2.realloc on the block, et cetera.

    This essentially means that any cloneable allocator must be a handle indirectly referencing a pool of some sort. (Though do remember that such handles can collectively share ownership of their pool, such as illustrated in the Rc<RefCell<Pool>> example given earlier.)

    (Note: one might be tempted to further conclude that this also implies that allocators implementing Copy must have pools of indefinite extent. While this seems reasonable for Rust as it stands today, I am slightly worried whether it would continue to hold e.g. in a future version of Rust with something like Gc<GcPool>: Copy, where the GcPool and its blocks are reclaimed (via finalization) sometime after being determined to be globally unreachable. Then again, perhaps it would be better to simply say “we will not support that use case for the allocator API”, so that clients would be able to employ the reasoning outlined at the outset of this paragraph.)
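
To make the fifth condition concrete, here is a minimal sketch (names hypothetical) of the kind of cloneable handle it implies:

use std::rc::Rc;
use std::cell::RefCell;

struct Pool; // stand-in for whatever state actually manages the blocks

#[derive(Clone)]
struct PoolHandle {
    pool: Rc<RefCell<Pool>>, // every clone shares ownership of one pool
}

// Cloning a `PoolHandle` yields another handle to the *same* pool, so a
// block allocated through one clone may be returned through any other.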

A walk through the Allocator trait

Role-Based Type Aliases

Allocation code often needs to deal with values that boil down to a usize in the end. But there are distinct roles (e.g. “size”, “alignment”) that such values play, and I decided those roles would be worth hard-coding into the method signatures.

  • Therefore, I made type aliases for Size, Capacity, Alignment, and Address.

Basic implementation

An instance of an allocator has many methods, but an implementor of the trait need only provide two method bodies: alloc and dealloc.

(This is only somewhat analogous to the Iterator trait in Rust: it is currently very uncommon to override any of Iterator’s defaulted methods, with fn next being the sole method one must provide. However, I expect it will be much more common for Allocator implementations to override at least some of the other methods, like fn realloc.)

The alloc method returns an Address when it succeeds, and dealloc takes such an address as its input. But the client must also provide metadata for the allocated block like its size and alignment. This is encapsulated in the Layout argument to alloc and dealloc.
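
As a hedged sketch (mine, not an example this RFC prescribes), here is roughly what a minimal implementation looks like; only alloc and dealloc are written out, and every other method keeps its default body:

struct Bump<'a> {
    region: &'a mut [u8], // backing storage owned elsewhere (see condition 4)
    used: usize,          // bytes handed out so far
}

unsafe impl<'a> Allocator for Bump<'a> {
    unsafe fn alloc(&mut self, layout: Layout) -> Result<Address, AllocErr> {
        let base = self.region.as_mut_ptr() as usize;
        // round the cursor up to the requested alignment (a power of two)
        let start = (base + self.used + layout.align() - 1) & !(layout.align() - 1);
        let end = start + layout.size();
        if end > base + self.region.len() {
            return Err(AllocErr::Exhausted { request: layout });
        }
        self.used = end - base;
        Ok(start as Address)
    }

    unsafe fn dealloc(&mut self, _ptr: Address, _layout: Layout) {
        // a bump allocator reclaims nothing until the whole region is reset
    }
}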

Memory layouts

A Layout just carries the metadata necessary for satisfying an allocation request. Its (current, private) representation is just a size and alignment.

The more interesting thing about Layout is the family of public methods associated with it for building new layouts via composition; these are shown in the layout api.
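
For example, the layout of a record consisting of a u64 header followed by 128 bytes of payload can be composed like so (a sketch using only the methods just mentioned):

let header = Layout::new::<u64>();
let payload = Layout::array::<u8>(128).unwrap();
let (record, payload_offset) = header.extend(payload).unwrap();
// `record` has size 136 and alignment 8; `payload_offset` is 8, i.e. the
// byte array begins immediately after the aligned u64 header.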

Reallocation Methods

Of course, real-world allocation often needs more than just alloc/dealloc: in particular, one often wants to avoid extra copying if the existing block of memory can be conceptually expanded in place to meet new allocation needs. In other words, we want realloc, plus alternatives to it (alloc_excess) that allow clients to avoid round-tripping through the allocator API.

For this, the memory reuse family of methods is appropriate.

Type-based Helper Methods

Some readers might skim over the Layout API and immediately say “yuck, all I wanted to do was allocate some nodes for a tree-structure and let my clients choose where the backing memory comes from! Why do I have to wrestle with this Layout business?”

I agree with the sentiment; that’s why the Allocator trait provides a family of methods capturing common usage patterns, for example, a.alloc_one::<T>() will return a Unique<T> (or error).
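
A sketch of that convenience path, where Node is a hypothetical payload type:

unsafe {
    if let Ok(node) = a.alloc_one::<Node>() {
        // `node` is a Unique<Node> pointing at uninitialized storage;
        // initialize it, use it, and eventually hand it back:
        a.dealloc_one(node);
    }
}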

Unchecked variants

Almost all of the methods above return Result, and guarantee some amount of input validation. (This is largely because I observed code duplication doing such validation on the client side; or worse, such validation accidentally missing.)

However, some clients will want to bypass such checks (and do it without risking undefined behavior, namely by ensuring the method preconditions hold via local invariants in their container type).

For these clients, the Allocator trait provides “unchecked” variants of nearly all of its methods; so a.alloc_unchecked(layout) will return an Option<Address> (where None corresponds to allocation failure).

The idea here is that Allocator implementors are encouraged to streamline the implementations of such methods by assuming that all of the preconditions hold.

  • However, to ease initial impl Allocator development for a given type, all of the unchecked methods have default implementations that call out to their checked counterparts.

  • (In other words, “unchecked” is in some sense a privilege being offered to impl’s; but there is no guarantee that an arbitrary impl takes advantage of the privilege.)
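
Within the trait definition, the shape of such a defaulted body is roughly this (a sketch of the pattern, not the normative definition):

unsafe fn alloc_unchecked(&mut self, layout: Layout) -> Option<Address> {
    // default: defer to the checked method, discarding the error detail;
    // a streamlined impl would instead assume all preconditions hold
    self.alloc(layout).ok()
}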

Object-oriented Allocators

Finally, we get to object-oriented programming.

In general, we expect allocator-parametric code to opt not to use trait objects to generalize over allocators, but instead to use generic types and instantiate those types with specific concrete allocators.

Nonetheless, it is an option to write Box<Allocator> or &Allocator.

  • (The allocator methods that are not object-safe, like fn alloc_one<T>(&mut self), have a clause where Self: Sized to ensure that their presence does not cause the Allocator trait as a whole to become non-object-safe.)
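
A sketch of what such object-oriented use looks like:

// Generalizing over allocators via a trait object rather than a type
// parameter. Object-safe methods like `alloc` work through &mut Allocator;
// `alloc_one::<T>` does not, because of its `where Self: Sized` clause.
fn allocate_via_object(a: &mut Allocator, layout: Layout)
                       -> Result<Address, AllocErr> {
    unsafe { a.alloc(layout) }
}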

Why this API

Here are some quick points about how this API was selected.

Why not just free(ptr) for deallocation?

As noted in RFC PR 39 (and reiterated in RFC PR 244), the basic malloc interface {malloc(size) -> ptr, free(ptr), realloc(ptr, size) -> ptr} is lacking in a number of ways: malloc lacks the ability to request a particular alignment, and realloc lacks the ability to express a copy-free “reuse the input, or do nothing at all” request. Another problem with the malloc interface is that it burdens the allocator with tracking the sizes of allocated data and re-extracting the allocated size from the ptr in free and realloc calls (the latter can be very cheap, but there is still no reason to pay that cost in a language like Rust where the relevant size is often already immediately available as a compile-time constant).

Therefore, in the name of (potential best-case) speed, we want to require client code to provide the metadata like size and alignment to both the allocation and deallocation call sites.

Why not just alloc/dealloc (or alloc/dealloc/realloc)?

  • The alloc_one/dealloc_one and alloc_array/dealloc_array methods capture a very common pattern for allocation of memory blocks where a simple value or array type is being allocated.

  • The alloc_array_unchecked and dealloc_array_unchecked methods likewise capture a common pattern, but are “less safe” in that they put more of an onus on the caller to validate the input parameters before calling the methods.

  • The alloc_excess and realloc_excess methods provide a way for callers who can make use of excess memory to avoid unnecessary calls to realloc.

Why the Layout abstraction?

While we do want to require clients to hand the allocator the size and alignment, we have found that the code to compute such things follows regular patterns. It makes more sense to factor those patterns out into a common abstraction; this is what Layout provides: a high-level API for describing the memory layout of a composite structure by composing the layout of its subparts.

Why return Result rather than a raw pointer?

My hypothesis is that the standard allocator API should embrace Result as the standard way for describing local error conditions in Rust.

  • A previous version of this RFC attempted to ensure that the use of the Result type could avoid any additional overhead over a raw pointer return value, by using a NonZero address type and a zero-sized error type attached to the trait via an associated Error type. But during the RFC process we decided that this was not necessary.

Why return Result rather than directly oom on failure

Again, my hypothesis is that the standard allocator API should embrace Result as the standard way for describing local error conditions in Rust.

I want to leave it up to the clients to decide if they can respond to out-of-memory (OOM) conditions on allocation failure.

However, since I also suspect that some programs would benefit from contextual information about which allocator is reporting memory exhaustion, I have made oom a method of the Allocator trait, so that allocator clients have the option of calling that on error.
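
So a client that cannot usefully recover from exhaustion might be written like this sketch:

let ptr = match unsafe { a.alloc(layout) } {
    Ok(p) => p,
    // escalate to the allocator's own `oom`, which diverges (`-> !`) and
    // can report allocator-specific context about the exhaustion
    Err(e) => a.oom(e),
};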

Why is usable_size ever needed? Why not call layout.size() directly, as is done in the default implementation?

layout.size() returns the minimum required size that the client needs. In a block-based allocator, this may be less than the actual size that the allocator would ever provide to satisfy that kind of request. Therefore, usable_size provides a way for clients to observe what the minimum actual size of an allocated block for that layout would be, for a given allocator.

(Note that the documentation does say that in general it is better for clients to use alloc_excess and realloc_excess instead, if they can, as a way to directly observe the actual amount of slop provided by the particular allocator.)
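
For illustration, an allocator with 16-byte size classes might answer as in this sketch:

let layout = Layout::array::<u8>(10).unwrap();
let (l, m) = unsafe { a.usable_size(&layout) };
// the default implementation yields l == m == 10; a 16-byte size-class
// allocator could instead report (10, 16), meaning a block allocated for
// this layout may be treated as having any size in the range [10, 16].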

Why is Allocator an unsafe trait?

It just seems like a good idea given how much of the standard library is going to assume that allocators are implemented according to their specification.

(I had thought that unsafe fn for the methods would suffice, but that is putting the burden of proof (of soundness) in the wrong direction…)

The GC integration strategy

One of the main reasons that RFC PR 39 was not merged as written was because it did not account for garbage collection (GC).

In particular, assuming that we eventually add support for GC in some form, any value that holds a reference to an object on the GC’ed heap will need some linkage to the GC. Specifically, if the only such reference (i.e. the one with sole ownership) is held in a block managed by a user-defined allocator, then we need to ensure that all such references are found when the GC does its work.

The Rust project has control over the libstd provided allocators, so the team can adapt them as necessary to fit the needs of whatever GC designs come around. But the same is not true for user-defined allocators: we want to ensure that adding support for them does not inadvertently kill any chance for adding GC later.

The inspiration for Layout

Some aspects of the design of this RFC were selected in the hopes that it would make such integration easier. In particular, the relatively high-level Kind abstraction was developed, in part, as a way that a GC-aware allocator could build up a tracing method associated with a layout.

Then I realized that the Kind abstraction may be valuable on its own, without GC: It encapsulates important patterns when working with representing data as memory records.

(Later we decided to rename Kind to Layout, in part to avoid confusion with the use of the word “kind” in the context of higher-kinded types (HKT).)

So, this RFC offers the Layout abstraction without promising that it solves the GC problem. (It might, or it might not; we don’t know yet.)

Forwards-compatibility

So what is the solution for forwards-compatibility?

It is this: Rather than trying to build GC support into the Allocator trait itself, we instead assume that when GC support comes, it may come with a new trait (call it GcAwareAllocator).

  • (Perhaps we will instead use an attribute; the point is, whatever option we choose can be incorporated into the meta-data for a crate.)

Allocators that are GC-compatible have to explicitly declare themselves as such, by implementing GcAwareAllocator, which will then impose new conditions on the methods of Allocator, for example ensuring that allocated blocks of memory can be scanned (i.e. “parsed”) by the GC (if that in fact ends up being necessary).
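
Purely as a hypothetical sketch of the shape this could take (this RFC deliberately defines no such trait, and the method shown here is invented for illustration):

unsafe trait GcAwareAllocator: Allocator {
    // a hook the GC could use to scan (i.e. "parse") an allocated block
    fn scan_block(&self, block: Address, layout: &Layout);
}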

This way, we can deploy an Allocator trait API today that does not provide the necessary reflective hooks that a GC would need to access.

Crates that define their own Allocator implementations without also claiming them to be GC-compatible will be forbidden from linking with crates that require GC support. (In other words, when GC support comes, we assume that the linking component of the Rust compiler will be extended to check such compatibility requirements.)

Drawbacks

The API may be over-engineered.

The core set of methods (the ones without unchecked) return Result and potentially impose unwanted input validation overhead.

  • The _unchecked variants are intended as the response to that, for clients who take care to validate the many preconditions themselves in order to minimize the allocation code paths.

Alternatives

Just adopt RFC PR 39 with this RFC’s GC strategy

The GC-compatibility strategy described here (in gc integration) might work with a large number of alternative designs, such as that from RFC PR 39.

While that is true, it seems like it would be a little short-sighted. In particular, I have neither proven nor disproven the value of Layout system described here with respect to GC integration.

As far as I know, it is the closest thing we have to a workable system for allowing client code of allocators to accurately describe the layout of values they are planning to allocate, which is the main ingredient I believe to be necessary for the kind of dynamic reflection that a GC will require of a user-defined allocator.

Make Layout an associated type of Allocator trait

I explored making an AllocLayout bound and then having

pub unsafe trait Allocator {
    /// Describes the sort of records that this allocator can
    /// construct.
    type Layout: AllocLayout;

    ...
}

Such a design might indeed be workable. (I found it awkward, which is why I abandoned it.)

But the question is: What benefit does it bring?

The main one I could imagine is that it might allow us to introduce a division, at the type-system level, between two kinds of allocators: those that are integrated with the GC (i.e., have an associated Allocator::Layout that ensures that all allocated blocks are scannable by a GC) and allocators that are not integrated with the GC (i.e., have an associated Allocator::Layout that makes no guarantees about how the allocated blocks could be scanned).

However, no such design has proven itself to be “obviously feasible to implement,” and therefore it would be unreasonable to make the Layout an associated type of the Allocator trait without having at least a few motivating examples that are clearly feasible and useful.

Variations on the Layout API

  • Should Layout offer a fn resize(&self, new_size: usize) -> Layout constructor method? (Such a method would rule out deriving GC tracers from layouts; but we could maybe provide it as an unsafe method.)

  • Should Layout ensure an invariant that its associated size is always a multiple of its alignment?

    • Doing this would allow simplifying a small part of the API, namely the distinct Layout::repeat (returns both a layout and an offset) versus Layout::array (where the offset is derivable from the input T).

    • Such a constraint would have precedent; in particular, the aligned_alloc function of C11 requires the given size be a multiple of the alignment.

    • On the other hand, both the system and jemalloc allocators seem to support more flexible allocation patterns. Imposing the above invariant implies a certain loss of expressiveness over what we already provide today.

  • Should Layout ensure an invariant that its associated size is always positive?

    • Pro: Removes something that allocators would need to check about input layouts (the backing memory allocators will tend to require that the input sizes are positive).

    • Con: Requiring positive size means that zero-sized types do not have an associated Layout. That’s not the end of the world, but it does make the Layout API slightly less convenient (e.g. one cannot use extend with a zero-sized layout to forcibly inject padding, because zero-sized layouts do not exist).

  • Should Layout::align_to add padding to the associated size? (Probably not; this would make it impossible to express certain kinds of patterns.)

  • Should the Layout methods that might “fail” return Result instead of Option?

Variations on the Allocator API

  • Should the allocator methods take &self or self rather than &mut self?

    As noted in the RFC comments, nearly every trait goes through a bit of an identity crisis in terms of deciding what kind of self parameter is appropriate.

    The justification for &mut self is this:

    • It does not restrict allocator implementors from making sharable allocators: to do so, just do impl<'a> Allocator for &'a MySharedAlloc, as illustrated in the DumbBumpPool example.

    • &mut self is better than &self for simple allocators that are not sharable. &mut self ensures that the allocation methods have exclusive access to the underlying allocator state, without resorting to a lock. (Another way of looking at it: It moves the onus of using a lock outward, to the allocator clients.)

    • One might think that the points made above apply equally well to self (i.e., if you want to implement an allocator that wants to take itself via a &mut-reference when the methods take self, then do impl<'a> Allocator for &'a mut MyUniqueAlloc).

      However, the problem with self is that if you want to use an allocator for more than one allocation, you will need to call clone() (or make the allocator parameter implement Copy). This means in practice all allocators will need to support Clone (and thus support sharing in general, as discussed in the Allocators and lifetimes section).

      (Remember, I’m thinking about allocator-parametric code like Vec<T, A:Allocator>, which does not know if the A is a &mut-reference. In that context, therefore one cannot assume that reborrowing machinery is available to the client code.)

      Put more simply, requiring that allocators implement Clone means that it will not be practical to do impl<'a> Allocator for &'a mut MyUniqueAlloc.

      By using &mut self for the allocation methods, we can encode the expected use case of an unshared allocator that is used repeatedly in a linear fashion (e.g. vector that needs to reallocate its backing storage).

  • Should the types representing allocated storage have lifetimes attached? (E.g. fn alloc<'a>(&mut self, layout: &alloc::Layout) -> Address<'a>.)

    I think Gankro put it best:

    This is a low-level unsafe interface, and the expected usecases make it both quite easy to avoid misuse, and impossible to use lifetimes (you want a struct to store the allocator and the allocated elements). Any time we’ve tried to shove more lifetimes into these kinds of interfaces have just been an annoying nuisance necessitating copy-lifetime/transmute nonsense.

  • Should Allocator::alloc be safe instead of unsafe fn?

    • Clearly fn dealloc and fn realloc need to be unsafe, since feeding in improper inputs could cause unsound behavior. But is there any analogous input to fn alloc that could cause unsoundness (assuming that the Layout struct enforces invariants like “the associated size is non-zero”)?

    • (I left it as unsafe fn alloc just to keep the API uniform with dealloc and realloc.)

  • Should Allocator::realloc not require that new_layout.align() evenly divide layout.align()? In particular, it is not too expensive to check if the two layouts are not compatible, and fall back on alloc/dealloc in that case.

  • Should Allocator not provide unchecked variants on fn alloc, fn realloc, et cetera? (To me it seems having them does no harm, apart from potentially misleading clients who do not read the documentation about what scenarios yield undefined behavior.)

    • Another option here would be to provide a trait UncheckedAllocator: Allocator that carries the unchecked methods, so that clients who require such micro-optimized paths can ensure that their clients actually pass them an implementation that has the checks omitted.

  • On the flip-side of the previous bullet, should Allocator provide fn alloc_one_unchecked and fn dealloc_one_unchecked? I think the only check that such variants would elide would be that T is not zero-sized; I’m not sure that’s worth it. (But the resulting uniformity of the whole API might shift the balance to “worth it”.)

  • Should the precondition of allocation methods be loosened to accept zero-sized types?

    Right now, there is a requirement that allocation requests denote non-zero-sized types (this requirement is encoded in two ways: for Layout-consuming methods like alloc, it is enforced via the invariant that the Size is a NonZero, which is maintained by checks in the Layout construction code; the convenience methods like alloc_one will return Err if the allocation request is zero-sized).

    The main motivation for this restriction is that some underlying system allocators, like jemalloc, explicitly disallow zero-sized inputs. Therefore, to remove all unnecessary control-flow branches between the client and the underlying allocator, the Allocator trait is bubbling that restriction up and imposing it onto the clients, who will presumably enforce this invariant via container-specific means.

    But: pre-existing container types (like Vec<T>) already allow zero-sized T. Therefore, there is an unfortunate mismatch between the ideal API those container would prefer for their allocators and the actual service that this Allocator trait is providing.

    So: Should we lift this precondition of the allocation methods, and allow zero-sized requests (which might be handled by a global sentinel value, or by an allocator-specific sentinel value, or via some other means – this would have to be specified as part of the Allocator API)?

    (As a middle ground, we could lift the precondition solely for the convenience methods like fn alloc_one and fn alloc_array; that way, the most low-level methods like fn alloc would continue to minimize the overhead they add over the underlying system allocator, while the convenience methods would truly be convenient.)

  • Should oom be a free-function rather than a method on Allocator? (The reason I want it on Allocator is so that it can provide feedback about the allocator’s state at the time of the OOM. Zoxc has argued on the RFC thread that some forms of static analysis, to prove oom is never invoked, would prefer it to be a free function.)

Unresolved questions

  • Since we cannot do RefCell<Pool> (see FIXME above), what is our standard recommendation for what to do instead?

  • Should Layout be an associated type of Allocator? (See the alternatives section for discussion. In fact, most of the “Variations” above correspond to potentially unresolved questions.)

  • Are the type aliases for Size, Capacity, Alignment, and Address an abuse of the NonZero type? (Or do we just need some constructor for NonZero that asserts that the input is non-zero?)

  • Do we need Allocator::max_size and Allocator::max_align?

  • Should the default impl of Allocator::max_align return None, or is there a more suitable default (perhaps e.g. PLATFORM_PAGE_SIZE)?

    The previous allocator documentation provided by Daniel Micay suggests that we should leave the behavior unspecified if the allocation request is too large; but if that is the case, then we should definitely provide some way to observe that threshold.

    From what I can tell, we cannot currently assume that all low-level allocators will behave well for large alignments. See https://github.com/rust-lang/rust/issues/30170

  • Should Allocator::oom also take a std::fmt::Arguments<'a> parameter so that clients can feed in context-specific information that is not part of the original input Layout argument? (I have not done this mainly because I do not want to introduce a dependency on libstd.)

Change History

  • Changed fn usable_size to return (l, m) rather than just m.

  • Removed fn is_transient from trait AllocError, and removed discussion of transient errors from the API.

  • Made fn dealloc method infallible (i.e. removed its Result return type).

  • Alpha-renamed alloc::Kind type to alloc::Layout, and made it non-Copy.

  • Revised fn oom method to take the Self::Error as an input (so that the allocator can, indirectly, feed itself information about what went wrong).

  • Removed associated Error type from Allocator trait; all methods now use AllocErr for error type. Removed AllocError trait and MemoryExhausted error.

  • Removed fn max_size and fn max_align methods; we can put them back later if someone demonstrates a need for them.

  • Added fn realloc_in_place.

  • Removed uses of NonZero. Made Layout able to represent zero-sized layouts. A given Allocator may or may not support zero-sized layouts.

  • Various other API revisions were made during development of PR 42313, “allocator integration”. See the nightly API docs rather than using this RFC document as the sole reference.

Appendices

Bibliography

RFC Pull Request #39: Allocator trait

Daniel Micay, 2014. RFC: Allocator trait. https://github.com/thestinger/rfcs/blob/ad4cdc2662cc3d29c3ee40ae5abbef599c336c66/active/0000-allocator-trait.md

RFC Pull Request #244: Allocator RFC, take II

Felix Klock, 2014. Allocator RFC, take II. https://github.com/pnkfelix/rfcs/blob/d3c6068e823f495ee241caa05d4782b16e5ef5d8/active/0000-allocator.md

Dynamic Storage Allocation: A Survey and Critical Review

Paul R. Wilson, Mark S. Johnstone, Michael Neely, and David Boles, 1995. Dynamic Storage Allocation: A Survey and Critical Review. ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps. A slightly modified version appears in Proceedings of the 1995 International Workshop on Memory Management (IWMM ’95), Kinross, Scotland, UK, September 27–29, 1995. Springer-Verlag LNCS.

Reconsidering custom memory allocation

Emery D. Berger, Benjamin G. Zorn, and Kathryn S. McKinley. 2002. Reconsidering custom memory allocation. In Proceedings of the 17th ACM SIGPLAN conference on Object-oriented programming, systems, languages, and applications (OOPSLA ’02).

The memory fragmentation problem: solved?

Mark S. Johnstone and Paul R. Wilson. 1998. The memory fragmentation problem: solved?. In Proceedings of the 1st international symposium on Memory management (ISMM ’98).

EASTL: Electronic Arts Standard Template Library

Paul Pedriana. 2007. EASTL – Electronic Arts Standard Template Library. Document number: N2271=07-0131

Towards a Better Allocator Model

Pablo Halpern. 2005. Towards a Better Allocator Model. Document number: N1850=05-0110

Various allocators

jemalloc, tcmalloc, Hoard

ASCII art version of Allocator message sequence chart

This is an ASCII art version of the SVG message sequence chart from the semantics of allocators section.

Program             Vec<Widget, &mut Allocator>         Allocator
  ||
  ||
   +--------------- create allocator -------------------> ** (an allocator is born)
  *| <------------ return allocator A ---------------------+
  ||                                                       |
  ||                                                       |
   +- create vec w/ &mut A -> ** (a vec is born)           |
  *| <------return vec V ------+                           |
  ||                           |                           |
   *------- push W_1 -------> *|                           |
   |                          ||                           |
   |                          ||                           |
   |                           +--- allocate W array ---> *|
   |                           |                          ||
   |                           |                          ||
   |                           |                           +---- (request system memory if necessary)
   |                           |                          *| <-- ...
   |                           |                          ||
   |                          *| <--- return *W block -----+
   |                          ||                           |
   |                          ||                           |
  *| <------- (return) -------+|                           |
  ||                           |                           |
   +------- push W_2 -------->+|                           |
   |                          ||                           |
  *| <------- (return) -------+|                           |
  ||                           |                           |
   +------- push W_3 -------->+|                           |
   |                          ||                           |
  *| <------- (return) -------+|                           |
  ||                           |                           |
   +------- push W_4 -------->+|                           |
   |                          ||                           |
  *| <------- (return) -------+|                           |
  ||                           |                           |
   +------- push W_5 -------->+|                           |
   |                          ||                           |
   |                           +---- realloc W array ---> *|
   |                           |                          ||
   |                           |                          ||
   |                           |                           +---- (request system memory if necessary)
   |                           |                          *| <-- ...
   |                           |                          ||
   |                          *| <--- return *W block -----+
  *| <------- (return) -------+|                           |
  ||                           |                           |
  ||                           |                           |
   .                           .                           .
   .                           .                           .
   .                           .                           .
  ||                           |                           |
  ||                           |                           |
  || (end of Vec scope)        |                           |
  ||                           |                           |
   +------ drop Vec --------> *|                           |
   |                          || (Vec destructor)          |
   |                          ||                           |
   |                           +---- dealloc W array -->  *|
   |                           |                          ||
   |                           |                           +---- (potentially return system memory)
   |                           |                          *| <-- ...
   |                           |                          ||
   |                          *| <------- (return) --------+
  *| <------- (return) --------+                           |
  ||                                                       |
  ||                                                       |
  ||                                                       |
  || (end of Allocator scope)                              |
  ||                                                       |
   +------------------ drop Allocator ------------------> *|
   |                                                      ||
   |                                                      |+---- (return any remaining associated memory)
   |                                                      *| <-- ...
   |                                                      ||
  *| <------------------ (return) -------------------------+
  ||
  ||
   .
   .
   .

Transcribed Source for Allocator trait API

Here is the whole source file for my prototype allocator API, sub-divided roughly according to functionality.

(We start with the usual boilerplate…)

// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

#![unstable(feature = "allocator_api",
            reason = "the precise API and guarantees it provides may be tweaked \
                      slightly, especially to possibly take into account the \
                      types being stored to make room for a future \
                      tracing garbage collector",
            issue = "27700")]

use core::cmp;
use core::mem;
use core::nonzero::NonZero;
use core::ptr::{self, Unique};

Type Aliases

pub type Size = usize;
pub type Capacity = usize;
pub type Alignment = usize;

pub type Address = *mut u8;

/// Represents the combination of a starting address and
/// a total capacity of the returned block.
pub struct Excess(Address, Capacity);

fn size_align<T>() -> (usize, usize) {
    (mem::size_of::<T>(), mem::align_of::<T>())
}

Layout API

/// Category for a memory record.
///
/// An instance of `Layout` describes a particular layout of memory.
/// You build a `Layout` up as an input to give to an allocator.
///
/// All layouts have an associated non-negative size and positive alignment.
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct Layout {
    // size of the requested block of memory, measured in bytes.
    size: Size,
    // alignment of the requested block of memory, measured in bytes.
    // we ensure that this is always a power-of-two, because APIs
    // like `posix_memalign` require it and it is a reasonable
    // constraint to impose on Layout constructors.
    //
    // (However, we do not analogously require `align >= sizeof(void*)`,
    //  even though that is *also* a requirement of `posix_memalign`.)
    align: Alignment,
}


// FIXME: audit default implementations for overflow errors,
// (potentially switching to overflowing_add and
//  overflowing_mul as necessary).

impl Layout {
    // (private constructor)
    fn from_size_align(size: usize, align: usize) -> Layout {
        assert!(align.is_power_of_two());
        assert!(align > 0);
        Layout { size: size, align: align }
    }

    /// The minimum size in bytes for a memory block of this layout.
    pub fn size(&self) -> usize { self.size }

    /// The minimum byte alignment for a memory block of this layout.
    pub fn align(&self) -> usize { self.align }

    /// Constructs a `Layout` suitable for holding a value of type `T`.
    pub fn new<T>() -> Self {
        let (size, align) = size_align::<T>();
        Layout::from_size_align(size, align)
    }

    /// Produces layout describing a record that could be used to
    /// allocate backing structure for `T` (which could be a trait
    /// or other unsized type like a slice).
    pub fn for_value<T: ?Sized>(t: &T) -> Self {
        let (size, align) = (mem::size_of_val(t), mem::align_of_val(t));
        Layout::from_size_align(size, align)
    }

    /// Creates a layout describing the record that can hold a value
    /// of the same layout as `self`, but that also is aligned to
    /// alignment `align` (measured in bytes).
    ///
    /// If `self` already meets the prescribed alignment, then returns
    /// `self`.
    ///
    /// Note that this method does not add any padding to the overall
    /// size, regardless of whether the returned layout has a different
    /// alignment. In other words, if `K` has size 16, `K.align_to(32)`
    /// will *still* have size 16.
    pub fn align_to(&self, align: Alignment) -> Self {
        if align > self.align {
            let pow2_align = align.checked_next_power_of_two().unwrap();
            debug_assert!(pow2_align > 0); // (this follows from self.align > 0...)
            Layout { align: pow2_align,
                     ..*self }
        } else {
            self.clone()
        }
    }

    /// Returns the amount of padding we must insert after `self`
    /// to ensure that the following address will satisfy `align`
    /// (measured in bytes).
    ///
    /// Behavior undefined if `align` is not a power-of-two.
    ///
    /// Note that in practice, this is only usable if `align <=
    /// self.align`; otherwise, the amount of inserted padding would
    /// need to depend on the particular starting address for the
    /// whole record, because `self.align` would not provide
    /// sufficient constraint.
    pub fn padding_needed_for(&self, align: Alignment) -> usize {
        debug_assert!(align <= self.align());
        let len = self.size();
        let len_rounded_up = (len + align - 1) & !(align - 1);
        return len_rounded_up - len;
    }

    /// Creates a layout describing the record for `n` instances of
    /// `self`, with a suitable amount of padding between each to
    /// ensure that each instance is given its requested size and
    /// alignment. On success, returns `(k, offs)` where `k` is the
    /// layout of the array and `offs` is the distance between the start
    /// of each element in the array.
    ///
    /// On arithmetic overflow, returns `None`.
    pub fn repeat(&self, n: usize) -> Option<(Self, usize)> {
        let padded_size = match self.size.checked_add(self.padding_needed_for(self.align)) {
            None => return None,
            Some(padded_size) => padded_size,
        };
        let alloc_size = match padded_size.checked_mul(n) {
            None => return None,
            Some(alloc_size) => alloc_size,
        };
        Some((Layout::from_size_align(alloc_size, self.align), padded_size))
    }

    /// Creates a layout describing the record for `self` followed by
    /// `next`, including any necessary padding to ensure that `next`
    /// will be properly aligned. Note that the result layout will
    /// satisfy the alignment properties of both `self` and `next`.
    ///
    /// Returns `Some((k, offset))`, where `k` is layout of the concatenated
    /// record and `offset` is the relative location, in bytes, of the
    /// start of the `next` embedded within the concatenated record
    /// (assuming that the record itself starts at offset 0).
    ///
    /// On arithmetic overflow, returns `None`.
    pub fn extend(&self, next: Self) -> Option<(Self, usize)> {
        let new_align = cmp::max(self.align, next.align);
        let realigned = Layout { align: new_align, ..*self };
        let pad = realigned.padding_needed_for(new_align);
        // checked arithmetic, so that overflow yields `None` per the doc above
        let offset = match self.size().checked_add(pad) {
            None => return None,
            Some(offset) => offset,
        };
        let new_size = match offset.checked_add(next.size()) {
            None => return None,
            Some(new_size) => new_size,
        };
        Some((Layout::from_size_align(new_size, new_align), offset))
    }

    /// Creates a layout describing the record for `n` instances of
    /// `self`, with no padding between each instance.
    ///
    /// On arithmetic overflow, returns `None`.
    pub fn repeat_packed(&self, n: usize) -> Option<Self> {
        let scaled = match self.size().checked_mul(n) {
            None => return None,
            Some(scaled) => scaled,
        };
        let size = { assert!(scaled > 0); scaled };
        Some(Layout { size: size, align: self.align })
    }

    /// Creates a layout describing the record for `self` followed by
    /// `next` with no additional padding between the two. Since no
    /// padding is inserted, the alignment of `next` is irrelevant,
    /// and is not incorporated *at all* into the resulting layout.
    ///
    /// Returns `(k, offset)`, where `k` is layout of the concatenated
    /// record and `offset` is the relative location, in bytes, of the
    /// start of the `next` embedded within the concatenated record
    /// (assuming that the record itself starts at offset 0).
    ///
    /// (The `offset` is always the same as `self.size()`; we use this
    ///  signature out of convenience in matching the signature of
    ///  `fn extend`.)
    ///
    /// On arithmetic overflow, returns `None`.
    pub fn extend_packed(&self, next: Self) -> Option<(Self, usize)> {
        let new_size = match self.size().checked_add(next.size()) {
            None => return None,
            Some(new_size) => new_size,
        };
        Some((Layout { size: new_size, ..*self }, self.size()))
    }

    // Below family of methods *assume* inputs are pre- or
    // post-validated in some manner. (The implementations here
    // do indirectly validate, but that is not part of their
    // specification.)
    //
    // Since invalid inputs could yield ill-formed layouts, these
    // methods are `unsafe`.

    /// Creates layout describing the record for a single instance of `T`.
    pub unsafe fn new_unchecked<T>() -> Self {
        let (size, align) = size_align::<T>();
        Layout::from_size_align(size, align)
    }


    /// Creates a layout describing the record for `self` followed by
    /// `next`, including any necessary padding to ensure that `next`
    /// will be properly aligned. Note that the result layout will
    /// satisfy the alignment properties of both `self` and `next`.
    ///
    /// Returns `(k, offset)`, where `k` is layout of the concatenated
    /// record and `offset` is the relative location, in bytes, of the
    /// start of the `next` embedded within the concatenated record
    /// (assuming that the record itself starts at offset 0).
    ///
    /// Requires no arithmetic overflow from inputs.
    pub unsafe fn extend_unchecked(&self, next: Self) -> (Self, usize) {
        self.extend(next).unwrap()
    }

    /// Creates a layout describing the record for `n` instances of
    /// `self`, with a suitable amount of padding between each.
    ///
    /// Requires non-zero `n` and no arithmetic overflow from inputs.
    /// (See also the `fn array` checked variant.)
    pub unsafe fn repeat_unchecked(&self, n: usize) -> (Self, usize) {
        self.repeat(n).unwrap()
    }

    /// Creates a layout describing the record for `n` instances of
    /// `self`, with no padding between each instance.
    ///
    /// Requires non-zero `n` and no arithmetic overflow from inputs.
    /// (See also the `fn array_packed` checked variant.)
    pub unsafe fn repeat_packed_unchecked(&self, n: usize) -> Self {
        self.repeat_packed(n).unwrap()
    }

    /// Creates a layout describing the record for `self` followed by
    /// `next` with no additional padding between the two. Since no
    /// padding is inserted, the alignment of `next` is irrelevant,
    /// and is not incorporated *at all* into the resulting layout.
    ///
    /// Returns `(k, offset)`, where `k` is layout of the concatenated
    /// record and `offset` is the relative location, in bytes, of the
    /// start of the `next` embedded within the concatenated record
    /// (assuming that the record itself starts at offset 0).
    ///
    /// (The `offset` is always the same as `self.size()`; we use this
    ///  signature out of convenience in matching the signature of
    ///  `fn extend`.)
    ///
    /// Requires no arithmetic overflow from inputs.
    /// (See also the `fn extend_packed` checked variant.)
    pub unsafe fn extend_packed_unchecked(&self, next: Self) -> (Self, usize) {
        self.extend_packed(next).unwrap()
    }

    /// Creates a layout describing the record for a `[T; n]`.
    ///
    /// On zero `n`, zero-sized `T`, or arithmetic overflow, returns `None`.
    pub fn array<T>(n: usize) -> Option<Self> {
        Layout::new::<T>()
            .repeat(n)
            .map(|(k, offs)| {
                debug_assert!(offs == mem::size_of::<T>());
                k
            })
    }

    /// Creates a layout describing the record for a `[T; n]`.
    ///
    /// Requires nonzero `n`, nonzero-sized `T`, and no arithmetic
    /// overflow; otherwise behavior undefined.
    pub unsafe fn array_unchecked<T>(n: usize) -> Self {
        Layout::array::<T>(n).unwrap()
    }

}

AllocErr API

/// The `AllocErr` error specifies whether an allocation failure is
/// specifically due to resource exhaustion or if it is due to
/// something wrong when combining the given input arguments with this
/// allocator.
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum AllocErr {
    /// Error due to hitting some resource limit or otherwise running
    /// out of memory. This condition strongly implies that *some*
    /// series of deallocations would allow a subsequent reissuing of
    /// the original allocation request to succeed.
    Exhausted { request: Layout },

    /// Error due to allocator being fundamentally incapable of
    /// satisfying the original request. This condition implies that
    /// such an allocation request will never succeed on the given
    /// allocator, regardless of environment, memory pressure, or
    /// other contextual conditions.
    ///
    /// For example, an allocator that does not support zero-sized
    /// blocks can return this error variant.
    Unsupported { details: &'static str },
}

impl AllocErr {
    pub fn invalid_input(details: &'static str) -> Self {
        AllocErr::Unsupported { details: details }
    }
    pub fn is_memory_exhausted(&self) -> bool {
        if let AllocErr::Exhausted { .. } = *self { true } else { false }
    }
    pub fn is_request_unsupported(&self) -> bool {
        if let AllocErr::Unsupported { .. } = *self { true } else { false }
    }
}

/// The `CannotReallocInPlace` error is used when `fn realloc_in_place`
/// was unable to reuse the given memory block for a requested layout.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct CannotReallocInPlace;

Allocator trait header

/// An implementation of `Allocator` can allocate, reallocate, and
/// deallocate arbitrary blocks of data described via `Layout`.
///
/// Some of the methods require that a layout *fit* a memory block.
/// What it means for a layout to "fit" a memory block is that
/// the following two conditions must hold:
///
/// 1. The block's starting address must be aligned to `layout.align()`.
///
/// 2. The block's size must fall in the range `[use_min, use_max]`, where:
///
///    * `use_min` is `self.usable_size(layout).0`, and
///
///    * `use_max` is the capacity that was (or would have been)
///      returned when (if) the block was allocated via a call to
///      `alloc_excess` or `realloc_excess`.
///
/// Note that:
///
///  * the size of the layout most recently used to allocate the block
///    is guaranteed to be in the range `[use_min, use_max]`, and
///
///  * a lower-bound on `use_max` can be safely approximated by a call to
///    `usable_size`.
///
pub unsafe trait Allocator {

Allocator core alloc and dealloc

    /// Returns a pointer suitable for holding data described by
    /// `layout`, meeting its size and alignment guarantees.
    ///
    /// The returned block of storage may or may not have its contents
    /// initialized. (Extension subtraits might restrict this
    /// behavior, e.g. to ensure initialization.)
    ///
    /// Returning `Err` indicates that either memory is exhausted or
    /// `layout` does not meet this allocator's size or alignment constraints.
    ///
    /// Implementations are encouraged to return `Err` on memory
    /// exhaustion rather than panicking or aborting, but this is
    /// not a strict requirement. (Specifically: it is *legal* to use
    /// this trait to wrap an underlying native allocation library
    /// that aborts on memory exhaustion.)
    unsafe fn alloc(&mut self, layout: Layout) -> Result<Address, AllocErr>;

    /// Deallocate the memory referenced by `ptr`.
    ///
    /// `ptr` must have previously been provided via this allocator,
    /// and `layout` must *fit* the provided block (see above);
    /// otherwise yields undefined behavior.
    unsafe fn dealloc(&mut self, ptr: Address, layout: Layout);

    /// Allocator-specific method for signalling an out-of-memory
    /// condition.
    ///
    /// Implementations of the `oom` method are discouraged from
    /// infinitely regressing in nested calls to `oom`. In
    /// practice this means implementors should eschew allocating,
    /// especially from `self` (directly or indirectly).
    ///
    /// Implementations of this trait's allocation methods are discouraged
    /// from panicking (or aborting) in the event of memory exhaustion;
    /// instead they should return an appropriate error from the
    /// invoked method, and let the client decide whether to invoke
    /// this `oom` method.
    fn oom(&mut self, _: AllocErr) -> ! {
        unsafe { ::core::intrinsics::abort() }
    }

Allocator-specific quantities and limits

    // == ALLOCATOR-SPECIFIC QUANTITIES AND LIMITS ==
    // usable_size

    /// Returns bounds on the guaranteed usable size of a successful
    /// allocation created with the specified `layout`.
    ///
    /// In particular, for a given layout `k`, if `usable_size(k)` returns
    /// `(l, m)`, then one can use a block of layout `k` as if it has any
    /// size in the range `[l, m]` (inclusive).
    ///
    /// (All implementors of `fn usable_size` must ensure that
    /// `l <= k.size() <= m`)
    ///
    /// Both the lower- and upper-bounds (`l` and `m` respectively) are
    /// provided: An allocator based on size classes could misbehave
    /// if one attempts to deallocate a block without providing a
    /// correct value for its size (i.e., one within the range `[l, m]`).
    ///
    /// Clients who wish to make use of excess capacity are encouraged
    /// to use the `alloc_excess` and `realloc_excess` instead, as
    /// this method is constrained to conservatively report a value
    /// less than or equal to the minimum capacity for *all possible*
    /// calls to those methods.
    ///
    /// However, for clients that do not wish to track the capacity
    /// returned by `alloc_excess` locally, this method is likely to
    /// produce useful results.
    unsafe fn usable_size(&self, layout: &Layout) -> (Capacity, Capacity) {
        (layout.size(), layout.size())
    }

Allocator methods for memory reuse

    // == METHODS FOR MEMORY REUSE ==
    // realloc, alloc_excess, realloc_excess
    
    /// Returns a pointer suitable for holding data described by
    /// `new_layout`, meeting its size and alignment guarantees. To
    /// accomplish this, this may extend or shrink the allocation
    /// referenced by `ptr` to fit `new_layout`.
    ///
    /// * `ptr` must have previously been provided via this allocator.
    ///
    /// * `layout` must *fit* the `ptr` (see above). (The `new_layout`
    ///   argument need not fit it.)
    ///
    /// Behavior is undefined if either of the two constraints above is unmet.
    ///
    /// In addition, `new_layout` should not impose a different alignment
    /// constraint than `layout`. (In other words, `new_layout.align()`
    /// should equal `layout.align()`.)
    /// However, behavior is well-defined (though underspecified) when
    /// this constraint is violated; further discussion below.
    ///
    /// If this returns `Ok`, then ownership of the memory block
    /// referenced by `ptr` has been transferred to this
    /// allocator. The memory may or may not have been freed, and
    /// should be considered unusable (unless of course it was
    /// transferred back to the caller again via the return value of
    /// this method).
    ///
    /// Returns `Err` only if `new_layout` does not meet the allocator's
    /// size and alignment constraints, or does not match the
    /// alignment of `layout`, or if reallocation otherwise fails. (Note
    /// that this did not say "if and only if" -- in particular, an
    /// implementation of this method *can* return `Ok` if
    /// `new_layout.align() != layout.align()`; or it can return `Err`
    /// in that scenario, depending on whether this allocator
    /// can dynamically adjust the alignment constraint for the block.)
    ///
    /// If this method returns `Err`, then ownership of the memory
    /// block has not been transferred to this allocator, and the
    /// contents of the memory block are unaltered.
    unsafe fn realloc(&mut self,
                      ptr: Address,
                      layout: Layout,
                      new_layout: Layout) -> Result<Address, AllocErr> {
        let (min, max) = self.usable_size(&layout);
        let s = new_layout.size();
        // All Layout alignments are powers of two, so a comparison
        // suffices here (rather than resorting to a `%` operation).
        if min <= s && s <= max && new_layout.align() <= layout.align() {
            return Ok(ptr);
        } else {
            let new_size = new_layout.size();
            let old_size = layout.size();
            let result = self.alloc(new_layout);
            if let Ok(new_ptr) = result {
                ptr::copy(ptr as *const u8, new_ptr, cmp::min(old_size, new_size));
                self.dealloc(ptr, layout);
            }
            result
        }
    }

    /// Behaves like `fn alloc`, but also returns the whole size of
    /// the returned block. For some `layout` inputs, like arrays, this
    /// may include extra storage usable for additional data.
    unsafe fn alloc_excess(&mut self, layout: Layout) -> Result<Excess, AllocErr> {
        let usable_size = self.usable_size(&layout);
        self.alloc(layout).map(|p| Excess(p, usable_size.1))
    }

    /// Behaves like `fn realloc`, but also returns the whole size of
    /// the returned block. For some `layout` inputs, like arrays, this
    /// may include extra storage usable for additional data.
    unsafe fn realloc_excess(&mut self,
                             ptr: Address,
                             layout: Layout,
                             new_layout: Layout) -> Result<Excess, AllocErr> {
        let usable_size = self.usable_size(&new_layout);
        self.realloc(ptr, layout, new_layout)
            .map(|p| Excess(p, usable_size.1))
    }

    /// Attempts to extend the allocation referenced by `ptr` to fit `new_layout`.
    ///
    /// * `ptr` must have previously been provided via this allocator.
    ///
    /// * `layout` must *fit* the `ptr` (see above). (The `new_layout`
    ///   argument need not fit it.)
    ///
    /// Behavior is undefined if either of the two constraints above is unmet.
    ///
    /// If this returns `Ok`, then the allocator has asserted that the
    /// memory block referenced by `ptr` now fits `new_layout`, and thus can
    /// be used to carry data of that layout. (The allocator is allowed to
    /// expend effort to accomplish this, such as extending the memory block to
    /// include successor blocks, or virtual memory tricks.)
    ///
    /// If this returns `Err`, then the allocator has made no assertion
    /// about whether the memory block referenced by `ptr` can or cannot
    /// fit `new_layout`.
    ///
    /// In either case, ownership of the memory block referenced by `ptr`
    /// has not been transferred, and the contents of the memory block
    /// are unaltered.
    unsafe fn realloc_in_place(&mut self,
                               ptr: Address,
                               layout: Layout,
                               new_layout: Layout) -> Result<(), CannotReallocInPlace> {
        let (_, _, _) = (ptr, layout, new_layout);
        Err(CannotReallocInPlace)
    }

Allocator convenience methods for common usage patterns

    // == COMMON USAGE PATTERNS ==
    // alloc_one, dealloc_one, alloc_array, realloc_array, dealloc_array
    
    /// Allocates a block suitable for holding an instance of `T`.
    ///
    /// Captures a common usage pattern for allocators.
    ///
    /// The returned block is suitable for passing to the
    /// `alloc`/`realloc` methods of this allocator.
    ///
    /// May return `Err` for zero-sized `T`.
    unsafe fn alloc_one<T>(&mut self) -> Result<Unique<T>, AllocErr>
        where Self: Sized {
        let k = Layout::new::<T>();
        if k.size() > 0 {
            self.alloc(k).map(|p| Unique::new(p as *mut T))
        } else {
            Err(AllocErr::invalid_input("zero-sized type invalid for alloc_one"))
        }
    }

    /// Deallocates a block suitable for holding an instance of `T`.
    ///
    /// The given block must have been produced by this allocator,
    /// and must be suitable for storing a `T` (in terms of alignment
    /// as well as minimum and maximum size); otherwise yields
    /// undefined behavior.
    ///
    /// Captures a common usage pattern for allocators.
    unsafe fn dealloc_one<T>(&mut self, mut ptr: Unique<T>)
        where Self: Sized {
        let raw_ptr = ptr.get_mut() as *mut T as *mut u8;
        self.dealloc(raw_ptr, Layout::new::<T>());
    }

    /// Allocates a block suitable for holding `n` instances of `T`.
    ///
    /// Captures a common usage pattern for allocators.
    ///
    /// The returned block is suitable for passing to the
    /// `alloc`/`realloc` methods of this allocator.
    ///
    /// May return `Err` for zero-sized `T` or `n == 0`.
    ///
    /// Always returns `Err` on arithmetic overflow.
    unsafe fn alloc_array<T>(&mut self, n: usize) -> Result<Unique<T>, AllocErr>
        where Self: Sized {
        match Layout::array::<T>(n) {
            Some(ref layout) if layout.size() > 0 => {
                self.alloc(layout.clone())
                    .map(|p| Unique::new(p as *mut T))
            }
            _ => Err(AllocErr::invalid_input("invalid layout for alloc_array")),
        }
    }

    /// Reallocates a block previously suitable for holding `n_old`
    /// instances of `T`, returning a block suitable for holding
    /// `n_new` instances of `T`.
    ///
    /// Captures a common usage pattern for allocators.
    ///
    /// The returned block is suitable for passing to the
    /// `alloc`/`realloc` methods of this allocator.
    ///
    /// May return `Err` for zero-sized `T`, `n_old == 0`, or `n_new == 0`.
    ///
    /// Always returns `Err` on arithmetic overflow.
    unsafe fn realloc_array<T>(&mut self,
                               ptr: Unique<T>,
                               n_old: usize,
                               n_new: usize) -> Result<Unique<T>, AllocErr>
        where Self: Sized {
        match (Layout::array::<T>(n_old), Layout::array::<T>(n_new), *ptr) {
            (Some(ref k_old), Some(ref k_new), ptr) if k_old.size() > 0 && k_new.size() > 0 => {
                self.realloc(ptr as *mut u8, k_old.clone(), k_new.clone())
                    .map(|p|Unique::new(p as *mut T))
            }
            _ => {
                Err(AllocErr::invalid_input("invalid layout for realloc_array"))
            }
        }
    }

    /// Deallocates a block suitable for holding `n` instances of `T`.
    ///
    /// Captures a common usage pattern for allocators.
    unsafe fn dealloc_array<T>(&mut self, ptr: Unique<T>, n: usize) -> Result<(), AllocErr>
        where Self: Sized {
        let raw_ptr = *ptr as *mut u8;
        match Layout::array::<T>(n) {
            Some(ref k) if k.size() > 0 => {
                Ok(self.dealloc(raw_ptr, k.clone()))
            }
            _ => {
                Err(AllocErr::invalid_input("invalid layout for dealloc_array"))
            }
        }
    }

Allocator unchecked method variants

    // UNCHECKED METHOD VARIANTS

    /// Returns a pointer suitable for holding data described by
    /// `layout`, meeting its size and alignment guarantees.
    ///
    /// The returned block of storage may or may not have its contents
    /// initialized. (Extension subtraits might restrict this
    /// behavior, e.g. to ensure initialization.)
    ///
    /// Returns `None` if the request is unsatisfied.
    ///
    /// Behavior is undefined if the input does not meet the size or
    /// alignment constraints of this allocator.
    unsafe fn alloc_unchecked(&mut self, layout: Layout) -> Option<Address> {
        // (default implementation carries checks, but implementations are free to omit them.)
        self.alloc(layout).ok()
    }

    /// Returns a pointer suitable for holding data described by
    /// `new_layout`, meeting its size and alignment guarantees. To
    /// accomplish this, may extend or shrink the allocation
    /// referenced by `ptr` to fit `new_layout`.
    ///
    /// (In other words, ownership of the memory block associated with
    /// `ptr` is first transferred back to this allocator, but the
    /// same block may or may not be transferred back as the result of
    /// this call.)
    ///
    /// * `ptr` must have previously been provided via this allocator.
    ///
    /// * `layout` must *fit* the `ptr` (see above). (The `new_layout`
    ///   argument need not fit it.)
    ///
    /// * `new_layout` must meet the allocator's size and alignment
    ///    constraints. In addition, `new_layout.align()` must equal
    ///    `layout.align()`. (Note that this is a stronger constraint
    ///    than that imposed by `fn realloc`.)
    ///
    /// Behavior is undefined if any of the three constraints above is unmet.
    ///
    /// If this returns `Some`, then the memory block referenced by
    /// `ptr` may have been freed and should be considered unusable.
    ///
    /// Returns `None` if reallocation fails; in this scenario, the
    /// original memory block referenced by `ptr` is unaltered.
    unsafe fn realloc_unchecked(&mut self,
                                ptr: Address,
                                layout: Layout,
                                new_layout: Layout) -> Option<Address> {
        // (default implementation carries checks, but implementations are free to omit them.)
        self.realloc(ptr, layout, new_layout).ok()
    }

    /// Behaves like `fn alloc_unchecked`, but also returns the whole
    /// size of the returned block. 
    unsafe fn alloc_excess_unchecked(&mut self, layout: Layout) -> Option<Excess> {
        self.alloc_excess(layout).ok()
    }

    /// Behaves like `fn realloc_unchecked`, but also returns the
    /// whole size of the returned block.
    unsafe fn realloc_excess_unchecked(&mut self,
                                       ptr: Address,
                                       layout: Layout,
                                       new_layout: Layout) -> Option<Excess> {
        self.realloc_excess(ptr, layout, new_layout).ok()
    }


    /// Allocates a block suitable for holding `n` instances of `T`.
    ///
    /// Captures a common usage pattern for allocators.
    ///
    /// Requires that the inputs be non-zero and not cause arithmetic
    /// overflow, and that `T` is not zero-sized; otherwise yields
    /// undefined behavior.
    unsafe fn alloc_array_unchecked<T>(&mut self, n: usize) -> Option<Unique<T>>
        where Self: Sized {
        let layout = Layout::array_unchecked::<T>(n);
        self.alloc_unchecked(layout).map(|p| Unique::new(p as *mut T))
    }

    /// Reallocates a block suitable for holding `n_old` instances of `T`,
    /// returning a block suitable for holding `n_new` instances of `T`.
    ///
    /// Captures a common usage pattern for allocators.
    ///
    /// Requires that the inputs be non-zero and not cause arithmetic
    /// overflow, and that `T` is not zero-sized; otherwise yields
    /// undefined behavior.
    unsafe fn realloc_array_unchecked<T>(&mut self,
                                         ptr: Unique<T>,
                                         n_old: usize,
                                         n_new: usize) -> Option<Unique<T>>
        where Self: Sized {
        let (k_old, k_new, ptr) = (Layout::array_unchecked::<T>(n_old),
                                   Layout::array_unchecked::<T>(n_new),
                                   *ptr);
        self.realloc_unchecked(ptr as *mut u8, k_old, k_new)
            .map(|p| Unique::new(p as *mut T))
    }

    /// Deallocates a block suitable for holding `n` instances of `T`.
    ///
    /// Captures a common usage pattern for allocators.
    ///
    /// Requires that the inputs be non-zero and not cause arithmetic
    /// overflow, and that `T` is not zero-sized; otherwise yields
    /// undefined behavior.
    unsafe fn dealloc_array_unchecked<T>(&mut self, ptr: Unique<T>, n: usize)
        where Self: Sized {
        let layout = Layout::array_unchecked::<T>(n);
        self.dealloc(*ptr as *mut u8, layout);
    }
}
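
For a taste of how client code drives an API of this shape, here is a self-contained sketch written against the std::alloc free functions that Rust later stabilized (a descendant of this design with different names and signatures; this is not the trait above):

    use std::alloc::{alloc, dealloc, Layout};

    fn main() {
        // Allocate a block suitable for one u64, use it, and free it,
        // handing dealloc the same layout that was given to alloc.
        let layout = Layout::new::<u64>();
        unsafe {
            let p = alloc(layout) as *mut u64;
            assert!(!p.is_null(), "allocation failed");
            p.write(42);
            assert_eq!(p.read(), 42);
            dealloc(p as *mut u8, layout);
        }
    }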

Summary

Extend the existing #[repr] attribute on structs with a packed = "N" option to specify a custom packing for struct types.

Motivation

Many C/C++ compilers allow a packing to be specified for structs, which effectively lowers the alignment of a struct and its fields (for example, with MSVC there is #pragma pack(N)). Such packing is used extensively in certain C/C++ libraries, such as the Windows API, which uses it pervasively, making it challenging to write Rust libraries such as winapi.

At the moment the only way to work around the lack of a proper #[repr(packed = "N")] attribute is to use #[repr(packed)] and then manually fill in padding, which is a burdensome task. Even then the result isn’t quite right, because the overall alignment of the struct would end up as 1 even though it needs to be N (or the default, if that is smaller than N). This attribute therefore fills a gap that is currently impossible to close in Rust.

Detailed design

The #[repr] attribute on structs will be extended to include a form such as:

#[repr(packed = "2")]
struct LessAligned(i16, i32);

This structure will have an alignment of 2 and a size of 6, as well as the second field having an offset of 2 instead of 4 from the base of the struct. This is in contrast to without the attribute where the structure would have an alignment of 4 and a size of 8, and the second field would have an offset of 4 from the base of the struct.
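
As a sanity check of those numbers, the following self-contained sketch asserts the expected size and alignment. It uses the #[repr(packed(2))] spelling that compilers eventually accepted in place of the string form proposed here, and adds #[repr(C)] to pin down the field order:

#[repr(C, packed(2))]
struct LessAligned(i16, i32);

fn main() {
    // Packing 2 caps the i32 field's alignment at 2, so it lands at
    // offset 2 and the whole struct is 6 bytes with alignment 2.
    assert_eq!(std::mem::align_of::<LessAligned>(), 2);
    assert_eq!(std::mem::size_of::<LessAligned>(), 6);
}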

Syntactically, the repr meta list will be extended to accept a meta item name/value pair with the name “packed” and the value as a string which can be parsed as a u64. The restrictions on where this attribute can be placed along with the accepted values are:

  • Custom packing can only be specified on struct declarations for now. Specifying a different packing on, say, enum or type definitions should be a backwards-compatible extension.
  • Packing values must be a power of two.

By specifying this attribute, the alignment of the struct would be the smaller of the specified packing and the default alignment of the struct. The alignments of each struct field for the purpose of positioning fields would also be the smaller of the specified packing and the alignment of the type of that field. If the specified packing is greater than or equal to the default alignment of the struct, then the alignment and layout of the struct should be unaffected.

When combined with #[repr(C)], the size, alignment, and layout of the struct should match the equivalent struct in C.

#[repr(packed)] and #[repr(packed = "1")] should have identical behavior.

Because this lowers the effective alignment of fields in the same way that #[repr(packed)] does (which caused issue #27060), while accessing a field should be safe, borrowing a field should be unsafe.
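
To illustrate the hazard (plain Rust with today's #[repr(packed)], nothing new from this RFC): the i32 below sits at offset 1, so a reference to it would be misaligned, while by-value reads remain safe:

#[repr(C, packed)]
struct P(u8, i32);

fn main() {
    let p = P(1, 2);
    // Reading by value is fine; the compiler emits an unaligned load.
    assert_eq!(p.0, 1);
    let v = p.1;
    assert_eq!(v, 2);
    // But `let r = &p.1;` would create a misaligned &i32, which is why
    // borrows of packed fields cannot be allowed in safe code.
}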

Specifying #[repr(packed)] and #[repr(packed = "N")] where N is not 1 should result in an error.

Specifying #[repr(packed = "A")] and #[repr(align = "B")] should still pack together fields with the packing specified, but then increase the overall alignment to the alignment specified. Depends on RFC #1358 landing.

Drawbacks

Alternatives

  • The alternative is not doing this and forcing people to continue using #[repr(packed)] with manual padding, although such structs would always have an alignment of 1 which is often wrong.
  • Alternatively a new attribute could be used such as #[pack].

Unresolved questions

  • The behavior specified here should match the behavior of MSVC at least. Does it match the behavior of other C/C++ compilers as well?
  • Should it still be safe to borrow fields whose alignment is less than or equal to the specified packing or should all field borrows be unsafe?

  • Feature Name: rvalue_static_promotion
  • Start Date: 2015-12-18
  • RFC PR: #1414
  • Rust Issue: #38865

Summary

Promote constexpr rvalues to values in static memory instead of stack slots, and expose those in the language by being able to directly create 'static references to them. This would allow code like let x: &'static u32 = &42 to work.

Motivation

Right now, when dealing with constant values, you have to explicitly define const or static items to create references with 'static lifetime, which can be unnecessarily verbose if those items never get exposed in the actual API:

fn return_x_or_a_default(x: Option<&u32>) -> &u32 {
    if let Some(x) = x {
        x
    } else {
        static DEFAULT_X: u32 = 42;
        &DEFAULT_X
    }
}
fn return_binop() -> &'static Fn(u32, u32) -> u32 {
    const STATIC_TRAIT_OBJECT: &'static Fn(u32, u32) -> u32
        = &|x, y| x + y;
    STATIC_TRAIT_OBJECT
}

This workaround also has the limitation of not being able to refer to type parameters of a containing generic function, e.g. you can’t do this:

fn generic<T>() -> &'static Option<T> {
    const X: &'static Option<T> = &None::<T>;
    X
}

However, the compiler already special cases a small subset of rvalue const expressions to have static lifetime - namely the empty array expression:

let x: &'static [u8] = &[];

And though they don’t have to be seen as such, string literals could be regarded as the same kind of special sugar:

let b: &'static [u8; 4] = b"test";
// could be seen as `= &[116, 101, 115, 116]`

let s: &'static str = "foo";
// could be seen as `= &str([102, 111, 111])`
// given `struct str([u8]);` and the ability to construct compound
// DST structs directly

With the proposed change, those special cases would instead become part of a general language feature usable for custom code.

Detailed design

Inside a function body’s block:

  • If a shared reference to a constexpr rvalue is taken. (&<constexpr>)
  • And the constexpr does not contain an UnsafeCell { ... } constructor.
  • And the constexpr does not contain a const fn call returning a type containing an UnsafeCell.
  • Then instead of translating the value into a stack slot, translate it into a static memory location and give the resulting reference a 'static lifetime.

The UnsafeCell restrictions are there to ensure that the promoted value is truly immutable behind the reference.

Examples:

// OK:
let a: &'static u32 = &32;
let b: &'static Option<UnsafeCell<u32>> = &None;
let c: &'static Fn() -> u32 = &|| 42;

let h: &'static u32 = &(32 + 64);

fn generic<T>() -> &'static Option<T> {
    &None::<T>
}

// BAD:
let f: &'static Option<UnsafeCell<u32>> = &Some(UnsafeCell { data: 32 });
let g: &'static Cell<u32> = &Cell::new(0); // assuming const fn new()

These rules above should be consistent with the existing rvalue promotions in const initializer expressions:

// If this compiles:
const X: &'static T = &<constexpr foo>;

// Then this should compile as well:
let x: &'static T = &<constexpr foo>;

Implementation

The necessary changes in the compiler have already been implemented as part of codegen optimizations (emitting references-to or memcopies-from values in static memory instead of embedding them in the code).

All that is left to do is “throw the switch” for the new lifetime semantic by removing these lines: https://github.com/rust-lang/rust/blob/29ea4eef9fa6e36f40bc1f31eb1e56bf5941ee72/src/librustc/middle/mem_categorization.rs#L801-L807

(And of course fixing any fallout/bitrot that might have happened, adding tests, etc.)

Drawbacks

One more feature with seemingly ad-hoc rules to complicate the language…

Alternatives, Extensions

It would be possible to extend support to &'static mut references, as long as there is the additional constraint that the referenced type is zero sized.

This again has precedent in the array reference constructor:

// valid code today
let y: &'static mut [u8] = &mut [];

The rules would be similar:

  • If a mutable reference to a constexpr rvalue is taken. (&mut <constexpr>)
  • And the constexpr does not contain an UnsafeCell { ... } constructor.
  • And the constexpr does not contain a const fn call returning a type containing an UnsafeCell.
  • And the type of the rvalue is zero-sized.
  • Then instead of translating the value into a stack slot, translate it into a static memory location and give the resulting reference a 'static lifetime.

The zero-sized restriction is there because aliasing mutable references are only safe for zero sized types (since you never dereference the pointer for them).

Example:

fn return_fn_mut_or_default(&mut self) -> &FnMut(u32, u32) -> u32 {
    self.operator.unwrap_or(&mut |x, y| x * y)
    // ^ would be okay, since it would be translated like this:
    // const STATIC_TRAIT_OBJECT: &'static mut FnMut(u32, u32) -> u32
    //     = &mut |x, y| x * y;
    // self.operator.unwrap_or(STATIC_TRAIT_OBJECT)
}

let d: &'static mut () = &mut ();
let e: &'static mut Fn() -> u32 = &mut || 42;

There are two ways this could be taken further with zero-sized types:

  1. Remove the UnsafeCell restriction if the type of the rvalue is zero-sized.
  2. The above, but also remove the constexpr restriction, applying to any zero-sized rvalue instead.

Both cases would work because one can’t cause memory unsafety with a reference to a zero sized value, and they would allow more safe code to compile.

However, they might complicate reasoning about the rules, especially with the last one also being potentially confusing in regards to side-effects.

Not doing this means:

  • Relying on static and const items to create 'static references, which won’t work in generics.
  • Empty-array expressions would remain special cased.
  • It would also not be possible to safely create &'static mut references to zero-sized types, though that part could also be achieved by allowing mutable references to zero-sized types in constants.

Unresolved questions

None, beyond “Should we do alternative 1 instead?”.

Summary

Deprecate type aliases and structs in std::os::$platform::raw in favor of trait-based accessors which return Rust types rather than the equivalent C type aliases.

Motivation

RFC 517 set forth a vision for the raw modules in the standard library to perform lowering operations on various Rust types to their platform equivalents. For example the fs::Metadata structure can be lowered to the underlying sys::stat structure. The rationale for this was to enable building abstractions externally from the standard library by exposing all of the underlying data that is obtained from the OS.

This strategy, however, runs into a few problems:

  • For some libc structures, such as stat, there’s not actually one canonical definition. For example, on 32-bit Linux the definition of stat will change depending on whether LFS is enabled (via the -D_FILE_OFFSET_BITS macro). This means that if std advertises these raw types as being “FFI compatible with libc”, it’s not actually correct in all circumstances!
  • Directly exporting raw underlying interfaces (such as &stat from &fs::Metadata) makes it difficult to change the implementation over time. Today the 32-bit Linux standard library doesn’t use LFS functions, so files over 4GB cannot be opened. Changing this, however, would involve changing the stat structure and may be difficult to do.
  • Trait extensions in the raw module attempt to return the libc aliased type on all platforms, for example DirEntryExt::ino returns a type of ino_t. The ino_t type is billed as being FFI compatible with the libc ino_t type, but not all platforms store the d_ino field in dirent with the ino_t type. For example on Android the definition of ino_t is u32 but the actual stored value is u64. This means that on Android we’re actually silently truncating the return value!
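
To make the Android hazard concrete, here is a self-contained sketch of the truncation (the example value is made up):

fn main() {
    // Android stores d_ino as a u64, but its ino_t alias is u32;
    // casting the stored value to the alias silently drops high bits.
    let d_ino: u64 = 0x1_0000_0001;
    let truncated = d_ino as u32;
    assert_eq!(truncated, 1);
}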

Over time it’s basically turned out that exporting the somewhat-messy details of libc has gotten a little messy in the standard library as well. Exporting this functionality (e.g. being able to access all of the fields) is quite useful, however! This RFC proposes tweaking the design of the extensions in std::os::*::raw to allow the same level of information exposure that happens today, while also cutting some of the ties between libc and std, giving us more freedom to change these implementation details and work around weird platforms.

Detailed design

First, the types and type aliases in std::os::*::raw will all be deprecated. For example stat, ino_t, dev_t, mode_t, etc, will all be deprecated (in favor of their definitions in the libc crate). Note that the C integer types, c_int and friends, will not be deprecated.

Next, all existing extension traits will cease to return platform-specific type aliases (such as the DirEntryExt::ino function). Instead they will return u64 across the board unless it’s 100% known for sure that fewer bits will suffice. This will improve consistency across platforms as well as avoid truncation problems such as those Android is experiencing. Furthermore this frees std from dealing with any odd FFI compatibility issues, punting that to the libc crate itself if the values are handed back into C.

The std::os::*::fs::MetadataExt trait will have its as_raw_stat method deprecated, and it will instead grow functions to access all the associated fields of the underlying stat structure. This means that there will now be a trait per platform to expose all this information. Also note that all the methods will likely return u64 in accordance with the above modification.
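
For concreteness, here is a hypothetical sketch (illustrative, not normative) of what the Linux flavor of such a trait could look like, with every accessor widened to u64:

pub trait MetadataExt {
    // One accessor per underlying `stat` field, always returning u64
    // regardless of the width of the C type on the platform.
    fn st_dev(&self) -> u64;
    fn st_ino(&self) -> u64;
    fn st_mode(&self) -> u64;
    fn st_nlink(&self) -> u64;
    fn st_uid(&self) -> u64;
    fn st_gid(&self) -> u64;
    fn st_size(&self) -> u64;
    // ... and so on for the remaining fields.
}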

With these modifications to what std::os::*::raw includes and how it’s defined, it should be easy to tweak existing implementations and ensure values are transmitted in a lossless fashion. The changes, however, are both breaking changes and don’t immediately enable fixing bugs like using LFS on Linux:

  • Code such as let a: ino_t = entry.ino() would break as the ino() function will return u64, but the definition of ino_t may not be u64 for all platforms.
  • The stat structure itself on 32-bit Linux still uses 32-bit fields (e.g. it doesn’t mirror stat64 in libc).

To help with these issues, more extensive modifications can be made to the platform-specific modules. All type aliases can be switched over to u64 and the stat structure could simply be redefined to match stat64 on Linux (while keeping the same name). This would, however, explicitly mean that std::os::raw is no longer FFI compatible with C.

This breakage can be clearly indicated in the deprecation messages, however. Additionally, this fits within std’s breaking changes policy as a local as cast should be all that’s needed to patch code that breaks to straddle versions of Rust.

Drawbacks

As mentioned above, this RFC is strictly-speaking a breaking change. It is expected that not much code will break, but currently there is no data supporting this.

Returning u64 across the board could be confusing in some circumstances as it may wildly differ both in terms of signedness as well as size from the underlying C type. Converting it back to the appropriate type runs the risk of being onerous, but accessing these raw fields in theory happens quite rarely as std should primarily be exporting cross-platform accessors for the various fields here and there.

Alternatives

  • The documentation of the raw modules in std could be modified to indicate that the types contained within are intentionally not FFI compatible, and the same structure could be preserved today with the types all being rewritten to what they would be anyway if this RFC were implemented. For example ino_t on Android would change to u64 and stat on 32-bit Linux would change to stat64. In doing this, however, it’s not clear why we’d keep around all the C namings and structure.

  • Instead of breaking existing functionality, new accessors and types could be added to acquire the “lossless” version of a type. For example we could add a ino64 function on DirEntryExt which returns a u64, and for stat we could add as_raw_stat64. This would, however, force Metadata to store two different stat structures, and the breakage in practice this will cause may be small enough to not warrant these great lengths.

Unresolved questions

  • Is the policy of almost always returning u64 too strict? Should types like mode_t be allowed as i32 explicitly? Should the sign at least attempt to always be preserved?

Summary

Safe memcpy from one slice to another of the same type and length.

Motivation

Currently, the only way to quickly copy from one non-u8 slice to another is to use a loop, or unsafe methods like std::ptr::copy_nonoverlapping. The proposed method guarantees a memcpy for Copy types, and is safe.

Detailed design

Add one method to the slice primitive type.

impl<T> [T] where T: Copy {
    pub fn copy_from_slice(&mut self, src: &[T]);
}

copy_from_slice asserts that src.len() == self.len(), then memcpys the members into self from src. Calling copy_from_slice is semantically equivalent to a memcpy. self shall have exactly the same members as src after a call to copy_from_slice.
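
For example (a usage sketch, assuming the method as specified above):

fn main() {
    let src = [1u16, 2, 3, 4];
    let mut dst = [0u16; 4];
    // The lengths match, so this is a guaranteed memcpy of all four
    // elements; a length mismatch would panic instead.
    dst.copy_from_slice(&src);
    assert_eq!(dst, src);
}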

Drawbacks

One new method on slice.

Alternatives

copy_from_slice could be called copy_to, and have the order of the arguments switched around. This would follow ptr::copy_nonoverlapping ordering, and not dst = src or .clone_from_slice() ordering.

copy_from_slice could panic only if dst.len() < src.len(). This would be the same as what came before, but we would also lose the guarantee that an uninitialized slice would be fully initialized.

copy_from_slice could be a free function, as it was in the original draft of this document. However, there was overwhelming support for it as a method.

copy_from_slice could be not merged, and clone_from_slice could be specialized to memcpy in cases of T: Copy. I think it’s good to have a specific function to do this, however, which asserts that T: Copy.

Unresolved questions

None, as far as I can tell.

Summary

Expand the current pub/non-pub categorization of items with the ability to say “make this item visible solely to a (named) module tree.”

The current crate is one such tree, and would be expressed via: pub(crate) item. Other trees can be denoted via a path employed in a use statement, e.g. pub(a::b) item, or pub(super) item.

Motivation

Right now, if you have a definition for an item X that you want to use in many places in a module tree, you can either (1.) define X at the root of the tree as a non-pub item, or (2.) you can define X as a pub item in some submodule (and import into the root of the module tree via use).

But: Sometimes neither of these options is really what you want.

There are scenarios where developers would like an item to be visible to a particular module subtree (or a whole crate in its entirety), but it is not possible to move the item’s (non-pub) definition to the root of that subtree (which would be the usual way to expose an item to a subtree without making it pub).

If the definition of X itself needs access to other private items within a submodule of the tree, then X cannot be put at the root of the module tree. Illustration:

// Intent: `a` exports `I`, `bar`, and `foo`, but nothing else.
pub mod a {
    pub const I: i32 = 3;

    // `semisecret` will be used "many" places within `a`, but
    // is not meant to be exposed outside of `a`.
    fn semisecret(x: i32) -> i32  { use self::b::c::J; x + J }

    pub fn bar(z: i32) -> i32 { semisecret(I) * z }
    pub fn foo(y: i32) -> i32 { semisecret(I) + y }

    mod b {
        mod c {
            const J: i32 = 4; // J is meant to be hidden from the outside world.
        }
    }
}

(Note: the pub mod a is meant to be at the root of some crate.)

The above code fails to compile, due to the privacy violation where the body of fn semisecret attempts to access a::b::c::J, which is not visible in the context of a.

A standard way to deal with this today is to use the second approach described above (labelled “(2.)”): move fn semisecret down into the place where it can access J, marking fn semisecret as pub so that it can still be accessed within the items of a, and then re-exporting semisecret as necessary up the module tree.

// Intent: `a` exports `I`, `bar`, and `foo`, but nothing else.
pub mod a {
    pub const I: i32 = 3;

    // `semisecret` will be used "many" places within `a`, but
    // is not meant to be exposed outside of `a`.
    // (If we put `pub use` here, then *anyone* could access it.)
    use self::b::semisecret;

    pub fn bar(z: i32) -> i32 { semisecret(I) * z }
    pub fn foo(y: i32) -> i32 { semisecret(I) + y }

    mod b {
        pub use self::c::semisecret;
        mod c {
            const J: i32 = 4; // J is meant to be hidden from the outside world.
            pub fn semisecret(x: i32) -> i32  { x + J }
        }
    }
}

This works, but there is a serious issue with it: One cannot easily tell exactly how “public” fn semisecret is. In particular, understanding who can access semisecret requires reasoning about (1.) all of the pub use’s (aka re-exports) of semisecret, and (2.) the pub-ness of every module in a path leading to fn semisecret or one of its re-exports.

This RFC seeks to remedy the above problem via two main changes.

  1. Give the user a way to explicitly restrict the intended scope of where a pub-licized item can be used.

  2. Modify the privacy rules so that pub-restricted items cannot be used nor re-exported outside of their respective restricted areas.

Impact

This difficulty in reasoning about the “publicness” of a name is not just a problem for users; it also complicates efforts within the compiler to verify that a surface API for a type does not itself use or expose any private names.

There are a number of bugs filed against privacy checking; some are simply implementation issues, but the comment threads in the issues make it clear that in some cases, different people have very different mental models about how privacy interacts with aliases (e.g. type declarations) and re-exports.

In theory, we can add the changes of this RFC without breaking any old code. (That is, in principle the only affected code is that for item definitions that use pub(restriction). This limited addition would still provide value to users in their reasoning about the visibility of such items.)

In practice, I expect that as part of the implementation of this RFC, we will probably fix pre-existing bugs in the parts of privacy checking verifying that surface API’s do not use or expose private names.

Important: No such fixes to such pre-existing bugs are being concretely proposed by this RFC; I am merely musing that by adding a more expressive privacy system, we will open the door to fix bugs whose exploits, under the old system, were the only way to express certain patterns of interest to developers.

Detailed design

The main problem identified in the motivation section is this:

From a module-internal definition like

pub mod a { [...] mod b { [...] pub fn semisecret(x: i32) -> i32  { x + J } [...] } }

one cannot readily tell exactly how “public” the fn semisecret is meant to be.

As already stated, this RFC seeks to remedy the above problem via two main changes.

  1. Give the user a way to explicitly restrict the intended scope of where a pub-licized item can be used.

  2. Modify the privacy rules so that pub-restricted items cannot be used nor re-exported outside of their respective restricted areas.

Syntax

The new feature is to restrict the scope by adding the module subtree (which acts as the restricted area) in parentheses after the pub keyword, like so:

pub(a::b::c) item;

The path in the restriction is resolved just like a use statement: it is resolved absolutely, from the crate root.

Just like use statements, one can also write relative paths, by starting them with self or a sequence of super’s.

pub(super::super) item;
// or
pub(self) item; // (semantically equiv to no `pub`; see below)

In addition to the forms analogous to use, there is one new form:

pub(crate) item;

In other words, the grammar is changed like so:

old:

VISIBILITY ::= <empty> | `pub`

new:

VISIBILITY ::= <empty> | `pub` | `pub` `(` USE_PATH `)` | `pub` `(` `crate` `)`

One can use these pub(restriction) forms anywhere that one can currently use pub. In particular, one can use them on item definitions, methods in an impl, the fields of a struct definition, and on pub use re-exports.

Semantics

The meaning of pub(restriction) is as follows: The definition of every item, method, field, or name (e.g. a re-export) is associated with a restriction.

A restriction is either: the universe of all crates (aka “unrestricted”), the current crate, or an absolute path to a module sub-hierarchy in the current crate. A restricted thing cannot be directly “used” in source code outside of its restricted area. (The term “used” here is meant to cover both direct reference in the source, and also implicit reference as the inferred type of an expression or pattern.)

  • pub written with no explicit restriction means that there is no restriction, or in other words, the restriction is the universe of all crates.

  • pub(crate) means that the restriction is the current crate.

  • pub(<path>) means that the restriction is the module sub-hierarchy denoted by <path>, resolved in the context of the occurrence of the pub modifier. (This is to ensure that super and self make sense in such paths.)

As noted above, the definition means that pub(self) item is the same as if one had written just item.

  • The main reason to support this level of generality (which is otherwise just “redundant syntax”) is macros: one can write a macro that expands to pub($arg) item, and a macro client can pass in self as the $arg to get the effect of a non-pub definition.

NOTE: even if the restriction of an item or name indicates that it is accessible in some context, it may still be impossible to reference it. In particular, we will still keep our existing rules regarding pub items defined in non-pub modules; such items would have no restriction, but still may be inaccessible if they are not re-exported in some manner.
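
A minimal illustration of that pre-existing rule (plain Rust, no new syntax):

mod outer {
    mod inner {
        pub fn f() {} // unrestricted, but `inner` itself is private
    }
    pub fn g() { inner::f() } // ok: `f` is reachable within `outer`
}

fn main() {
    outer::g();
    // outer::inner::f(); // error: module `inner` is private
}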

Revised Example

In the running example, one could instead write:

// Intent: `a` exports `I`, `bar`, and `foo`, but nothing else.
pub mod a {
    pub const I: i32 = 3;

    // `semisecret` will be used "many" places within `a`, but
    // is not meant to be exposed outside of `a`.
    // (`pub use` would be *rejected*; see Note 1 below)
    use self::b::semisecret;

    pub fn bar(z: i32) -> i32 { semisecret(I) * z }
    pub fn foo(y: i32) -> i32 { semisecret(I) + y }

    mod b {
        pub(a) use self::c::semisecret;
        mod c {
            const J: i32 = 4; // J is meant to be hidden from the outside world.

            // `pub(a)` means "usable within hierarchy of `mod a`, but not
            // elsewhere."
            pub(a) fn semisecret(x: i32) -> i32  { x + J }
        }
    }
}

Note 1: The compiler would reject the variation of the above written as:

pub mod a { [...] pub use self::b::semisecret; [...] }

because pub(a) fn semisecret says that it cannot be used outside of a, and therefore it would be incorrect (or at least useless) to re-export semisecret outside of a.

Note 2: The most direct interpretation of the rules here leads me to conclude that b’s re-export of semisecret needs to be restricted to a as well. However, it may be possible to loosen things so that the re-export could just stay as pub with no extra restriction; see discussion of “IRS:PUNPM” in Unresolved Questions.

This richer notion of privacy does offer us some other ways to re-write the running example; instead of defining fn semisecret within c so that it can access J, we might instead expose J to mod b and then put fn semisecret there, like so:

pub mod a {
    [...]
    mod b {
        use self::c::J;
        pub(a) fn semisecret(x: i32) -> i32  { x + J }
        mod c {
            pub(b) const J: i32 = 4;
        }
    }
}

(This RFC takes no position on which of the above two structures is “better”; a toy example like this does not provide enough context to judge.)

Restrictions

Let’s discuss what the restrictions actually mean.

Some basic definitions: An item is just as declared in the Rust reference manual: a component of a crate, located at a fixed path (potentially at the “outermost” anonymous module) within the module tree of the crate.

Every item can be thought of as having some hidden implementation component(s) along with an exposed surface API.

So, for example, in pub fn foo(x: Input) -> Output { Body }, the surface of foo includes Input and Output, while the Body is hidden.

The pre-existing privacy rules (both prior to and after this RFC) try to enforce two things: (1.) when an item references a path, all of the names on that path need to be visible (in terms of privacy) in the referencing context, and (2.) private items should not be exposed in the surface of public APIs.

  • I am using the term “surface” rather than “signature” deliberately, since I think the term “signature” is too broad to be used to accurately describe the current semantics of rustc. See my recent Surface blog post for further discussion.

This RFC is expanding the scope of (2.) above, so that the rules are now:

  1. when an item references a path (in its implementation or in its signature), all of the names on that path must be visible in the referencing context.

  2. items restricted to an area R should not be exposed in the surface API of names or items that can themselves be exported beyond R. (Privacy is now a special case of this more general notion.)

    For convenience, it is legal to declare a field (or inherent method) with a strictly larger area of restriction than its self. See discussion in the examples.

In principle, validating (1.) can be done via the pre-existing privacy code. (However, it may make sense to do it by mapping each name to its associated restriction; I don’t think that will change the outcome, but it might make the checking code simpler. But I am not an expert on the current state of the privacy checking code.)

Validating (2.) requires traversing the surface API for each item and comparing the restriction for every reference to the restriction of the item itself.

Trait methods

Currently, trait associated item syntax carries no pub modifier.

A question arises when trying to apply the terminology of this RFC: are trait associated items implicitly pub, in the sense that they are unrestricted?

The simple answer is: No, associated items are not implicitly pub; at least, not in general. (They are not in general implicitly pub today either, as discussed in RFC 136.) (If they were implicitly pub, things would be difficult; further discussion in attached appendix.)

However, since this RFC is introducing multiple kinds of pub, we should address the topic of what is the pub-ness of associated items.

  • When analyzing a trait definition, then associated items should be considered to inherit the pub-ness, if any, of their defining trait.

    We want to make sure that this code continues to work:

    mod a {
        struct S(String);
        trait Trait {
            fn make_s(&self) -> S; // referencing `S` is ok, b/c `Trait` is not `pub`
        }
    }

    And under this RFC, we now allow this as well:

    mod a {
        struct S(String);
        mod b {
            pub(a) trait Trait {
                fn mk_s(&self) -> ::a::S;
                // referencing `::a::S` is ok, b/c `Trait` is restricted to `::a`
            }
        }
        use self::b::Trait;
    }

    Note that in stable Rust today, it is an error to declare the latter trait within mod b as non-pub (since the use self::b::Trait would be referencing a private item), and in the Rust nightly channel it is a warning to declare it as pub trait Trait { ... }.

    The point of this RFC is to give users a sensible way to declare such traits within b, without allowing them to be exposed outside of a.

  • When analyzing an impl Trait for Type, there may be distinct restrictions assigned to the Trait and the Type. However, since both the Trait and the Type must be visible in the context of the module where the impl occurs, there should be a subtree relationship between the two restrictions; in other words, one restriction should be less than (or equal to) the other.

    So just use the minimum of the two restrictions when analyzing the right-hand sides of the associated items in the impl.

    Note: I am largely adopting this rule in an attempt to be consistent with RFC 136. I invite discussion of whether this rule actually makes sense as phrased here.

More examples!

These examples are meant to explore the syntax a bit. They are not meant to provide motivation for the feature (i.e. I am not claiming that the feature is making this code cleaner or easier to reason about).

Impl item example

pub struct S(i32);

mod a {
    pub fn call_foo(s: &super::S) { s.foo(); }

    mod b {
        fn some_method_private_to_b() {
            println!("inside some_method_private_to_b");
        }

        impl super::super::S {
            pub(a) fn foo(&self) {
                some_method_private_to_b();
                println!("only callable within `a`: {}", self.0);
            }
        }
    }
}

fn rejected(s: &S) {
    s.foo(); //~ ERROR: `S::foo` not visible outside of module `a`
}

(You may be wondering: “Could we move that impl S out to the top-level, out of mod a?” Well … see discussion in the unresolved questions.)

Restricting fields example

mod a {
    #[derive(Default)]
    struct Priv(i32);

    pub mod b {
        use a::Priv as Priv_a;

        #[derive(Default)]
        pub struct F {
            pub    x: i32,
                   y: Priv_a,
            pub(a) z: Priv_a,
        }

        #[derive(Default)]
        pub struct G(pub i32, Priv_a, pub(a) Priv_a);

        // ... accesses to F.{x,y,z} ...
        // ... accesses to G.{0,1,2} ...
    }
    // ... accesses to F.{x,z} ...
    // ... accesses to G.{0,2} ...
}

mod k {
    use a::b::{F, G};
    // ... accesses to F and F.x ...
    // ... accesses to G and G.0 ...
}

Fields and inherent methods more public than self

In Rust today, one can write

mod a { struct X { pub y: i32, } }

This RFC was crafted to say that fields and inherent methods can have an associated restriction that is larger than the restriction of its self. This was both to keep from breaking the above code, and also because it would be annoying to be forced to write:

mod a { struct X { pub(a) y: i32, } }

(This RFC is not an attempt to resolve things like Rust Issue 30079; the decision of how to handle that issue can be dealt with orthogonally, in my opinion.)

So, under this RFC, the following is legal:

mod a {
    pub use self::b::stuff_with_x;
    mod b {
        struct X { pub y: i32, pub(a) z: i32 }
        mod c {
            impl super::X {
                pub(c) fn only_in_c(&mut self) { self.y += 1; }

                pub fn callanywhere(&mut self) {
                    self.only_in_c();
                    println!("X.y is now: {}", self.y);
                }
            }
        }
        pub fn stuff_with_x() {
            let mut x = X { y: 10, z: 20};
            x.callanywhere();
        }
    }
}

In particular:

  • It is okay that the fields y and z and the inherent method fn callanywhere are more publicly visible than X.

    (Just because we declare something pub does not mean it will actually be possible to reach it from arbitrary contexts. Whether or not such access is possible will depend on many things, including but not limited to the restriction attached and also future decisions about issues like issue 30079.)

  • We are allowed to restrict an inherent method, fn only_in_c, to a subtree of the module tree where X is itself visible.

Re-exports

Here is an example of a pub use re-export using the new feature, including both correct and invalid uses of the extended form.

mod a {
    mod b {
        pub(a) struct X { pub y: i32, pub(a) z: i32 } // restricted to `mod a` tree
        mod c {
            pub mod d {
                pub(super) use a::b::X as P; // ok: a::b::c is submodule of `a`
            }

            fn swap_ok(x: d::P) -> d::P { // ok: `P` accessible here
                X { z: x.y, y: x.z }
            }
        }

        fn swap_bad(x: c::d::P) -> c::d::P { //~ ERROR: `c::d::P` not visible outside `a::b::c`
            X { z: x.y, y: x.z }
        }

        mod bad {
            pub use super::X; //~ ERROR: `X` cannot be reexported outside of `a`
        }
    }

    fn swap_ok2(x: X) -> X { // ok: `X` accessible from `mod a`.
        X { z: x.y, y: x.z }
    }
}

Crate restricted visibility

This is a concrete illustration of how one might use the pub(crate) item form (which is perhaps quite similar to Java’s default “package visibility”).

Crate c1:

pub mod a {
    struct Priv(i32);

    pub(crate) struct R { pub y: i32, z: Priv } // ok: field allowed to be more public
    pub        struct S { pub y: i32, z: Priv }

    pub fn to_r_bad(s: S) -> R { ... } //~ ERROR: `R` restricted solely to this crate

    pub(crate) fn to_r(s: S) -> R { R { y: s.y, z: s.z } } // ok: restricted to crate
}

use a::{R, S}; // ok: `a::R` and `a::S` are both visible

pub use a::R as ReexportAttempt; //~ ERROR: `a::R` restricted solely to this crate

Crate c2:

extern crate c1;

use c1::a::S; // ok: `S` is unrestricted

use c1::a::R; //~ ERROR: `c1::a::R` not visible outside of its crate

Precedent

When I started on this I was not sure if this form of delimited access to a particular module subtree had a precedent; the closest thing I could think of was C++ friend modifiers (but friend is far more ad-hoc and free-form than what is being proposed here).

Scala

It has since been pointed out to me that Scala has scoped access modifiers protected[Y] and private[Y], which specify that access is provided up to Y (where Y can be a package, class or singleton object).

The feature proposed by this RFC appears to be similar in intent to Scala’s scoped access modifiers.

Having said that, I will admit that I am not clear on what distinction, if any, Scala draws between protected[Y] and private[Y] when Y is a package, which is the main analogy for our purposes, or if they just allow both forms as synonyms for convenience.

(I can imagine a hypothetical distinction in Scala when Y is a class, but my skimming online has not provided insight as to what the actual distinction is.)

Even if there is some distinction drawn between the two forms in Scala, I suspect Rust does not need an analogous distinction in its pub(restricted) form.

Drawbacks

Obviously, pub(restriction) item complicates the surface syntax of the language.

  • However, my counter-argument to this drawback is that this feature in fact simplifies the developer’s mental model. It is easier to directly encode the expected visibility of an item via pub(restriction) than to figure out the right concoction via a mix of nested mod and pub use statements. And likewise, it is easier to read it too.

Developers may misuse this form and make it hard to access the tasty innards of other modules.

  • This is true, but I claim it is irrelevant.

    The effect of this change is solely on the visibility of items within a crate. No rules for inter-crate access change.

    From the perspective of cross-crate development, this RFC changes nothing, except that it may lead some crate authors to make some things no longer universally pub that they were forced to make visible before due to earlier limitations. I claim that in such cases, those crate authors probably always intended for such items to be non-pub, but language limitations were forcing their hand.

    As for intra-crate access: My expectation is that an individual crate will be made by a team of developers who can work out what mutual visibility they want and how it should evolve over time. This feature may affect their work flow to some degree, but they can choose to either use it or not, based on their own internal policies.

Alternatives

Do not extend the language!

  • Change privacy rules and make privacy analysis “smarter” (e.g. global reachability analysis)

    The main problem with this approach is that we tried it, and it did not work well: The implementation was buggy, and the user-visible error messages were hard to understand.

    See discussion when the team was discussing the public items amendment

  • “Fix” the mental model of privacy (if necessary) without extending the language.

    The alternative is basically saying: “Our existing system is fine; all of the problems with it are due to bugs in the implementation”

    I am sympathetic to this response. However, I think it doesn’t quite hold up. Some users want to be able to define items that are exposed outside of their module but still restrict the scope of where they can be referenced, as discussed in the motivation section, and I do not think the current model can be “fixed” to support that use case, at least not without adding some sort of global reachability analysis as discussed in the previous bullet.

In addition, these two alternatives do not address the main point being made in the motivation section: one cannot tell exactly how “public” a pub item is, without working backwards through the module tree for all of its re-exports.

Curb your ambitions!

  • Instead of adding support for restricting to arbitrary module subtrees, narrow the feature to just pub(crate) item, so that one chooses either “module private” (by adding no modifier), or “universally visible” (by adding pub), or “visible to just the current crate” (by adding pub(crate)).

    This would be somewhat analogous to Java’s relatively coarse grained privacy rules, where one can choose public, private, protected, or the unnamed “package” visibility.

    I am all for keeping the implementation simple. However, the reason that we should support arbitrary module subtrees is that doing so will enable certain refactorings. Namely, if I decide I want to inline the definition of one or more crates A1, A2, … into a client crate C (i.e. replacing extern crate A1; with a suitably defined mod A1 { ... }), but I do not want to worry about whether doing so will risk future changes violating abstraction boundaries that were previously being enforced via pub(crate), then I believe allowing pub(path) will allow a mechanical tool to do the inline refactoring, rewriting each pub(crate) as pub(A1) as necessary.

Be more ambitious!

This feature could be extended in various ways.

For example:

  • As mentioned on the RFC comment thread, we could allow multiple paths in the restriction-specification: pub(path1, path2, path3).

    This, for better or worse, would start to look a lot like friend declarations from C++.

  • Also as mentioned on the RFC comment thread, the pub(restricted) form does not have any variant where the restriction-specification denotes the whole universe. In other words, there’s currently no way to get the same effect as pub item via pub(restricted) item; you cannot say pub(universe) item (even though I do so in a tongue-in-cheek manner elsewhere in this RFC).

    Some future syntaxes to support this have been proposed in the RFC comment thread, such as pub(::). But this RFC is leaving the actual choice to add such an extension (and what syntax to use for it) up to a later amendment in the future.

Unresolved questions

Can definition site fall outside restriction?

For example, is it illegal to do the following:

mod a {
  mod child { }
  mod b { pub(super::child) const J: i32 = 3; }
}

Or does it just mean that J, despite being defined in mod b, is itself not accessible in mod b?

pnkfelix is personally inclined to make this sort of thing illegal, mainly because he finds it totally unintuitive, but is interested in hearing counter-arguments.

Implicit Restriction Satisfaction (IRS:PUNPM)

If a re-export occurs within a non-pub module, can we treat it as implicitly satisfying a restriction to super imposed by the item it is re-exporting?

In particular, the revised example included:

// Intent: `a` exports `I` and `foo`, but nothing else.
pub mod a {
    [...]
    mod b {
        pub(a) use self::c::semisecret;
        mod c { pub(a) fn semisecret(x: i32) -> i32  { x + J } }
    }
}

However, since b is non-pub, its pub items and re-exports are only accessible via the subhierarchy of its parent module (i.e., mod a), as long as no entity attempts to re-export them to a broader scope.

In other words, in some sense mod b { pub use item; } could implicitly satisfy a restriction to super imposed by item (if we chose to allow it).

Note: If it were pub mod b or pub(restricted) mod b, then the above reasoning would not hold. Therefore, this discussion is limited to re-exports from non-pub modules.

If we do not allow such implicit restriction satisfaction for pub use re-exports from non-pub modules (IRS:PUNPM), then:

pub mod a {
    [...]
    mod b {
        pub use self::c::semisecret;
        mod c { pub(a) fn semisecret(x: i32) -> i32  { x + J } }
    }
}

would be rejected, and one would be expected to write either:

        pub(super) use self::c::semisecret;

or

        pub(a) use self::c::semisecret;

(Side note: I am not saying that under IRS:PUNPM, the two forms pub use item and pub(super) use item would be considered synonymous, even in the context of a non-pub module like mod b. In particular, pub(super) use item may be imposing a new restriction on the re-exported name that was not part of its original definition.)

Interaction with Globs

Glob re-exports currently only re-export pub (as in pub(universe)) items.

What should glob re-exports do with respect to pub(restricted)?

Here is an illustrative example pointed out by petrochenkov in the comment thread:

mod m {
    /*priv*/ pub(m) struct S1;
    pub(super) struct S2;
    pub(foo::bar) struct S3;
    pub struct S4;

    mod n {

        // What is reexported here?
        // Just `S4`?
        // Anything in `m` visible to `n`? (That would not be
        // consistent with the current treatment of `pub` by globs.)

        pub use m::*;
    }
}

// What is reexported here?
pub use m::*;
pub(baz::qux) use m::*;

This remains an unresolved question, but my personal inclination, at least for the initial implementation, is to make globs only import purely pub items; no non-pub, and no pub(restricted).

After we get more experience with pub(restricted) (and perhaps make other changes that may come in future RFCs), we will be in a better position to evaluate what to do here.

Appendices

Associated Items Digression

If associated items were implicitly pub, in the sense that they are unrestricted, then that would conflict with the rules imposed by this RFC: the surface API of a non-pub trait is composed of its associated items, so if all associated items were implicitly pub and unrestricted, this code would be rejected:

mod a {
    struct S(String);
    trait Trait {
        fn mk_s(&self) -> S; // is this implicitly `pub` and unrestricted?
    }
    impl Trait for () { fn mk_s(&self) -> S { S(format!("():()")) } }
    impl Trait for i32 { fn mk_s(&self) -> S { S(format!("{}:i32", self)) } }
    pub fn foo(x:i32) -> String { format!("silly{}{}", ().mk_s().0, x.mk_s().0) }
}

If associated items were implicitly pub and unrestricted, then the above code would be rejected under direct interpretation of the rules of this RFC (because fn mk_s is implicitly unrestricted, but the surface of fn mk_s references S, a non-pub item). This would be backwards-incompatible (and just darn inconvenient too).

So, to be clear, this RFC is not suggesting that associated items be implicitly pub and unrestricted.

Summary

Add a splice method to Vec<T> and String that removes a range of elements and replaces it in place with a given sequence of values. The new sequence does not necessarily have the same length as the range it replaces. In the Vec case, this method returns an iterator of the elements being moved out, like drain.

Motivation

Implementing this operation by hand is either slow or dangerous.

The slow way uses Vec::drain, and then Vec::insert repeatedly. The latter part takes quadratic time: each element after the replaced range may be moved by one offset once for every newly inserted element. A sketch of this approach follows below.
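
A minimal sketch of that slow approach (the helper name is hypothetical):

// Quadratic-time splice built from safe Vec operations: drain the range,
// then insert the new elements one at a time, shifting the tail each time.
fn splice_slow<T>(v: &mut Vec<T>, start: usize, end: usize, new: Vec<T>) {
    v.drain(start..end);
    for (i, x) in new.into_iter().enumerate() {
        v.insert(start + i, x); // moves every later element by one offset
    }
}

fn main() {
    let mut v = vec![1, 2, 3, 4, 5];
    splice_slow(&mut v, 2, 4, vec![10, 11, 12]);
    assert_eq!(v, [1, 2, 10, 11, 12, 5]);
}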

The dangerous way, detailed below, takes linear time but involves unsafely moving generic values with std::ptr::copy. This is non-trivial unsafe code, where a bug could lead to double-dropping elements or exposing uninitialized elements. (Or for String, breaking the UTF-8 invariant.) It therefore benefits from having a shared, carefully-reviewed implementation rather than leaving it to every potential user to do it themselves.

While it could be an external crate on crates.io, this operation is general-purpose enough that I think it belongs in the standard library, similar to Vec::drain.

Detailed design

An example implementation is below.

The proposal is to have inherent methods instead of extension traits. (Traits are used to make this testable outside of std and to make a point in Unresolved Questions below.)

#![feature(collections, collections_range, str_char)]

extern crate collections;

use collections::range::RangeArgument;
use std::ops::Range;
use std::ptr;

trait VecSplice<T> {
    fn splice<'a, R, I>(&'a mut self, range: R, iterable: I) -> Splice<'a, I>
    where R: RangeArgument<usize>, I: IntoIterator<Item=T>;
}

impl<T> VecSplice<T> for Vec<T> {
    fn splice<'a, R, I>(&'a mut self, range: R, iterable: I) -> Splice<'a, I>
    where R: RangeArgument<usize>, I: IntoIterator<Item=T>
    {
        unimplemented!() // FIXME: Fill in when exact semantics are decided.
    }
}

struct Splice<'a, I: IntoIterator> where I::Item: 'a {
    vec: &'a mut Vec<I::Item>,
    range: Range<usize>,
    iter: I::IntoIter,
    // FIXME: Fill in when exact semantics are decided.
}

impl<'a, I: IntoIterator> Iterator for Splice<'a, I> {
    type Item = I::Item;
    fn next(&mut self) -> Option<Self::Item> {
        unimplemented!() // FIXME: Fill in when exact semantics are decided.
    }
}

impl<'a, I: IntoIterator> Drop for Splice<'a, I> {
    fn drop(&mut self) {
        unimplemented!() // FIXME: Fill in when exact semantics are decided.
    }
}

trait StringSplice {
    fn splice<R>(&mut self, range: R, s: &str) where R: RangeArgument<usize>;
}

impl StringSplice for String {
    fn splice<R>(&mut self, range: R, s: &str) where R: RangeArgument<usize> {
        if let Some(&start) = range.start() {
            assert!(self.is_char_boundary(start));
        }
        if let Some(&end) = range.end() {
            assert!(self.is_char_boundary(end));
        }
        unsafe {
            self.as_mut_vec()
        }.splice(range, s.bytes());
    }
}

#[test]
fn it_works() {
    let mut v = vec![1, 2, 3, 4, 5];
    v.splice(2..4, [10, 11, 12].iter().cloned());
    assert_eq!(v, &[1, 2, 10, 11, 12, 5]);
    v.splice(1..3, Some(20));
    assert_eq!(v, &[1, 20, 11, 12, 5]);
    let mut s = "Hello, world!".to_owned();
    s.splice(7.., "世界!");
    assert_eq!(s, "Hello, 世界!");
}

#[test]
#[should_panic]
fn char_boundary() {
    let mut s = "Hello, 世界!".to_owned();
    s.splice(..8, "")
}

The elements of the vector after the range will first be moved by an offset equal to the lower bound of Iterator::size_hint minus the length of the range. Then, depending on the real length of the iterator, one of three cases applies (a concrete walkthrough follows the list below):

  • If it’s the same as the lower bound, we’re done.
  • If it’s lower than the lower bound (which was then incorrect), the elements will be moved once more.
  • If it’s higher, the extra iterator items will be collected into a temporary Vec in order to know exactly how many there are, and the elements after will be moved once more.
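
For concreteness, here is a small illustrative walkthrough of the three cases (the numbers are hypothetical):

fn main() {
    // Splicing 2..4 (range length 2) out of a 5-element vector, with an
    // iterator whose size_hint() lower bound is 3:
    let (lower_bound, range_len) = (3usize, 2usize);
    // The tail is first shifted right by lower_bound - range_len slots.
    assert_eq!(lower_bound - range_len, 1);
    // * Exactly 3 items: done once they are written into the gap.
    // * Only 2 items: the lower bound was incorrect, so the tail is
    //   shifted once more, left by one, to close the gap.
    // * 5 items: the 2 extras are buffered in a temporary Vec, and the
    //   tail is shifted once more, right by two, to make room for them.
}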

Drawbacks

Same as for any addition to std: not every program needs it, and standard library growth has a maintenance cost.

Alternatives

  • Status quo: leave it to everyone who wants this to do it the slow way or the dangerous way.
  • Publish a crate on crates.io. Individual crates tend to be not very discoverable, so this situation would not be so different from the status quo.

Unresolved questions

  • Should the input iterator be consumed incrementally at each Splice::next call, or only in Splice::drop?

  • It would be nice to be able to Vec::splice with a slice without writing .iter().cloned() explicitly. This is possible with the same trick as for the Extend trait (RFC 839): accept iterators of &T as well as iterators of T:

    impl<'a, T: 'a> VecSplice<&'a T> for Vec<T> where T: Copy {
        fn splice<R, I>(&mut self, range: R, iterable: I)
        where R: RangeArgument<usize>, I: IntoIterator<Item=&'a T>
        {
            self.splice(range, iterable.into_iter().cloned())
        }
    }

    However, this trick cannot be used with an inherent method instead of a trait. (By the way, what was the motivation for Extend being a trait rather than inherent methods, before RFC 839?)

  • If coherence rules and backward-compatibility allow it, this functionality could be added to Vec::insert and String::insert by overloading them / making them more generic. This would probably require implementing RangeArgument for usize representing an empty range, though a range of length 1 would maybe make more sense for Vec::drain (another user of RangeArgument).

Summary

Implement a method, contains(), for Range, RangeFrom, and RangeTo, checking if a number is in the range.

Note that the alternatives are just as important as the main proposal.

Motivation

The motivation behind this is simple: to be able to write simpler and more expressive code. This RFC provides, in effect, a “syntactic sugar” without introducing any new syntax.

Detailed design

Implement a method, contains(), for Range, RangeFrom, and RangeTo. This method will check if a number is bound by the range. It will yield a boolean based on the condition defined by the range.

The implementation is as follows (placed in libcore, and reexported by libstd):

use core::ops::{Range, RangeTo, RangeFrom};

impl<Idx> Range<Idx> where Idx: PartialOrd<Idx> {
    fn contains(&self, item: Idx) -> bool {
        self.start <= item && self.end > item
    }
}

impl<Idx> RangeTo<Idx> where Idx: PartialOrd<Idx> {
    fn contains(&self, item: Idx) -> bool {
        self.end > item
    }
}

impl<Idx> RangeFrom<Idx> where Idx: PartialOrd<Idx> {
    fn contains(&self, item: Idx) -> bool {
        self.start <= item
    }
}
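
A usage sketch, assuming the impls above have landed in libcore (coherence prevents writing them in an external crate):

fn main() {
    assert!((1..10).contains(5));   // Range: start <= item < end
    assert!(!(1..10).contains(10)); // half-open on the right
    assert!((..3).contains(2));     // RangeTo: item < end
    assert!((7..).contains(7));     // RangeFrom: start <= item
}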

Drawbacks

Lack of genericity (see Alternatives).

Alternatives

Add a Contains trait

This trait provides the method .contains() and implements it for all the Range types.
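
A minimal sketch of this alternative (trait name and bounds assumed):

use std::ops::{Range, RangeFrom, RangeTo};

trait Contains<Idx> {
    fn contains(&self, item: Idx) -> bool;
}

impl<Idx: PartialOrd<Idx>> Contains<Idx> for Range<Idx> {
    fn contains(&self, item: Idx) -> bool {
        self.start <= item && item < self.end
    }
}

impl<Idx: PartialOrd<Idx>> Contains<Idx> for RangeTo<Idx> {
    fn contains(&self, item: Idx) -> bool { item < self.end }
}

impl<Idx: PartialOrd<Idx>> Contains<Idx> for RangeFrom<Idx> {
    fn contains(&self, item: Idx) -> bool { self.start <= item }
}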

Add a .contains<I: PartialEq<Self::Item>>(i: I) iterator method

This method returns a boolean, telling if the iterator contains the item given as parameter. Using method specialization, this can achieve the same performance as the method suggested in this RFC.

This is more flexible, and provides better performance (due to specialization), than just passing a closure comparing the items to an any() method.

Make .any() generic over a new trait

Call this trait ItemPattern<Item>. This trait is implemented for Item and FnMut(Item) -> bool. This is, in a sense, similar to std::str::pattern::Pattern.

Then make .any() generic over this trait (T: ItemPattern<Self::Item>), allowing any() to take a Self::Item and search through the iterator for this particular value.

This will not achieve the same performance as the other proposals.

Unresolved questions

None.

Summary

Allow types with destructors to be used in static items, const items, and const functions.

Motivation

Some of the collection types do not allocate any memory when constructed empty (most notably Vec). With the change to make leaking safe, the restriction on static or const items with destructors is no longer required to be a hard error (as it is safe and accepted that these destructors may never run).

Allowing types with destructors to be directly used in const functions and stored in statics or consts will remove the need to have runtime-initialization for global variables.

Detailed design

  • Lift the restriction on types with destructors being used in static or const items.
    • statics containing Drop-types will not run the destructor upon program/thread exit.
    • consts containing Drop-types will run the destructor at the appropriate point in the program.
    • (Optionally, add a lint that warns about the possibility of resource leaks.)
  • Allow instantiating structures with destructors in constant expressions.
  • Allow const fn to return types with destructors.
  • Disallow constant expressions that require destructors to run during compile-time constant evaluation (i.e., a drop(foo) in a const fn).

Examples

Assuming that RwLock and Vec have const fn new methods, the following example is possible and avoids runtime validity checks.

/// Logging output handler
trait LogHandler: Send + Sync {
    // ...
}
/// List of registered logging handlers
static S_LOGGERS: RwLock<Vec<Box<LogHandler>>> = RwLock::new(Vec::new());

/// Just an empty byte vector.
const EMPTY_BYTE_VEC: Vec<u8> = Vec::new();

Disallowed code

static VAL: usize = (Vec::<u8>::new(), 0).1; // The `Vec` would be dropped

const fn sample(_v: Vec<u8>) -> usize {
    0 // Discards the input vector, dropping it
}

Drawbacks

Destructors do not run on static items (by design), so this can lead to unexpected behavior when a type’s destructor has effects outside the program (e.g. a RAII temporary folder handle, which deletes the folder on drop). However, this can already happen using the lazy_static crate.

A const item’s destructor will run at each point where the const item is used. If a const item is never used, its destructor will never run. These behaviors may be unexpected.
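
For instance, a minimal sketch of the const behavior described above:

/// Each use of `EMPTY` materializes a fresh, independent `Vec`.
const EMPTY: Vec<u8> = Vec::new();

fn main() {
    let v = EMPTY;       // first use: a new empty Vec
    drop(v);             // dropped here, at runtime, like any other value
    let n = EMPTY.len(); // second use: another Vec, dropped after the call
    assert_eq!(n, 0);
}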

Alternatives

  • Runtime initialization of a raw pointer can be used instead (as the lazy_static crate currently does on stable).
  • On nightly, a bug related to static and UnsafeCell<Option<T>> can be used to remove the dynamic allocation.
    • Both of these alternatives require runtime initialization, and incur a checking overhead on subsequent accesses.
  • Leaking of objects could be addressed by using C++-style .dtors support
    • This is undesirable, as it introduces confusion around destructor execution order.

Unresolved questions

  • TBD

Summary

Rust currently provides a compare_and_swap method on atomic types, but this method only exposes a subset of the functionality of the C++11 equivalents compare_exchange_strong and compare_exchange_weak:

  • compare_and_swap maps to the C++11 compare_exchange_strong, but there is no Rust equivalent for compare_exchange_weak. The latter is allowed to fail spuriously even when the comparison succeeds, which allows the compiler to generate better assembly code when the compare and swap is used in a loop.

  • compare_and_swap only has a single memory ordering parameter, whereas the C++11 versions have two: the first describes the memory ordering when the operation succeeds while the second one describes the memory ordering on failure.

Motivation

While all of these variants are identical on x86, they can allow more efficient code to be generated on architectures such as ARM:

  • On ARM, the strong variant of compare and swap is compiled into an LDREX / STREX loop which restarts the compare and swap when a spurious failure is detected. This is unnecessary for many lock-free algorithms since the compare and swap is usually already inside a loop and a spurious failure is often caused by another thread modifying the atomic concurrently, which will probably cause the compare and swap to fail anyways.

  • When Rust lowers compare_and_swap to LLVM, it uses the same memory ordering type for success and failure, which on ARM adds extra memory barrier instructions to the failure path. Most lock-free algorithms which make use of compare and swap in a loop only need relaxed ordering on failure since the operation is going to be restarted anyways.

Detailed design

Since compare_and_swap is stable, we can’t simply add a second memory ordering parameter to it. This RFC proposes deprecating the compare_and_swap function and replacing it with compare_exchange and compare_exchange_weak, which match the names of the equivalent C++11 functions (with the _strong suffix removed).

compare_exchange

A new method is instead added to atomic types:

fn compare_exchange(&self, current: T, new: T, success: Ordering, failure: Ordering) -> T;

The restrictions on the failure ordering are the same as in C++11: only SeqCst, Acquire and Relaxed are allowed, and it must be equal to or weaker than the success ordering. Passing an invalid memory ordering will result in a panic, although this can often be optimized away since the ordering is usually statically known.

The documentation for the original compare_and_swap is updated to say that it is equivalent to compare_exchange with the following mapping for memory orders:

Original    Success    Failure
Relaxed     Relaxed    Relaxed
Acquire     Acquire    Acquire
Release     Release    Relaxed
AcqRel      AcqRel     Acquire
SeqCst      SeqCst     SeqCst

compare_exchange_weak

A new method is instead added to atomic types:

fn compare_exchange_weak(&self, current: T, new: T, success: Ordering, failure: Ordering) -> (T, bool);

compare_exchange does not need to return a success flag because it can be inferred by checking if the returned value is equal to the expected one. This is not possible for compare_exchange_weak because it is allowed to fail spuriously, which means that it could fail to perform the swap even though the returned value is equal to the expected one.

A lock free algorithm using a loop would use the returned bool to determine whether to break out of the loop, and if not, use the returned value for the next iteration of the loop.
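
For example, here is a sketch of such a loop, written against the (T, bool) signature proposed in this RFC (illustrative only):

use std::sync::atomic::{AtomicUsize, Ordering};

// Atomically double the value, retrying on (possibly spurious) failures.
fn fetch_double(a: &AtomicUsize) -> usize {
    let mut current = a.load(Ordering::Relaxed);
    loop {
        let (prev, ok) = a.compare_exchange_weak(
            current, current * 2, Ordering::SeqCst, Ordering::Relaxed);
        if ok {
            return prev;    // the swap happened
        }
        current = prev;     // retry with the freshly observed value
    }
}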

Intrinsics

These are the existing intrinsics used to implement compare_and_swap:

    pub fn atomic_cxchg<T>(dst: *mut T, old: T, src: T) -> T;
    pub fn atomic_cxchg_acq<T>(dst: *mut T, old: T, src: T) -> T;
    pub fn atomic_cxchg_rel<T>(dst: *mut T, old: T, src: T) -> T;
    pub fn atomic_cxchg_acqrel<T>(dst: *mut T, old: T, src: T) -> T;
    pub fn atomic_cxchg_relaxed<T>(dst: *mut T, old: T, src: T) -> T;

The following intrinsics need to be added to support relaxed memory orderings on failure:

    pub fn atomic_cxchg_acqrel_failrelaxed<T>(dst: *mut T, old: T, src: T) -> T;
    pub fn atomic_cxchg_failacq<T>(dst: *mut T, old: T, src: T) -> T;
    pub fn atomic_cxchg_failrelaxed<T>(dst: *mut T, old: T, src: T) -> T;
    pub fn atomic_cxchg_acq_failrelaxed<T>(dst: *mut T, old: T, src: T) -> T;

The following intrinsics need to be added to support compare_exchange_weak:

    pub fn atomic_cxchg_weak<T>(dst: *mut T, old: T, src: T) -> (T, bool);
    pub fn atomic_cxchg_weak_acq<T>(dst: *mut T, old: T, src: T) -> (T, bool);
    pub fn atomic_cxchg_weak_rel<T>(dst: *mut T, old: T, src: T) -> (T, bool);
    pub fn atomic_cxchg_weak_acqrel<T>(dst: *mut T, old: T, src: T) -> (T, bool);
    pub fn atomic_cxchg_weak_relaxed<T>(dst: *mut T, old: T, src: T) -> (T, bool);
    pub fn atomic_cxchg_weak_acqrel_failrelaxed<T>(dst: *mut T, old: T, src: T) -> (T, bool);
    pub fn atomic_cxchg_weak_failacq<T>(dst: *mut T, old: T, src: T) -> (T, bool);
    pub fn atomic_cxchg_weak_failrelaxed<T>(dst: *mut T, old: T, src: T) -> (T, bool);
    pub fn atomic_cxchg_weak_acq_failrelaxed<T>(dst: *mut T, old: T, src: T) -> (T, bool);

Drawbacks

Ideally support for failure memory ordering would be added by simply adding an extra parameter to the existing compare_and_swap function. However this is not possible because compare_and_swap is stable.

This RFC proposes deprecating a stable function, which may not be desirable.

Alternatives

One alternative for supporting failure orderings is to add new enum variants to Ordering instead of adding new methods with two ordering parameters. The following variants would need to be added: AcquireFailRelaxed, AcqRelFailRelaxed, SeqCstFailRelaxed, SeqCstFailAcquire. The downside is that the names are quite ugly and are only valid for compare_and_swap, not other atomic operations. It is also a breaking change to a stable enum.

Another alternative is to not deprecate compare_and_swap and instead add compare_and_swap_explicit, compare_and_swap_weak and compare_and_swap_weak_explicit. However the distinction between the explicit and non-explicit versions isn’t very clear and can lead to some confusion.

Not doing anything is also a possible option, but this will cause Rust to generate worse code for some lock-free algorithms.

Unresolved questions

None

Summary

Provide native support for C-compatible unions, defined via a new “contextual keyword” union, without breaking any existing code that uses union as an identifier.

Note: This RFC has been partially superseded by unions-and-drop.

Motivation

Many FFI interfaces include unions. Rust does not currently have any native representation for unions, so users of these FFI interfaces must define multiple structs and transmute between them via std::mem::transmute. The resulting FFI code must carefully understand platform-specific size and alignment requirements for structure fields. Such code has little in common with how a C client would invoke the same interfaces.

Introducing native syntax for unions makes many FFI interfaces much simpler and less error-prone to write, simplifying the creation of bindings to native libraries, and enriching the Rust/Cargo ecosystem.

A native union mechanism would also simplify Rust implementations of space-efficient or cache-efficient structures relying on value representation, such as machine-word-sized unions using the least-significant bits of aligned pointers to distinguish cases.

The syntax proposed here recognizes union as though it were a keyword when used to introduce a union declaration, without breaking any existing code that uses union as an identifier. Experiments by Niko Matsakis demonstrate that recognizing union in this manner works unambiguously with zero conflicts in the Rust grammar.

To preserve memory safety, accesses to union fields may only occur in unsafe code. Commonly, code using unions will provide safe wrappers around unsafe union field accesses.

Detailed design

Declaring a union type

A union declaration uses the same field declaration syntax as a struct declaration, except with union in place of struct.

union MyUnion {
    f1: u32,
    f2: f32,
}

By default, a union uses an unspecified binary layout. A union declared with the #[repr(C)] attribute will have the same layout as an equivalent C union.

A union must have at least one field; an empty union declaration produces a syntax error.

Contextual keyword

Rust normally prevents the use of a keyword as an identifier; for instance, a declaration fn struct() {} will produce an error “expected identifier, found keyword struct”. However, to avoid breaking existing declarations that use union as an identifier, Rust will only recognize union as a keyword when used to introduce a union declaration. A declaration fn union() {} will not produce such an error.

Instantiating a union

A union instantiation uses the same syntax as a struct instantiation, except that it must specify exactly one field:

let u = MyUnion { f1: 1 };

Specifying multiple fields in a union instantiation results in a compiler error.

Safe code may instantiate a union, as no unsafe behavior can occur until accessing a field of the union. Code that wishes to maintain invariants about the union fields should make the union fields private and provide public functions that maintain the invariants.
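
For example, here is a sketch of that pattern (module and method names are hypothetical):

mod value {
    pub union U {
        f1: u32,
        f2: f32,
    }

    impl U {
        pub fn from_int(x: u32) -> U {
            U { f1: x }  // instantiation is safe; no unsafe block needed
        }
        pub fn as_int(&self) -> u32 {
            // Safe wrapper: the invariant that `f1` is the active field is
            // maintained by this module, so reading it here is sound.
            unsafe { self.f1 }
        }
    }
}

fn main() {
    let u = value::U::from_int(7);
    assert_eq!(u.as_int(), 7);
}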

Reading fields

Unsafe code may read from union fields, using the same dotted syntax as a struct:

fn f(u: MyUnion) -> f32 {
    unsafe { u.f2 }
}

Writing fields

Unsafe code may write to fields in a mutable union, using the same syntax as a struct:

fn f(u: &mut MyUnion) {
    unsafe {
        u.f1 = 2;
    }
}

If a union contains multiple fields of different sizes, assigning to a field smaller than the entire union must not change the memory of the union outside that field.

Union fields will normally not implement Drop, and by default, declaring a union with a field type that implements Drop will produce a lint warning. Assigning to a field with a type that implements Drop will call drop() on the previous value of that field. This matches the behavior of struct fields that implement Drop. To avoid this, such as if interpreting the union’s value via that field and dropping it would produce incorrect behavior, Rust code can assign to the entire union instead of the field. A union does not implicitly implement Drop even if its field types do.

The lint warning produced when declaring a union field of a type that implements Drop should document this caveat in its explanatory text.

Pattern matching

Unsafe code may pattern match on union fields, using the same syntax as a struct, without the requirement to mention every field of the union in a match or use ..:

fn f(u: MyUnion) {
    unsafe {
        match u {
            MyUnion { f1: 10 } => { println!("ten"); }
            MyUnion { f2 } => { println!("{}", f2); }
        }
    }
}

Matching a specific value from a union field makes a refutable pattern; naming a union field without matching a specific value makes an irrefutable pattern. Both require unsafe code.

Pattern matching may match a union as a field of a larger structure. In particular, when using a Rust union to implement a C tagged union via FFI, this allows matching on the tag and the corresponding field simultaneously:

#[repr(u32)]
enum Tag { I, F }

#[repr(C)]
union U {
    i: i32,
    f: f32,
}

#[repr(C)]
struct Value {
    tag: Tag,
    u: U,
}

fn is_zero(v: Value) -> bool {
    unsafe {
        match v {
            Value { tag: Tag::I, u: U { i: 0 } } => true,
            Value { tag: Tag::F, u: U { f: 0.0 } } => true,
            _ => false,
        }
    }
}

Note that a pattern match on a union field that has a smaller size than the entire union must not make any assumptions about the value of the union’s memory outside that field. For example, if a union contains a u8 and a u32, matching on the u8 may not perform a u32-sized comparison over the entire union.

Borrowing union fields

Unsafe code may borrow a reference to a field of a union; doing so borrows the entire union, such that any borrow conflicting with a borrow of the union (including a borrow of another union field or a borrow of a structure containing the union) will produce an error.

union U {
    f1: u32,
    f2: f32,
}

#[test]
fn test() {
    let mut u = U { f1: 1 };
    unsafe {
        let b1 = &mut u.f1;
        // let b2 = &mut u.f2; // This would produce an error
        *b1 = 5;
    }
    assert_eq!(unsafe { u.f1 }, 5);
}

Simultaneous borrows of multiple fields of a struct contained within a union do not conflict:

struct S {
    x: u32,
    y: u32,
}

union U {
    s: S,
    both: u64,
}

#[test]
fn test() {
    let mut u = U { s: S { x: 1, y: 2 } };
    unsafe {
        let bx = &mut u.s.x;
        // let bboth = &mut u.both; // This would fail
        let by = &mut u.s.y;
        *bx = 5;
        *by = 10;
    }
    assert_eq!(unsafe { u.s.x }, 5);
    assert_eq!(unsafe { u.s.y }, 10);
}

Union and field visibility

The pub keyword works on the union and on its fields, as with a struct. The union and its fields default to private. Using a private field in a union instantiation, field access, or pattern match produces an error.

Uninitialized unions

The compiler should consider a union uninitialized if declared without an initializer. However, providing a field during instantiation, or assigning to a field, should cause the compiler to treat the entire union as initialized.

Unions and traits

A union may have trait implementations, using the same impl syntax as a struct.

The compiler should provide a lint if a union field has a type that implements the Drop trait. The explanation for that lint should include an explanation of the caveat documented in the section “Writing fields”. The compiler should allow disabling that lint with #[allow(union_field_drop)], for code that intentionally stores a type with Drop in a union. The compiler must never implicitly generate a Drop implementation for the union itself, though Rust code may explicitly implement Drop for a union type.

Generic unions

A union may have a generic type, with one or more type parameters or lifetime parameters. As with a generic enum, the types within the union must make use of all the parameters; however, not all fields within the union must use all parameters.

Type inference works on generic union types. In some cases, the compiler may not have enough information to infer the parameters of a generic type, and may require explicitly specifying them.
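
A brief sketch of a generic union under these rules (names are hypothetical; the Copy bound merely keeps the example free of Drop fields):

// One type parameter, used by at least one field.
union MaybeBits<T: Copy> {
    value: T,
    bits: u64,
}

fn main() {
    let a = MaybeBits { value: 1u32 };             // T inferred as u32
    let b: MaybeBits<u16> = MaybeBits { bits: 0 }; // T must be stated here
    unsafe {
        assert_eq!(a.value, 1);
        assert_eq!(b.bits, 0);
    }
}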

Unions and undefined behavior

Rust code must not use unions to invoke undefined behavior. In particular, Rust code must not use unions to break the pointer aliasing rules with raw pointers, or access a field containing a primitive type with an invalid value.

In addition, since a union declared without #[repr(C)] uses an unspecified binary layout, code reading fields of such a union or pattern-matching such a union must not read from a field other than the one written to. This includes pattern-matching a specific value in a union field.

Union size and alignment

A union declared with #[repr(C)] must have the same size and alignment as an equivalent C union declaration for the target platform. Typically, a union would have the maximum size of any of its fields, and the maximum alignment of any of its fields. Note that those maximums may come from different fields; for instance:

#[repr(C)]
union U {
    f1: u16,
    f2: [u8; 4],
}

#[test]
fn test() {
    assert_eq!(std::mem::size_of::<U>(), 4);
    assert_eq!(std::mem::align_of::<U>(), 2);
}

Drawbacks

Adding a new type of data structure would increase the complexity of the language and the compiler implementation, albeit marginally. However, this change seems likely to provide a net reduction in the quantity and complexity of unsafe code.

Alternatives

Proposals for unions in Rust have a substantial history, with many variants and alternatives prior to the syntax proposed here with a union pseudo-keyword. Thanks to many people in the Rust community for helping to refine this RFC.

The most obvious path to introducing unions in Rust would introduce union as a new keyword. However, any introduction of a new keyword will necessarily break some code that previously compiled, such as code using the keyword as an identifier. Making union a keyword in the standard way would break the substantial volume of existing Rust code using union for other purposes, including multiple functions in the standard library. The approach proposed here, recognizing union to introduce a union declaration without prohibiting union as an identifier, provides the most natural declaration syntax and avoids breaking any existing code.

Proposals for unions in Rust have extensively explored possible variations on declaration syntax, including longer keywords (untagged_union), built-in syntax macros (union!), compound keywords (unsafe union), pragmas (#[repr(union)] struct), and combinations of existing keywords (unsafe enum).

In the absence of a new keyword, since unions represent unsafe, untagged sum types, and enum represents safe, tagged sum types, Rust could base unions on enum instead. The unsafe enum proposal took this approach, introducing unsafe, untagged enums, identified with unsafe enum; further discussion around that proposal led to the suggestion of extending it with struct-like field access syntax. Such a proposal would similarly eliminate explicit use of std::mem::transmute, and avoid the need to handle platform-specific size and alignment requirements for fields.

The standard pattern-matching syntax of enums would make field accesses significantly more verbose than struct-like syntax, and in particular would typically require more code inside unsafe blocks. Adding struct-like field access syntax would avoid that; however, pairing an enum-like definition with struct-like usage seems confusing for developers. A declaration using enum leads users to expect enum-like syntax; a new construct distinct from both enum and struct avoids leading users to expect any particular syntax or semantics. Furthermore, developers used to C unions will expect struct-like field access for unions.

Since this proposal uses struct-like syntax for declaration, initialization, pattern matching, and field access, the original version of this RFC used a pragma modifying the struct keyword: #[repr(union)] struct. However, while the proposed unions match struct syntax, they do not share the semantics of struct; most notably, unions represent a sum type, while structs represent a product type. The new construct union avoids the semantics attached to existing keywords.

In the absence of any native support for unions, developers of existing Rust code have resorted to either complex platform-specific transmute code, or complex union-definition macros. In the latter case, such macros make field accesses and pattern matching look more cumbersome and less structure-like, and still require detailed platform-specific knowledge of structure layout and field sizes. The implementation and use of such macros provides strong motivation to seek a better solution, and indeed existing writers and users of such macros have specifically requested native syntax in Rust.

Finally, to call more attention to reads and writes of union fields, field access could use a new access operator, rather than the same . operator used for struct fields. This would make union fields more obvious at the time of access, rather than making them look syntactically identical to struct fields despite the semantic difference in storage representation. However, this does not seem worth the additional syntactic complexity and divergence from other languages. Union field accesses already require unsafe blocks, which calls attention to them. Calls to unsafe functions use the same syntax as calls to safe functions.

Much discussion in the tracking issue for unions debated whether assigning to a union field that implements Drop should drop the previous value of the field. This produces potentially surprising behavior if that field doesn’t currently contain a valid value of that type. However, that behavior maintains consistency with assignments to struct fields and mutable variables, which writers of unsafe code must already take into account; the alternative would add an additional special case for writers of unsafe code. This does provide further motivation for the lint for union fields implementing Drop; code that explicitly overrides that lint will need to take this into account.

Unresolved questions

Can the borrow checker support the rule that “simultaneous borrows of multiple fields of a struct contained within a union do not conflict”? If not, omitting that rule would only marginally increase the verbosity of such code, by requiring an explicit borrow of the entire struct first.

Can a pattern match match multiple fields of a union at once? For rationale, consider a union using the low bits of an aligned pointer as a tag; a pattern match may match the tag using one field and a value identified by that tag using another field. However, if this complicates the implementation, omitting it would not significantly complicate code using unions.

C APIs using unions often also make use of anonymous unions and anonymous structs. For instance, a union may contain anonymous structs to define non-overlapping fields, and a struct may contain an anonymous union to define overlapping fields. This RFC does not define anonymous unions or structs, but a subsequent RFC may wish to do so.

Edit History

  • This RFC was amended in https://github.com/rust-lang/rfcs/pull/1663/ to clarify the behavior when assigning to an individual field whose type implements Drop.

Summary

The current compiler implements a more expansive semantics for pattern matching than was originally intended. This RFC introduces several mechanisms to rein in these semantics without actually breaking (much, if any) extant code:

  • Introduce a feature-gated attribute #[structural_match] which can be applied to a struct or enum T to indicate that constants of type T can be used within patterns.
  • Have #[derive(Eq)] automatically apply this attribute to the struct or enum that it decorates. Automatically inserted attributes do not require use of the feature gate.
  • When expanding constants of struct or enum type into equivalent patterns, require that the struct or enum type is decorated with #[structural_match]. Constants of builtin types are always expanded.

The practical effect of these changes will be to prevent the use of constants in patterns unless the type of those constants is either a built-in type (like i32 or &str) or a user-defined constant for which Eq is derived (not merely implemented).

To be clear, this #[structural_match] attribute is never intended to be stabilized. Rather, the intention of this change is to restrict constant patterns to those cases that everyone can agree on for now. We can then have further discussion to settle the best semantics in the long term.

Because the compiler currently accepts arbitrary constant patterns, this is technically a backwards incompatible change. However, the design of the RFC means that existing code that uses constant patterns will generally “just work”. The justification for this change is that it falls under the “underspecified language semantics” clause described in RFC 1122. A recent crater run with a prototype implementation found 6 regressions.

Note: this was also discussed on an internals thread. Major points from that thread are summarized either inline or in alternatives.

Motivation

The compiler currently permits any kind of constant to be used within a pattern. However, the meaning of such a pattern is somewhat controversial: the current semantics implemented by the compiler were adopted in July of 2014 and were never widely discussed nor did they go through the RFC process. Moreover, the discussion at the time was focused primarily on implementation concerns, and overlooked the potential semantic hazards.

Semantic vs structural equality

Consider a program like this one, which references a constant value from within a pattern:

struct SomeType {
    a: u32,
    b: u32,
}

const SOME_CONSTANT: SomeType = SomeType { a: 22+22, b: 44+44 };

fn test(v: SomeType) {
    match v {
        SOME_CONSTANT => println!("Yes"),
        _ => println!("No"),
    }
}

The question at hand is what do we expect this match to do, precisely? There are two main possibilities: semantic and structural equality.

Semantic equality. Semantic equality states that a pattern SOME_CONSTANT matches a value v if v == SOME_CONSTANT. In other words, the match statement above would be exactly equivalent to an if:

if v == SOME_CONSTANT {
    println!("Yes")
} else {
    println!("No");
}

Under semantic equality, the program above would not compile, because SomeType does not implement the PartialEq trait.

Structural equality. Under structural equality, v matches the pattern SOME_CONSTANT if all of its fields are (structurally) equal. Primitive types like u32 are structurally equal if they represent the same value (but see below for discussion about floating point types like f32 and f64). This means that the match statement above would be roughly equivalent to the following if (modulo privacy):

if v.a == SOME_CONSTANT.a && v.b == SOME_CONSTANT.b {
    println!("Yes")
} else {
    println!("No");
}

Structural equality basically says “two things are structurally equal if their fields are structurally equal”. It is the sort of equality you would get if everyone used #[derive(PartialEq)] on all types. Note that the equality defined by structural equality is completely distinct from the == operator, which is tied to the PartialEq trait. That is, two values that are semantically unequal could be structurally equal (an example where this might occur is the floating point value NaN).

Current semantics. The compiler’s current semantics are basically structural equality, though in the case of floating point numbers they are arguably closer to semantic equality (details below). In particular, when a constant appears in a pattern, the compiler first evaluates that constant to a specific value. So we would reduce the expression:

const SOME_CONSTANT: SomeType = SomeType { a: 22+22, b: 44+44 };

to the value SomeType { a: 44, b: 88 }. We then expand the pattern SOME_CONSTANT as though you had typed this value in place (well, almost as though, read on for some complications around privacy). Thus the match statement above is equivalent to:

match v {
    SomeType { a: 44, b: 88 } => println!("Yes"),
    _ => println!("No"),
}

Disadvantages of the current approach

Given that the compiler already has a defined semantics, it is reasonable to ask why we might want to change it. There are two main disadvantages:

  1. No abstraction boundary. The current approach does not permit types to define what equality means for themselves (at least not if they can be constructed in a constant).
  2. Scaling to associated constants. The current approach does not permit associated constants or generic integers to be used in a match statement.

Disadvantage: Weakened abstraction boundary

The single biggest concern with structural equality is that it introduces two distinct notions of equality: the == operator, based on the PartialEq trait, and pattern matching, based on a builtin structural recursion. This will cause problems for user-defined types that rely on PartialEq to define equality. Put another way, it is no longer possible for user-defined types to completely define what equality means for themselves (at least not if they can be constructed in a constant). Furthermore, because the builtin structural recursion does not consider privacy, match statements can now be used to observe private fields.

Example: Normalized durations. Consider a simple duration type:

#[derive(Copy, Clone)]
pub struct Duration {
    pub seconds: u32,
    pub minutes: u32,
}

Let’s say that this Duration type wishes to represent a span of time, but it also wishes to preserve whether that time was expressed in seconds or minutes. In other words, 60 seconds and 1 minute are equal values, but we don’t want to normalize 60 seconds into 1 minute; perhaps because it comes from user input and we wish to keep things just as the user chose to express it.

We might implement PartialEq like so (actually the PartialEq trait is slightly different, but you get the idea):

impl PartialEq for Duration {
    fn eq(&self, other: &Duration) -> bool {
        let s1 = (self.seconds as u64) + (self.minutes as u64 * 60);
        let s2 = (other.seconds as u64) + (other.minutes as u64 * 60);
        s1 == s2
    }
}

Now imagine I have some constants:

const TWENTY_TWO_SECONDS: Duration = Duration { seconds: 22, minutes: 0 };
const ONE_MINUTE: Duration = Duration { seconds: 0, minutes: 1 };

And I write a match statement using those constants:

fn detect_some_case_or_other(d: Duration) {
    match d {
        TWENTY_TWO_SECONDS => /* do something */,
        ONE_MINUTE => /* do something else */,
        _ => /* do something else again */,
    }
}

Now this code is, in all probability, buggy. Probably I meant to use the notion of equality that Duration defined, where seconds and minutes are normalized. But that is not the behavior I will see – instead I will use a pure structural match. What’s worse, this means the code will probably work in my local tests, since I like to say “one minute”, but it will break when I demo it for my customer, since she prefers to write “60 seconds”.

Example: Floating point numbers. Another example is floating point numbers. Consider the case of 0.0 and -0.0: these two values are distinct, but they typically behave the same; so much so that they compare equal (that is, 0.0 == -0.0 is true). So it is likely that code such as:

match some_computation() {
    0.0 => ...,
    x => ...,
}

did not intend to discriminate between zero and negative zero. In fact, in the compiler today, match will compare 0.0 and -0.0 as equal. We simply do not extend that courtesy to user-defined types.

Example: observing private fields. The current constant expansion code does not consider privacy. In other words, constants are expanded into equivalent patterns, but those patterns may not have been something the user could have typed because of privacy rules. Consider a module like:

mod foo {
    pub struct Foo { b: bool }
    pub const V1: Foo = Foo { b: true };
    pub const V2: Foo = Foo { b: false };
}

Note that there is an abstraction boundary here: b is a private field. But now if I wrote code from another module that matches on a value of type Foo, that abstraction boundary is pierced:

fn bar(f: foo::Foo) {
    // rustc knows this is exhaustive because it expanded `V1` into
    // equivalent patterns; patterns you could not write by hand!
    match f {
        foo::V1 => { /* moreover, now we know that f.b is true */ }
        foo::V2 => { /* and here we know it is false */ }
    }
}

Note that, because Foo does not implement PartialEq, just having access to V1 would not otherwise allow us to observe the value of f.b. (And even if Foo did implement PartialEq, that implementation might not read f.b, so we still would not be able to observe its value.)

More examples. There are numerous possible examples here. For example, strings that compare using case-insensitive comparisons, but retain the original case for reference, such as those used in file-systems. Views that extract a subportion of a larger value (and hence which should only compare that subportion). And so forth.

Disadvantage: Scaling to associated constants and generic integers

Rewriting constants into patterns requires that we can fully evaluate the constant at the time of exhaustiveness checking. For associated constants and type-level integers, that is not possible – we have to wait until monomorphization time. Consider:

trait SomeTrait {
    const A: bool;
    const B: bool;
}

fn foo<T:SomeTrait>(x: bool) {
    match x {
        T::A => println!("A"),
        T::B => println!("B"),
    }
}

impl SomeTrait for i32 {
    const A: bool = true;
    const B: bool = true;
}

impl SomeTrait for u32 {
    const A: bool = true;
    const B: bool = false;
}

Is this match exhaustive? Does it contain dead code? The answer will depend on whether T=i32 or T=u32, of course.

Advantages of the current approach

However, structural equality also has a number of advantages:

Better optimization. One of the biggest “pros” is that it can potentially enable nice optimization. For example, given constants like the following:

struct Value { x: u32 }
const V1: Value = Value { x: 0 };
const V2: Value = Value { x: 1 };
const V3: Value = Value { x: 2 };
const V4: Value = Value { x: 3 };
const V5: Value = Value { x: 4 };

and a match pattern like the following:

match v {
    V1 => ...,
    ...,
    V5 => ...,
}

then, because pattern matching is always a process of structurally extracting values, we can compile this to code that reads the field x (which is a u32) and does an appropriate switch on that value. Semantic equality would potentially force a more conservative compilation strategy.

Better exhaustiveness and dead-code checking. Similarly, we can do more thorough exhaustiveness and dead-code checking. So for example if I have a struct like:

struct Value { field: bool }
const TRUE: Value = Value { field: true };
const FALSE: Value = Value { field: false };

and a match pattern like:

match v { TRUE => .., FALSE => .. }

then we can prove that this match is exhaustive. Similarly, we can prove that the following match contains dead-code:

const A: Value = Value { field: true };
match v {
    TRUE => ...,
    A => ...,
}

Again, some of the alternatives might not allow this. (But note the cons, which also raise the question of exhaustiveness checking.)

Nullary variants and constants are (more) equivalent. Currently, there is a sort of equivalence between enum variants and constants, at least with respect to pattern matching. Consider a C-like enum:

enum Modes {
    Happy = 22,
    Shiny = 44,
    People = 66,
    Holding = 88,
    Hands = 110,
}

const C: Modes = Modes::Happy;

Now if I match against Modes::Happy, that is matching against an enum variant, and under all the proposals I will discuss below, it will check the actual variant of the value being matched (regardless of whether Modes implements PartialEq, which it does not here). On the other hand, if matching against C were to require a PartialEq impl, then it would be illegal; matching against an enum variant would then behave differently from matching against a constant, breaking this equivalence.

Detailed design

The goal of this RFC is not to decide between semantic and structural equality. Rather, the goal is to restrict pattern matching to that subset of types where the two variants behave roughly the same.

The structural match attribute

We will introduce an attribute #[structural_match] which can be applied to struct and enum types. Explicit use of this attribute will (naturally) be feature-gated. When converting a constant value into a pattern, if the constant is of struct or enum type, we will check whether this attribute is present on the struct – if so, we will convert the value as we do today. If not, we will report an error that the struct/enum value cannot be used in a pattern.

Behavior of #[derive(Eq)]

When deriving the Eq trait, we will add the #[structural_match] to the type in question. Attributes added in this way will be exempt from the feature gate.
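
For example (types and constants here are hypothetical), only the type that derives Eq may have its constants used as patterns:

#[derive(PartialEq, Eq)]  // deriving Eq implies #[structural_match]
struct Token(u32);

struct Raw(u32);          // no derived Eq, hence no #[structural_match]

const T0: Token = Token(0);
const R0: Raw = Raw(0);

fn classify(t: Token) -> &'static str {
    match t {
        T0 => "zero",     // allowed: `Token` is #[structural_match]
        _ => "other",
    }
}

// fn classify_raw(r: Raw) -> &'static str {
//     match r {
//         R0 => "zero",  // rejected under this RFC: `Raw` lacks the attribute
//         _ => "other",
//     }
// }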

Exhaustiveness and dead-code checking

We will treat user-defined structs “opaquely” for the purpose of exhaustiveness and dead-code checking. This is required to allow for semantic equality semantics in the future, since in that case we cannot rely on Eq to be correctly implemented (e.g., it could always return false, no matter what values are supplied to it, even though it’s not supposed to). The impact of this change has not been evaluated but is expected to be very small, since in practice it is rather challenging to successfully make an exhaustive match using user-defined constants, unless they are something trivial like newtype’d booleans (and, in that case, you can update the code to use a more extended pattern).

Similarly, dead-code detection should treat constants in a conservative fashion. That is, we can recognize that if there are two arms using the same constant, the second one is dead code, even though it may be that neither will match (e.g., match foo { C => .., C => .. }). We will make no assumptions about two distinct constants, even if we can concretely evaluate them to the same value.

One unresolved question (described below) is what behavior to adopt for constants that involve no user-defined types. There, the definition of Eq is purely under our control, and we know that it matches structural equality, so we can retain our current aggressive analysis if desired.

Phasing

We will not make this change instantaneously. Rather, for at least one release cycle, users who are pattern matching on struct types that lack #[structural_match] will be warned about imminent breakage.

Drawbacks

This is a breaking change, which means some people might have to change their code. However, that is considered extremely unlikely, because such users would have to be pattern matching on constants that are not comparable for equality (this is likely a bug in any case).

Alternatives

Limit matching to builtin types. An earlier version of this RFC limited matching to builtin types like integers (and tuples of integers). This RFC is a generalization of that which also accommodates struct types that derive Eq.

Embrace current semantics (structural equality). Naturally we could opt to keep the semantics as they are. The advantages and disadvantages are discussed above.

Embrace semantic equality. We could opt to just go straight towards “semantic equality”. However, it seems better to reset the semantics to a base point that everyone can agree on, and then extend from that base point. Moreover, adopting semantic equality straight out would be a riskier breaking change, as it could silently change the semantics of existing programs (whereas the current proposal only causes compilation to fail, never changes what an existing program will do).

Discussion thread summary

This section summarizes various points that were raised in the internals thread which are related to patterns but didn’t seem to fit elsewhere.

Overloaded patterns. Some languages, notably Scala, permit overloading of patterns. This is related to “semantic equality” in that it involves executing custom, user-provided code during pattern matching.

Pattern synonyms. Haskell offers a feature called “pattern synonyms” and it was argued that the current treatment of patterns can be viewed as a similar feature. This may be true, but constants-in-patterns are lacking a number of important features from pattern synonyms, such as bindings, as discussed in this response. The author feels that pattern synonyms might be a useful feature, but it would be better to design them as a first-class feature, not adapt constants for that purpose.

Unresolved questions

What about exhaustiveness etc on builtin types? Even if we ignore user-defined types, there are complications around exhaustiveness checking for constants of any kind related to associated constants and other possible future extensions. For example, the following code fails to compile because it contains dead-code:

const X: u64 = 0;
const Y: u64 = 0;
fn bar(foo: u64) {
    match foo {
        X => { }
        Y => { }
        _ => { }
    }
}

However, we would be unable to perform such an analysis in a more generic context, such as with an associated constant:

trait Trait {
    const X: u64;
    const Y: u64;
}

fn bar<T:Trait>(foo: u64) {
    match foo {
        T::X => { }
        T::Y => { }
        _ => { }
    }
}

Here, although it may well be that T::X == T::Y, we can’t know for sure. So, for consistency, we may wish to treat all constants opaquely regardless of whether we are in a generic context or not. (However, it also seems reasonable to make a “best effort” attempt at exhaustiveness and dead pattern checking, erring on the conservative side in those cases where constants cannot be fully evaluated.)

A different argument in favor of treating all constants opaquely is that the current behavior can leak details that perhaps were intended to be hidden. For example, imagine that I define a fn hash that, given a previous hash and a value, produces a new hash. Because I am lazy and prototyping my system, I decide for now to just ignore the new value and pass the old hash through:

const fn add_to_hash(prev_hash: u64, _value: u64) -> u64 {
    prev_hash
}

Now I have some consumers of my library and they define a few constants:

const HASH_OF_ZERO: u64 = add_to_hash(0, 0);
const HASH_OF_ONE: u64 = add_to_hash(0, 1);

And at some point they write a match statement:

fn process_hash(h: u64) {
    match h {
        HASH_OF_ZERO => /* do something */,
        HASH_OF_ONE => /* do something else */,
        _ => /* do something else again */,
    }
}

As before, what you get when you compile this is a dead-code error, because the compiler can see that HASH_OF_ZERO and HASH_OF_ONE are the same value.

Part of the solution here might be making “unreachable patterns” a warning and not an error. The author feels this would be a good idea regardless (though not necessarily as part of this RFC). However, that’s not a complete solution, since – at least for bool constants – the same issues arise if you consider exhaustiveness checking.

On the other hand, it feels very silly for the compiler not to understand that match some_bool { true => ..., false => ... } is exhaustive. Furthermore, there are other ways for the values of constants to “leak out”, such as when part of a type like [u8; SOME_CONSTANT] (a point made by both arielb1 and glaebhoerl on the internals thread). Therefore, the proper way to address this question is perhaps to consider an explicit form of “abstract constant”.

Summary

RFC 1158 proposed the addition of more functionality for the TcpStream, TcpListener and UdpSocket types, but was declined so that those APIs could be built up out of tree in the net2 crate. This RFC proposes pulling portions of net2’s APIs into the standard library.

Motivation

The functionality provided by the standard library’s wrappers around standard networking types is fairly limited, and there is a large set of well supported, standard functionality that is not currently implemented in std::net but has existed in net2 for some time.

All of the methods to be added map directly to equivalent system calls.

This does not cover the entirety of net2’s APIs. In particular, this RFC does not propose to touch the builder types.

Detailed design

The following methods will be added:

impl TcpStream {
    fn set_nodelay(&self, nodelay: bool) -> io::Result<()>;
    fn nodelay(&self) -> io::Result<bool>;

    fn set_ttl(&self, ttl: u32) -> io::Result<()>;
    fn ttl(&self) -> io::Result<u32>;

    fn set_only_v6(&self, only_v6: bool) -> io::Result<()>;
    fn only_v6(&self) -> io::Result<bool>;

    fn take_error(&self) -> io::Result<Option<io::Error>>;

    fn set_nonblocking(&self, nonblocking: bool) -> io::Result<()>;
}

impl TcpListener {
    fn set_ttl(&self, ttl: u32) -> io::Result<()>;
    fn ttl(&self) -> io::Result<u32>;

    fn set_only_v6(&self, only_v6: bool) -> io::Result<()>;
    fn only_v6(&self) -> io::Result<bool>;

    fn take_error(&self) -> io::Result<Option<io::Error>>;

    fn set_nonblocking(&self, nonblocking: bool) -> io::Result<()>;
}

impl UdpSocket {
    fn set_broadcast(&self, broadcast: bool) -> io::Result<()>;
    fn broadcast(&self) -> io::Result<bool>;

    fn set_multicast_loop_v4(&self, multicast_loop_v4: bool) -> io::Result<()>;
    fn multicast_loop_v4(&self) -> io::Result<bool>;

    fn set_multicast_ttl_v4(&self, multicast_ttl_v4: u32) -> io::Result<()>;
    fn multicast_ttl_v4(&self) -> io::Result<u32>;

    fn set_multicast_loop_v6(&self, multicast_loop_v6: bool) -> io::Result<()>;
    fn multicast_loop_v6(&self) -> io::Result<bool>;

    fn set_ttl(&self, ttl: u32) -> io::Result<()>;
    fn ttl(&self) -> io::Result<u32>;

    fn set_only_v6(&self, only_v6: bool) -> io::Result<()>;
    fn only_v6(&self) -> io::Result<bool>;

    fn join_multicast_v4(&self, multiaddr: &Ipv4Addr, interface: &Ipv4Addr) -> io::Result<()>;
    fn join_multicast_v6(&self, multiaddr: &Ipv6Addr, interface: u32) -> io::Result<()>;

    fn leave_multicast_v4(&self, multiaddr: &Ipv4Addr, interface: &Ipv4Addr) -> io::Result<()>;
    fn leave_multicast_v6(&self, multiaddr: &Ipv6Addr, interface: u32) -> io::Result<()>;

    fn connect<A: ToSocketAddrs>(&self, addr: A) -> io::Result<()>;
    fn send(&self, buf: &[u8]) -> io::Result<usize>;
    fn recv(&self, buf: &mut [u8]) -> io::Result<usize>;

    fn take_error(&self) -> io::Result<Option<io::Error>>;

    fn set_nonblocking(&self, nonblocking: bool) -> io::Result<()>;
}
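
For illustration, here is a minimal sketch of how a few of the proposed UdpSocket methods might be used once available (error handling reduced to the ? operator):

use std::net::UdpSocket;

fn configure() -> std::io::Result<()> {
    let sock = UdpSocket::bind("0.0.0.0:0")?;
    sock.set_broadcast(true)?;   // proposed setter
    assert!(sock.broadcast()?);  // proposed getter
    sock.set_nonblocking(true)?; // proposed mode switch
    Ok(())
}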

The traditional approach would be to add these as unstable, inherent methods. However, since inherent methods take precedence over trait methods, this would cause all code using the extension traits in net2 to start reporting stability errors. Instead, we have two options:

  1. Add this functionality as stable inherent methods. The rationale here would be that time in a nursery crate acts as a de facto stabilization period.
  2. Add this functionality via unstable extension traits. When/if we decide to stabilize, we would deprecate the trait and add stable inherent methods. Extension traits are a bit more annoying to work with, but this would give us a formal stabilization period.

Option 2 seems like the safer approach unless people feel comfortable with these APIs.
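
For concreteness, a sketch of what option 2 could look like follows. The trait name TcpStreamExt and the placeholder bodies are hypothetical, not part of this RFC:

use std::io;
use std::net::TcpStream;

// Hypothetical unstable extension trait mirroring net2's approach.
pub trait TcpStreamExt {
    fn set_nodelay(&self, nodelay: bool) -> io::Result<()>;
    fn nodelay(&self) -> io::Result<bool>;
}

impl TcpStreamExt for TcpStream {
    fn set_nodelay(&self, _nodelay: bool) -> io::Result<()> {
        unimplemented!() // would set TCP_NODELAY via setsockopt
    }
    fn nodelay(&self) -> io::Result<bool> {
        unimplemented!() // would read TCP_NODELAY via getsockopt
    }
}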

Drawbacks

This is a fairly significant increase in the surface area of these APIs, and most users will never touch some of the more obscure functionality that these provide.

Alternatives

We can leave some or all of this functionality in net2.

Unresolved questions

The stabilization path (see above).

Summary

Stabilize the volatile_load and volatile_store intrinsics as ptr::read_volatile and ptr::write_volatile.

Motivation

This is necessary to allow volatile access to memory-mapping I/O in stable code. Currently this is only possible using unstable intrinsics, or by abusing a bug in the load and store functions on atomic types which gives them volatile semantics (rust-lang/rust#30962).

Detailed design

ptr::read_volatile and ptr::write_volatile will work the same way as ptr::read and ptr::write respectively, except that the memory access will be done with volatile semantics. The semantics of a volatile access are already pretty well defined by the C standard and by LLVM. In documentation we can refer to http://llvm.org/docs/LangRef.html#volatile-memory-accesses.
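
For example, memory-mapped I/O with the proposed functions might look like the following sketch (the register pointer is illustrative; in real code it must point at a valid, mapped device register):

use std::ptr;

unsafe fn poke_device(reg: *mut u32) {
    let status = ptr::read_volatile(reg); // the load cannot be elided or merged
    ptr::write_volatile(reg, status | 1); // the store is guaranteed to be emitted
}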

Drawbacks

None.

Alternatives

We could also stabilize the volatile_set_memory, volatile_copy_memory and volatile_copy_nonoverlapping_memory intrinsics as ptr::write_bytes_volatile, ptr::copy_volatile and ptr::copy_nonoverlapping_volatile, but these are not as widely used and are not available in C.

Unresolved questions

None.

Summary

Unix domain sockets provide a commonly used form of IPC on Unix-derived systems. This RFC proposes moving the unix_socket nursery crate into the std::os::unix module.

Motivation

Unix sockets are a common form of IPC on unixy systems. Databases like PostgreSQL and Redis allow connections via Unix sockets, and Servo uses them to communicate with subprocesses. Even though Unix sockets are not present on Windows, their use is sufficiently widespread to warrant inclusion in the platform-specific sections of the standard library.

Detailed design

Unix sockets can be configured with the SOCK_STREAM, SOCK_DGRAM, and SOCK_SEQPACKET types. SOCK_STREAM creates a connection-oriented socket that behaves like a TCP socket, SOCK_DGRAM creates a packet-oriented socket that behaves like a UDP socket, and SOCK_SEQPACKET provides something of a hybrid between the other two - a connection-oriented, reliable, ordered stream of delimited packets. SOCK_SEQPACKET support has not yet been implemented in the unix_socket crate, so only the first two socket types will initially be supported in the standard library.

While a TCP or UDP socket would be identified by an IP address and port number, Unix sockets are typically identified by a filesystem path. For example, a Postgres server will listen on a Unix socket located at /run/postgresql/.s.PGSQL.5432 in some configurations. However, the socketpair function can make a pair of unnamed connected Unix sockets not associated with a filesystem path. In addition, Linux provides a separate abstract namespace not associated with the filesystem, indicated by a leading null byte in the address. In the initial implementation, the abstract namespace will not be supported - the various socket constructors will check for and reject addresses with interior null bytes.

A std::os::unix::net module will be created with the following contents:

The UnixStream type mirrors TcpStream:

pub struct UnixStream {
    ...
}

impl UnixStream {
    /// Connects to the socket named by `path`.
    ///
    /// `path` may not contain any null bytes.
    pub fn connect<P: AsRef<Path>>(path: P) -> io::Result<UnixStream> {
        ...
    }

    /// Creates an unnamed pair of connected sockets.
    ///
    /// Returns two `UnixStream`s which are connected to each other.
    pub fn pair() -> io::Result<(UnixStream, UnixStream)> {
        ...
    }

    /// Creates a new independently owned handle to the underlying socket.
    ///
    /// The returned `UnixStream` is a reference to the same stream that this
    /// object references. Both handles will read and write the same stream of
    /// data, and options set on one stream will be propagated to the other
    /// stream.
    pub fn try_clone(&self) -> io::Result<UnixStream> {
        ...
    }

    /// Returns the socket address of the local half of this connection.
    pub fn local_addr(&self) -> io::Result<SocketAddr> {
        ...
    }

    /// Returns the socket address of the remote half of this connection.
    pub fn peer_addr(&self) -> io::Result<SocketAddr> {
        ...
    }

    /// Sets the read timeout for the socket.
    ///
    /// If the provided value is `None`, then `read` calls will block
    /// indefinitely. It is an error to pass the zero `Duration` to this
    /// method.
    pub fn set_read_timeout(&self, timeout: Option<Duration>) -> io::Result<()> {
        ...
    }

    /// Sets the write timeout for the socket.
    ///
    /// If the provided value is `None`, then `write` calls will block
    /// indefinitely. It is an error to pass the zero `Duration` to this
    /// method.
    pub fn set_write_timeout(&self, timeout: Option<Duration>) -> io::Result<()> {
        ...
    }

    /// Returns the read timeout of this socket.
    pub fn read_timeout(&self) -> io::Result<Option<Duration>> {
        ...
    }

    /// Returns the write timeout of this socket.
    pub fn write_timeout(&self) -> io::Result<Option<Duration>> {
        ...
    }

    /// Moves the socket into or out of nonblocking mode.
    pub fn set_nonblocking(&self, nonblocking: bool) -> io::Result<()> {
        ...
    }

    /// Returns the value of the `SO_ERROR` option.
    pub fn take_error(&self) -> io::Result<Option<io::Error>> {
        ...
    }

    /// Shuts down the read, write, or both halves of this connection.
    ///
    /// This function will cause all pending and future I/O calls on the
    /// specified portions to immediately return with an appropriate value
    /// (see the documentation of `Shutdown`).
    pub fn shutdown(&self, how: Shutdown) -> io::Result<()> {
        ...
    }
}

impl Read for UnixStream {
    ...
}

impl<'a> Read for &'a UnixStream {
    ...
}

impl Write for UnixStream {
    ...
}

impl<'a> Write for &'a UnixStream {
    ...
}

impl FromRawFd for UnixStream {
    ...
}

impl AsRawFd for UnixStream {
    ...
}

impl IntoRawFd for UnixStream {
    ...
}

Differences from TcpStream:

  • connect takes an AsRef<Path> rather than a ToSocketAddrs.
  • The pair method creates a pair of connected, unnamed sockets, as this is commonly used for IPC.
  • The SocketAddr returned by the local_addr and peer_addr methods is different.
  • The set_nonblocking and take_error methods are not currently present on TcpStream but are provided in the net2 crate and are being proposed for addition to the standard library in a separate RFC.
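
To give a feel for the proposed API, a minimal client sketch follows (the socket path is purely illustrative):

use std::io::prelude::*;
use std::os::unix::net::UnixStream;

fn client() -> std::io::Result<()> {
    let mut stream = UnixStream::connect("/run/example.sock")?;
    stream.write_all(b"ping")?;
    let mut buf = [0u8; 4];
    stream.read_exact(&mut buf)?;
    Ok(())
}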

As noted above, a Unix socket can either be unnamed, be associated with a path on the filesystem, or (on Linux) be associated with an ID in the abstract namespace. The SocketAddr struct is fairly simple:

pub struct SocketAddr {
    ...
}

impl SocketAddr {
    /// Returns true if the address is unnamed.
    pub fn is_unnamed(&self) -> bool {
        ...
    }

    /// Returns the contents of this address if it corresponds to a filesystem path.
    pub fn as_pathname(&self) -> Option<&Path> {
        ...
    }
}

The UnixListener type mirrors the TcpListener type:

pub struct UnixListener {
    ...
}

impl UnixListener {
    /// Creates a new `UnixListener` bound to the specified socket.
    ///
    /// `path` may not contain any null bytes.
    pub fn bind<P: AsRef<Path>>(path: P) -> io::Result<UnixListener> {
        ...
    }

    /// Accepts a new incoming connection to this listener.
    ///
    /// This function will block the calling thread until a new Unix connection
    /// is established. When established, the corresponding `UnixStream` and
    /// the remote peer's address will be returned.
    pub fn accept(&self) -> io::Result<(UnixStream, SocketAddr)> {
        ...
    }

    /// Creates a new independently owned handle to the underlying socket.
    ///
    /// The returned `UnixListener` is a reference to the same socket that this
    /// object references. Both handles can be used to accept incoming
    /// connections and options set on one listener will affect the other.
    pub fn try_clone(&self) -> io::Result<UnixListener> {
        ...
    }

    /// Returns the local socket address of this listener.
    pub fn local_addr(&self) -> io::Result<SocketAddr> {
        ...
    }

    /// Moves the socket into or out of nonblocking mode.
    pub fn set_nonblocking(&self, nonblocking: bool) -> io::Result<()> {
        ...
    }

    /// Returns the value of the `SO_ERROR` option.
    pub fn take_error(&self) -> io::Result<Option<io::Error>> {
        ...
    }

    /// Returns an iterator over incoming connections.
    ///
    /// The iterator will never return `None` and will also not yield the
    /// peer's `SocketAddr` structure.
    pub fn incoming<'a>(&'a self) -> Incoming<'a> {
        ...
    }
}

impl FromRawFd for UnixListener {
    ...
}

impl AsRawFd for UnixListener {
    ...
}

impl IntoRawFd for UnixListener {
    ...
}

Differences from TcpListener:

  • bind takes an AsRef<Path> rather than a ToSocketAddrs.
  • The SocketAddr type is different.
  • The set_nonblocking and take_error methods are not currently present on TcpListener but are provided in the net2 crate and are being proposed for addition to the standard library in a separate RFC.
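
A corresponding server sketch (again with an illustrative path):

use std::os::unix::net::UnixListener;

fn server() -> std::io::Result<()> {
    let listener = UnixListener::bind("/run/example.sock")?;
    for stream in listener.incoming() {
        let _stream = stream?;
        // handle the connection, e.g. on another thread
    }
    Ok(())
}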

Finally, the UnixDatagram type mirrors the UdpSocket type:

pub struct UnixDatagram {
    ...
}

impl UnixDatagram {
    /// Creates a Unix datagram socket bound to the given path.
    ///
    /// `path` may not contain any null bytes.
    pub fn bind<P: AsRef<Path>>(path: P) -> io::Result<UnixDatagram> {
        ...
    }

    /// Creates a Unix Datagram socket which is not bound to any address.
    pub fn unbound() -> io::Result<UnixDatagram> {
        ...
    }

    /// Creates an unnamed pair of connected sockets.
    ///
    /// Returns two `UnixDatagram`s which are connected to each other.
    pub fn pair() -> io::Result<(UnixDatagram, UnixDatagram)> {
        ...
    }

    /// Creates a new independently owned handle to the underlying socket.
    ///
    /// The returned `UnixDatagram` is a reference to the same socket that this
    /// object references. Both handles can be used to send and receive
    /// datagrams, and options set on one socket will be propagated to the
    /// other.
    pub fn try_clone(&self) -> io::Result<UnixDatagram> {
        ...
    }

    /// Connects the socket to the specified address.
    ///
    /// The `send` method may be used to send data to the specified address.
    /// `recv` and `recv_from` will only receive data from that address.
    ///
    /// `path` may not contain any null bytes.
    pub fn connect<P: AsRef<Path>>(&self, path: P) -> io::Result<()> {
        ...
    }

    /// Returns the address of this socket.
    pub fn local_addr(&self) -> io::Result<SocketAddr> {
        ...
    }

    /// Returns the address of this socket's peer.
    ///
    /// The `connect` method will connect the socket to a peer.
    pub fn peer_addr(&self) -> io::Result<SocketAddr> {
        ...
    }

    /// Receives data from the socket.
    ///
    /// On success, returns the number of bytes read and the address from
    /// whence the data came.
    pub fn recv_from(&self, buf: &mut [u8]) -> io::Result<(usize, SocketAddr)> {
        ...
    }

    /// Receives data from the socket.
    ///
    /// On success, returns the number of bytes read.
    pub fn recv(&self, buf: &mut [u8]) -> io::Result<usize> {
        ...
    }

    /// Sends data on the socket to the specified address.
    ///
    /// On success, returns the number of bytes written.
    ///
    /// `path` may not contain any null bytes.
    pub fn send_to<P: AsRef<Path>>(&self, buf: &[u8], path: P) -> io::Result<usize> {
        ...
    }

    /// Sends data on the socket to the socket's peer.
    ///
    /// The peer address may be set by the `connect` method, and this method
    /// will return an error if the socket has not already been connected.
    ///
    /// On success, returns the number of bytes written.
    pub fn send(&self, buf: &[u8]) -> io::Result<usize> {
        ...
    }

    /// Sets the read timeout for the socket.
    ///
    /// If the provided value is `None`, then `recv` and `recv_from` calls will
    /// block indefinitely. It is an error to pass the zero `Duration` to this
    /// method.
    pub fn set_read_timeout(&self, timeout: Option<Duration>) -> io::Result<()> {
        ...
    }

    /// Sets the write timeout for the socket.
    ///
    /// If the provided value is `None`, then `send` and `send_to` calls will
    /// block indefinitely. It is an error to pass the zero `Duration` to this
    /// method.
    pub fn set_write_timeout(&self, timeout: Option<Duration>) -> io::Result<()> {
        ...
    }

    /// Returns the read timeout of this socket.
    pub fn read_timeout(&self) -> io::Result<Option<Duration>> {
        ...
    }

    /// Returns the write timeout of this socket.
    pub fn write_timeout(&self) -> io::Result<Option<Duration>> {
        ...
    }

    /// Moves the socket into or out of nonblocking mode.
    pub fn set_nonblocking(&self, nonblocking: bool) -> io::Result<()> {
        ...
    }

    /// Returns the value of the `SO_ERROR` option.
    pub fn take_error(&self) -> io::Result<Option<io::Error>> {
        ...
    }

    /// Shuts down the read, write, or both halves of this connection.
    ///
    /// This function will cause all pending and future I/O calls on the
    /// specified portions to immediately return with an appropriate value
    /// (see the documentation of `Shutdown`).
    pub fn shutdown(&self, how: Shutdown) -> io::Result<()> {
        ...
    }
}

impl FromRawFd for UnixDatagram {
    ...
}

impl AsRawFd for UnixDatagram {
    ...
}

impl IntoRawFd for UnixDatagram {
    ...
}

Differences from UdpSocket:

  • bind takes an AsRef<Path> rather than a ToSocketAddrs.
  • The unbound method creates an unbound socket, as a Unix socket does not need to be bound to send messages.
  • The pair method creates a pair of connected, unnamed sockets, as this is commonly used for IPC.
  • The SocketAddr returned by the local_addr and peer_addr methods is different.
  • The connect, send, recv, set_nonblocking, and take_error methods are not currently present on UdpSocket but are provided in the net2 crate and are being proposed for addition to the standard library in a separate RFC.
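
The pair constructor in particular enables simple parent/child IPC; a small sketch:

use std::os::unix::net::UnixDatagram;

fn ipc() -> std::io::Result<()> {
    let (tx, rx) = UnixDatagram::pair()?; // unnamed, already connected
    tx.send(b"hello")?;
    let mut buf = [0u8; 16];
    let n = rx.recv(&mut buf)?;
    assert_eq!(&buf[..n], b"hello");
    Ok(())
}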

Functionality not present

Some functionality is notably absent from this proposal:

  • Linux’s abstract namespace is not supported. Functionality may be added in the future via extension traits in std::os::linux::net.
  • No support for SOCK_SEQPACKET sockets is proposed, as it has not yet been implemented. Since it is connection-oriented, it would need both a socket type UnixSeqPacket and a listener type UnixSeqListener. The naming of the listener is a bit unfortunate, but use of SOCK_SEQPACKET is rare compared to SOCK_STREAM so naming priority can go to that version.
  • Unix sockets support file descriptor and credential transfer, but these will not initially be supported as the sendmsg/recvmsg interface is complex and bindings will need some time to prototype.

These features can bake in the rust-lang-nursery/unix-socket crate as they’re developed.

Drawbacks

While there is precedent for platform-specific components in the standard library, this will be by far the largest platform-specific addition.

Alternatives

Unix socket support could be left out of tree.

The naming convention of UnixStream and UnixDatagram doesn’t perfectly mirror TcpStream and UdpSocket, but the pair UnixStream and UnixSocket would be far too easy to confuse.

Unresolved questions

Is std::os::unix::net the right name for this module? It’s not strictly “networking” as all communication is local to one machine. std::os::unix::unix is more accurate but weirdly repetitive and the extension trait module std::os::linux::unix is even weirder. std::os::unix::socket is an option, but seems like too general of a name for specifically AF_UNIX sockets as opposed to all sockets.

Summary

Permit the .. pattern fragment in more contexts.

Motivation

The pattern fragment .. can be used in some patterns to denote several elements in list contexts. However, it doesn’t always compile when used in such contexts. One can expect the ability to match tuple variants like V(u8, u8, u8) with patterns like V(x, ..) or V(.., z), but the compiler currently rejects such patterns despite accepting the very similar V(..).

This RFC is intended to “complete” the feature and make it work in all possible list contexts, making the language a bit more convenient and consistent.
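
Concretely, here are examples of patterns this RFC makes valid (the type V and the bindings are illustrative):

#[derive(Clone, Copy)]
struct V(u8, u8, u8);

fn demo(v: V) -> (u8, u8) {
    let V(x, ..) = v;             // first field; previously rejected
    let V(.., z) = v;             // last field; previously rejected
    let (_a, .., _c) = (1, 2, 3); // plain tuple patterns gain the same flexibility
    (x, z)
}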

Detailed design

Let’s list all the patterns currently existing in the language, that contain lists of subpatterns:

// Struct patterns.
S { field1, field2, ..., fieldN }

// Tuple struct patterns.
S(field1, field2, ..., fieldN)

// Tuple patterns.
(field1, field2, ..., fieldN)

// Slice patterns.
[elem1, elem2, ..., elemN]

In all the patterns above, except for struct patterns, field/element positions are significant.

Now list all the contexts that currently permit the .. pattern fragment:

// Struct patterns, the last position.
S { subpat1, subpat2, .. }

// Tuple struct patterns, the last and the only position, no extra subpatterns allowed.
S(..)

// Slice patterns, the last position.
[subpat1, subpat2, ..]
// Slice patterns, the first position.
[.., subpatN-1, subpatN]
// Slice patterns, any other position.
[subpat1, .., subpatN]
// Slice patterns, any of the above with a subslice binding.
// (The binding is not actually a binding, but one more pattern bound to the sublist, but this is
// not important for our discussion.)
[subpat1, binding.., subpatN]

Something is obviously missing; let’s fill in the missing parts.

// Struct patterns, the last position.
S { subpat1, subpat2, .. }
// **NOT PROPOSED**: Struct patterns, any position.
// Since named struct fields are not positional, there's essentially no sense in placing the `..`
// anywhere except for one conventionally chosen position (the last one) or in sublist bindings,
// so we don't propose extensions to struct patterns.
S { subpat1, .., subpatN }
// **NOT PROPOSED**: Struct patterns with bindings
S { subpat1, binding.., subpatN }

// Tuple struct patterns, the last and the only position, no extra subpatterns allowed.
S(..)
// **NEW**: Tuple struct patterns, any position.
S(subpat1, subpat2, ..)
S(.., subpatN-1, subpatN)
S(subpat1, .., subpatN)
// **NOT PROPOSED**: Tuple struct patterns with bindings
S(subpat1, binding.., subpatN)

// **NEW**: Tuple patterns, any position.
(subpat1, subpat2, ..)
(.., subpatN-1, subpatN)
(subpat1, .., subpatN)
// **NOT PROPOSED**: Tuple patterns with bindings
(subpat1, binding.., subpatN)

Slice patterns are not covered in this RFC, but here is the syntax for reference:

// Slice patterns, the last position.
[subpat1, subpat2, ..]
// Slice patterns, the first position.
[.., subpatN-1, subpatN]
// Slice patterns, any other position.
[subpat1, .., subpatN]
// Slice patterns, any of the above with a subslice binding.
// By ref bindings are allowed, slices and subslices always have compatible layouts.
[subpat1, binding.., subpatN]

A trailing comma is not allowed after .. in the last position, by analogy with existing slice and struct patterns.

This RFC is not critically important and can be rolled out in parts, for example, bare .. first, .. with a sublist binding eventually.

Drawbacks

None.

Alternatives

Do not permit sublist bindings in tuples and tuple structs at all.

Unresolved questions

Sublist binding syntax conflicts with possible exclusive range patterns begin .. end/begin../..end. This problem already exists for slice patterns and has to be solved independently from extensions to ... This RFC simply selects the same syntax that slice patterns already have.

Summary

Add constructor and conversion functions for std::net::Ipv6Addr and std::net::Ipv4Addr that are oriented around arrays of octets.

Motivation

Currently, the interface for std::net::Ipv6Addr is oriented around 16-bit “segments”. The constructor takes eight 16-bit integers as arguments, and the sole getter function, segments, returns an array of eight 16-bit integers. This interface is unnatural when doing low-level network programming, where IPv6 addresses are treated as a sequence of 16 octets. For example, building and parsing IPv6 packets requires doing bitwise arithmetic with careful attention to byte order in order to convert between the on-wire format of 16 octets and the eight segments format used by std::net::Ipv6Addr.

Detailed design

The following method would be added to impl std::net::Ipv6Addr:

pub fn octets(&self) -> [u8; 16] {
	self.inner.s6_addr
}

The following From trait would be implemented:

impl From<[u8; 16]> for Ipv6Addr {
	fn from(octets: [u8; 16]) -> Ipv6Addr {
		let mut addr: c::in6_addr = unsafe { std::mem::zeroed() };
		addr.s6_addr = octets;
		Ipv6Addr { inner: addr }
	}
}

For consistency, the following From trait would be implemented for Ipv4Addr:

impl From<[u8; 4]> for Ipv4Addr {
	fn from(octets: [u8; 4]) -> Ipv4Addr {
		Ipv4Addr::new(octets[0], octets[1], octets[2], octets[3])
	}
}

Note: Ipv4Addr already has an octets method that returns a [u8; 4].
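
Putting the pieces together, the proposed conversions would be used like this (a small sketch):

use std::net::{Ipv4Addr, Ipv6Addr};

fn demo() {
    let v4 = Ipv4Addr::from([127, 0, 0, 1]);
    assert_eq!(v4.octets(), [127, 0, 0, 1]);

    // 15 zero octets followed by a 1: the IPv6 loopback address.
    let v6 = Ipv6Addr::from([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]);
    assert_eq!(v6, Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 1));
}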

Drawbacks

It adds additional functions to the API, which increases cognitive load and maintenance burden. That said, the functions are conceptually very simple and their implementations short.

Alternatives

Do nothing. The downside is that developers will need to resort to awkward and error-prone bitwise arithmetic (particularly with respect to byte ordering) to convert between Ipv6Addr and the on-wire representation of IPv6 addresses. Or they will write their own alternative implementations of Ipv6Addr, fragmenting the ecosystem.

Unresolved questions

Summary

This RFC adds the i128 and u128 primitive types to Rust.

Motivation

Some algorithms need to work with very large numbers that don’t fit in 64 bits, such as certain cryptographic algorithms. One possibility would be to use a BigNum library, but these use heap allocation and tend to have high overhead. LLVM has support for very efficient 128-bit integers, which are exposed by Clang in C as the __int128 type.
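
As a small illustration of the kind of code this enables, a full 64x64->128-bit multiply becomes a one-liner instead of a multi-step schoolbook routine:

// Returns the (high, low) halves of the 128-bit product of a and b.
fn mul_wide(a: u64, b: u64) -> (u64, u64) {
    let full = (a as u128) * (b as u128);
    ((full >> 64) as u64, full as u64)
}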

Detailed design

Compiler support

The first step for implementing this feature is to add support for the i128/u128 primitive types to the compiler. This will require changes to many parts of the compiler, from libsyntax to trans.

The compiler will need to be bootstrapped from an older compiler which does not support i128/u128, but rustc will want to use these types internally for things like literal parsing and constant propagation. This can be solved by using a “software” implementation of these types, similar to the one in the extprim crate. Once stage1 is built, stage2 can be compiled using the native LLVM i128/u128 types.

Runtime library support

The LLVM code generator supports 128-bit integers on all architectures, however it will lower some operations to runtime library calls. This is similar to how we currently handle u64 and i64 on 32-bit platforms: “complex” operations such as multiplication or division are lowered by LLVM backends into calls to functions in the compiler-rt runtime library.

Here is a rough breakdown of which operations are handled natively instead of through a library call:

  • Add/Sub/Neg: native, including checked overflow variants
  • Compare (eq/ne/gt/ge/lt/le): native
  • Bitwise and/or/xor/not: native
  • Shift left/right: native on most architectures (some use libcalls instead)
  • Bit counting, parity, leading/trailing ones/zeroes: native
  • Byte swapping: native
  • Mul/Div/Mod: libcall (including checked overflow multiplication)
  • Conversion to/from f32/f64: libcall

The compiler-rt library that comes with LLVM only implements runtime library functions for 128-bit integers on 64-bit platforms (#ifdef __LP64__). We will need to provide our own implementations of the relevant functions to allow i128/u128 to be available on all architectures. Note that this can only be done with a compiler that already supports i128/u128 to match the calling convention that LLVM is expecting.

Here is the list of functions that need to be implemented:

fn __ashlti3(a: i128, b: i32) -> i128;
fn __ashrti3(a: i128, b: i32) -> i128;
fn __divti3(a: i128, b: i128) -> i128;
fn __fixdfti(a: f64) -> i128;
fn __fixsfti(a: f32) -> i128;
fn __fixunsdfti(a: f64) -> u128;
fn __fixunssfti(a: f32) -> u128;
fn __floattidf(a: i128) -> f64;
fn __floattisf(a: i128) -> f32;
fn __floatuntidf(a: u128) -> f64;
fn __floatuntisf(a: u128) -> f32;
fn __lshrti3(a: i128, b: i32) -> i128;
fn __modti3(a: i128, b: i128) -> i128;
fn __muloti4(a: i128, b: i128, overflow: &mut i32) -> i128;
fn __multi3(a: i128, b: i128) -> i128;
fn __udivti3(a: u128, b: u128) -> u128;
fn __umodti3(a: u128, b: u128) -> u128;

Implementations of these functions will be written in Rust and will be included in libcore. Note that it is not possible to write these functions in C or use the existing implementations in compiler-rt since the __int128 type is not available in C on 32-bit platforms.

Modifications to libcore

Several changes need to be done to libcore:

  • src/libcore/num/i128.rs: Define MIN and MAX.
  • src/libcore/num/u128.rs: Define MIN and MAX.
  • src/libcore/num/mod.rs: Implement inherent methods, Zero, One, From and FromStr for u128 and i128.
  • src/libcore/num/wrapping.rs: Implement methods for Wrapping<u128> and Wrapping<i128>.
  • src/libcore/fmt/num.rs: Implement Binary, Octal, LowerHex, UpperHex, Debug and Display for u128 and i128.
  • src/libcore/cmp.rs: Implement Eq, PartialEq, Ord and PartialOrd for u128 and i128.
  • src/libcore/nonzero.rs: Implement Zeroable for u128 and i128.
  • src/libcore/iter.rs: Implement Step for u128 and i128.
  • src/libcore/clone.rs: Implement Clone for u128 and i128.
  • src/libcore/default.rs: Implement Default for u128 and i128.
  • src/libcore/hash/mod.rs: Implement Hash for u128 and i128 and add write_i128 and write_u128 to Hasher.
  • src/libcore/lib.rs: Add the u128 and i128 modules.

Modifications to libstd

A few minor changes are required in libstd:

  • src/libstd/lib.rs: Re-export core::{i128, u128}.
  • src/libstd/primitive_docs.rs: Add documentation for i128 and u128.

Modifications to other crates

A few external crates will need to be updated to support the new types:

  • rustc-serialize: Add the ability to serialize i128 and u128.
  • serde: Add the ability to serialize i128 and u128.
  • rand: Add the ability to generate random i128s and u128s.

Drawbacks

One possible issue is that a u128 can hold a very large number that doesn’t fit in an f32. We need to make sure this doesn’t lead to any undefs from LLVM. See this comment, and this example code.

Alternatives

There have been several attempts to create u128/i128 wrappers based on two u64 values, but these can’t match the performance of LLVM’s native 128-bit integers. For example, LLVM is able to lower a 128-bit add into just 2 instructions on 64-bit platforms and 4 instructions on 32-bit platforms.

Unresolved questions

None

Summary

Provide a simple model describing three kinds of structs and variants and their relationships.
Provide a way to match on structs/variants in patterns regardless of their kind (S{..}).
Permit tuple structs and tuple variants with zero fields (TS()).

Motivation

There’s a mental model underlying the current implementation of ADTs, but it is not written out explicitly and not implemented completely consistently. Writing this model out helps to identify its missing parts. Some of these missing parts turn out to be practically useful. This RFC can also serve as a piece of documentation.

Detailed design

The text below mostly talks about structures, but almost everything is equally applicable to variants.

Braced structs

Braced structs are declared with braces (unsurprisingly).

struct S {
    field1: Type1,
    field2: Type2,
    field3: Type3,
}

Braced structs are the basic struct kind, other kinds are built on top of them. Braced structs have 0 or more user-named fields and are defined only in type namespace.

Braced structs can be used in struct expressions S{field1: expr, field2: expr}, including functional record update (FRU) S{field1: expr, ..s}/S{..s} and with struct patterns S{field1: pat, field2: pat}/S{field1: pat, ..}/S{..}. In all cases the path S of the expression or pattern is looked up in the type namespace (so these expressions/patterns can be used with type aliases). Fields of a braced struct can be accessed with dot syntax s.field1.

Note: struct variants are currently defined in the value namespace in addition to type namespace, there are no particular reasons for this and this is probably temporary.

Unit structs

Unit structs are defined without any fields or brackets.

struct US;

Unit structs can be thought of as a single declaration for two things: a basic struct

struct US {}

and a constant with the same name (see Note 1):

const US: US = US{};

Unit structs have 0 fields and are defined in both type (the type US) and value (the constant US) namespaces.

As a basic struct, a unit struct can participate in struct expressions US{}, including FRU US{..s} and in struct patterns US{}/US{..}. In both cases the path US of the expression or pattern is looked up in the type namespace (so these expressions/patterns can be used with type aliases). Fields of a unit struct could also be accessed with dot syntax, but it doesn’t have any fields.

As a constant, a unit struct can participate in unit struct expressions US and unit struct patterns US, both of these are looked up in the value namespace in which the constant US is defined (so these expressions/patterns cannot be used with type aliases).

Note 1: the constant is not exactly a const item, there are subtle differences (e.g. with regards to match exhaustiveness), but it’s a close approximation.
Note 2: the constant is pretty weirdly namespaced in case of unit variants, constants can’t be defined in “enum modules” manually.
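
To illustrate the dual nature of a unit struct described above (US and Alias are illustrative names):

struct US;
type Alias = US;

fn demo() {
    let a = US;        // unit struct expression: the value-namespace constant
    let b = Alias {};  // braced struct expression: type namespace, so aliases work
    match a {
        US => {}       // unit struct pattern (value namespace)
    }
    match b {
        US {} => {}    // braced struct pattern (type namespace)
    }
    // let c = Alias;  // error: `Alias` only names the type, not the constant
}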

Tuple structs

Tuple structs are declared with parentheses.

struct TS(Type0, Type1, Type2);

Tuple structs can be thought of as a single declaration for two things: a basic struct

struct TS {
    0: Type0,
    1: Type1,
    2: Type2,
}

and a constructor function with the same name (see Note 2):

fn TS(arg0: Type0, arg1: Type1, arg2: Type2) -> TS {
    TS{0: arg0, 1: arg1, 2: arg2}
}

Tuple structs have 0 or more automatically-named fields and are defined in both type (the type TS) and the value (the constructor function TS) namespaces.

As a basic struct, a tuple struct can participate in struct expressions TS{0: expr, 1: expr}, including FRU TS{0: expr, ..ts}/TS{..ts} and in struct patterns TS{0: pat, 1: pat}/TS{0: pat, ..}/TS{..}. In both cases the path TS of the expression or pattern is looked up in the type namespace (so these expressions/patterns can be used with type aliases). Fields of a tuple struct can be accessed with dot syntax ts.0.

As a constructor, a tuple struct can participate in tuple struct expressions TS(expr, expr) and tuple struct patterns TS(pat, pat)/TS(..), both of these are looked up in the value namespace in which the constructor TS is defined (so these expressions/patterns cannot be used with type aliases). Tuple struct expressions TS(expr, expr) are ordinary function calls, but the compiler reserves the right to make observable improvements to them based on the additional knowledge that TS is a constructor.

Note 1: the automatically assigned field names are quite interesting, they are not identifiers lexically (they are integer literals), so such fields can’t be defined manually.
Note 2: the constructor function is not exactly a fn item, there are subtle differences (e.g. with regards to privacy checks), but it’s a close approximation.

Summary of the changes

Everything related to braced structs and unit structs is already implemented.

New: Permit tuple structs and tuple variants with 0 fields. This restriction is artificial and can be lifted trivially. Macro writers dealing with tuple structs/variants will be happy to get rid of this one special case.

New: Permit using tuple structs and tuple variants in braced struct patterns and expressions not requiring naming their fields - TS{..ts}/TS{}/TS{..}. This doesn’t require much effort to implement as well.
This also means that S{..} patterns can be used to match structures and variants of any kind. The desire to have such “match everything” patterns is sometimes expressed, given that the number of fields in structures and variants can change from zero to non-zero and back during development.
An extra benefit is the ability to match/construct tuple structs using their type aliases.

New: Permit using tuple structs and tuple variants in braced struct patterns and expressions requiring naming their fields - TS{0: expr}/TS{0: pat}/etc. While this change is important for consistency, there’s not much motivation for it in hand-written code besides shortening patterns like ItemFn(_, _, unsafety, _, _, _) into something like ItemFn{2: unsafety, ..} and the ability to match/construct tuple structs using their type aliases.
However, automatic code generators (e.g. syntax extensions) can get more benefits from the ability to generate uniform code for all structure kinds.
#[derive], for example, currently has separate code paths for generating expressions and patterns for braced structs (ExprStruct/PatKind::Struct), tuple structs (ExprCall/PatKind::TupleStruct) and unit structs (ExprPath/PatKind::Path). With the proposed changes #[derive] could simplify its logic and always generate braced forms for expressions and patterns.
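
The newly permitted forms, side by side (TS and Z are illustrative names):

#[derive(Clone, Copy)]
struct TS(u8, u16);
struct Z();                       // zero-field tuple struct (newly permitted)

fn demo() -> u8 {
    let ts = TS { 0: 1, 1: 2 };   // braced expression with numbered fields
    let TS { .. } = ts;           // "match everything" pattern, any struct kind
    let TS { 0: first, .. } = ts; // braced pattern naming a positional field
    let _z = Z();                 // zero-field constructor call
    first
}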

Drawbacks

None.

Alternatives

None.

Unresolved questions

None.

Summary

Add a new crate type accepted by the compiler, called cdylib, which corresponds to exporting a C interface from a Rust dynamic library.

Motivation

Currently the compiler supports two modes of generating dynamic libraries:

  1. One form of dynamic library is intended for reuse with further compilations. This kind of library exposes all Rust symbols, links to the standard library dynamically, etc. I’ll refer to this mode as rdylib as it’s a Rust dynamic library talking to Rust.
  2. Another form of dynamic library is intended for embedding a Rust application into another. Currently the only difference from the previous kind of dynamic library is that it favors linking statically to other Rust libraries (bundling them inside). I’ll refer to this as a cdylib as it’s a Rust dynamic library exporting a C API.

Each of these flavors of dynamic libraries has a distinct use case. For example, rdylibs are used by the compiler itself to implement plugins, and cdylibs are used whenever Rust needs to be dynamically loaded from another language or application.

Unfortunately the balance of features is tilted a little bit too much towards the smallest use case, rdylibs. In practice because Rust is statically linked by default and has an unstable ABI, rdylibs are used quite rarely. There are a number of requirements they impose, however, which aren’t necessary for cdylibs:

  • Metadata is included in all dynamic libraries. If you’re just loading Rust into somewhere else, however, you have no need for the metadata!
  • Reachable symbols are exposed from dynamic libraries, but if you’re loading Rust into somewhere else then, like executables, only public non-Rust-ABI functions need to be exported. This can lead to unnecessarily large Rust dynamic libraries in terms of object size as well as missed optimization opportunities from knowing that a function is otherwise private.
  • We can’t run LTO when producing a dylib, because LTO is intended for end products, not intermediate ones like the rdylibs of (1).

The purpose of this RFC is to solve these drawbacks with a new crate type dedicated to the cdylib use case, leaving today’s dylib to represent the more rarely used rdylibs.

Detailed design

A new crate type will be accepted by the compiler, cdylib, which can be passed as either --crate-type cdylib on the command line or via #![crate_type = "cdylib"] in crate attributes. This crate type will conceptually correspond to the cdylib use case described above, and today’s dylib crate-type will continue to correspond to the rdylib use case above. Note that the literal output artifacts of these two crate types (files, file names, etc) will be the same.

The two formats will differ in the parts listed in the motivation above, specifically:

  • Metadata - rdylibs will have a section of the library with metadata, whereas cdylibs will not.
  • Symbol visibility - rdylibs will expose all symbols as rlibs do, cdylibs will expose symbols as executables do. This means that pub fn foo() {} will not be an exported symbol, but #[no_mangle] pub extern fn foo() {} will be an exported symbol. Note that the compiler will also be at liberty to pass extra flags to the linker to actively hide exported Rust symbols from linked libraries.
  • LTO - this will remain disallowed for rdylibs, but will be allowed for cdylibs.
  • Linkage - rdylibs will link dynamically to one another by default, for example the standard library will be linked dynamically by default. On the other hand, cdylibs will link all Rust dependencies statically by default.
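
As a concrete illustration of the symbol-visibility point, a minimal cdylib crate might look like this sketch:

// lib.rs, built with `rustc --crate-type cdylib lib.rs`
#![crate_type = "cdylib"]

#[no_mangle]
pub extern "C" fn add(a: u32, b: u32) -> u32 {
    a + b // exported: callable from C or via dlopen
}

pub fn helper() {} // not exported: plain `pub fn`s stay internal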

Drawbacks

Rust’s ephemeral and ill-defined “linkage model” is… well… ill defined and ephemeral. This RFC is an extension of this model, but it’s difficult to reason about extending that which is not well defined. As a result there could be unforeseen interactions between this output format and where it’s used.

Alternatives

  • Originally this RFC proposed adding a new crate type, rdylib, instead of adding a new crate type, cdylib. The existing dylib output type would be reinterpreted as a cdylib use-case. This is unfortunately, however, a breaking change and requires a somewhat complicated transition plan in Cargo for plugins. In the end it didn’t seem worth it for the benefit of “cdylib is probably what you want”.

Unresolved questions

  • Should the existing dylib format be considered unstable? (should it require a nightly compiler?). The use case for a Rust dynamic library is so limited, and so volatile, we may want to just gate access to it by default.

Summary

Stabilize implementing panics as aborts.

  • Stabilize the -Z no-landing-pads flag under the name -C panic=strategy
  • Implement a number of unstable features akin to custom allocators to swap out implementations of panic just before a final product is generated.
  • Add a [profile.dev] option to Cargo to configure how panics are implemented.

Motivation

Panics in Rust have long been implemented with the intention of being caught at particular boundaries (for example the thread boundary). This is quite useful for isolating failures in Rust code, for example:

  • Servers can avoid taking down the entire process but can instead just take down one request.
  • Embedded Rust libraries can avoid taking down the entire process and can instead gracefully inform the caller that an internal logic error occurred.
  • Rust applications can isolate failure from various components. The classic example is that Servo can display a “red X” for an image which fails to decode, instead of aborting the entire browser or killing an entire page.

While these are examples where a recoverable panic is useful, there are many applications where recovering panics is undesirable or doesn’t lead to anything productive:

  • Rust applications which use Result for error handling typically use panic! to indicate a fatal error, in which case the process should be taken down.
  • Many applications simply can’t recover from an internal assertion failure, so there’s no point in trying to recover from it.
  • To implement a recoverable panic, the compiler and standard library use a method called stack unwinding. The compiler must generate code to support this unwinding, however, and this takes time in codegen and optimizers.
  • Low-level applications typically don’t use unwinding at all as there’s no stack unwinder (e.g. kernels).

Note: as an idea of the compile-time and object-size savings from disabling the extra codegen, compiling Cargo as a library is 11% faster (18s down to 16s) and 13% smaller (15MB down to 13MB). Sizable gains!

Overall, the ability to recover panics is something that needs to be decided at the application level rather than at the language level. Currently the compiler does not support the ability to translate panics to process aborts in a stable fashion, and the purpose of this RFC is to add such an avenue.

With a codegen option as important as whether or not exceptions can be caught, however, it’s easy to get into a situation where libraries of mixed compilation modes are linked together, causing odd or unknown errors. This RFC proposes a design similar to that of custom allocators to alleviate this situation.

Detailed design

The major goal of this RFC is to develop a work flow around managing crates which wish to disable unwinding. This intends to set forth a complete vision for how these crates interact with the ecosystem at large. Much of this design will be similar to the custom allocator RFC.

High level design

This section serves as a high-level tour through the design proposed in this RFC. The linked sections provide more complete explanation as to what each step entails.

  • The compiler will have a new stable flag, -C panic which will configure how unwinding-related code is generated.
  • Two new unstable attributes will be added to the compiler, #![needs_panic_runtime] and #![panic_runtime]. The standard library will need a runtime and will be lazily linked to a crate which has #![panic_runtime].
  • Two unstable crates tagged with #![panic_runtime] will be distributed as the runtime implementation of panicking, panic_abort and panic_unwind crates. The former will translate all panics to process aborts, whereas the latter will be implemented as unwinding is today, via the system stack unwinder.
  • Cargo will gain a new panic option in the [profile.foo] sections to indicate how that profile should compile panic support.

New Compiler Flags

The first component to this design is to have a stable flag to the compiler which configures how panic-related code is generated. This will be stabilized in the form:

$ rustc -C help

Available codegen options:

    ...
    -C             panic=val -- strategy to compile in for panic related code
    ...

There will currently be two supported strategies:

  • unwind - this is what the compiler implements by default today via the invoke LLVM instruction.
  • abort - this will implement what -Z no-landing-pads does today, which is to disable the invoke instruction and use call instead everywhere.

This codegen option will default to unwind if not specified (what happens today), and the value will be encoded into the crate metadata. This option is planned with extensibility in mind to future panic strategies if we ever implement some (return-based unwinding is at least one other possible option).

Panic Attributes

Very similarly to custom allocators, two new unstable crate attributes will be added to the compiler:

  • #![needs_panic_runtime] - indicates that this crate requires a “panic runtime” to link correctly. This will be attached to the standard library and is not intended to be attached to any other crate.
  • #![panic_runtime] - indicates that this crate is a runtime implementation of panics.

As with allocators, there are a number of limitations imposed by these attributes by the compiler:

  • Any crate DAG can only contain at most one instance of #![panic_runtime].
  • Implicit dependency edges are drawn from crates tagged with #![needs_panic_runtime] to those tagged with #![panic_runtime]. Loops as usual are forbidden (e.g. a panic runtime can’t depend on libstd).
  • Complete artifacts which include a crate tagged with #![needs_panic_runtime] must include a panic runtime. This includes executables, dylibs, and staticlibs. If no panic runtime is explicitly linked, then the compiler will select an appropriate runtime to inject.
  • Finally, the compiler will ensure that panic runtimes and compilation modes are not mismatched. For a final product (outputs that aren’t rlibs) the -C panic mode of the panic runtime must match the final product itself. If the panic mode is abort, then no other validation is performed, but otherwise all crates in the DAG must have the same value of -C panic.

The purpose of these limitations is to solve a number of problems that arise when switching panic strategies. For example, with aborting panics, crates won’t have to link to the runtime support for unwinding, and rustc will prevent panic strategies from being mixed by accident.

The actual API of panic runtimes will not be detailed in this RFC. These new attributes will be unstable, and consequently the API itself will also be unstable. It suffices to say, however, that like custom allocators a panic runtime will implement some public extern symbols known to the crates that need a panic runtime, and that’s how they’ll communicate/link up.

Panic Crates

Two new unstable crates will be added to the distribution for each target:

  • panic_unwind - this is an extraction of the current implementation of panicking from the standard library. It will use the same mechanism of stack unwinding as is implemented on all current platforms.
  • panic_abort - this is a new implementation of panicking which will simply translate panics to process aborts. There will be no runtime support required by this crate.

The compiler will assume that these crates are distributed for each platform where the standard library is also distributed (e.g. a crate that has #![needs_panic_runtime]).

Compiler defaults

The compiler will ship with a few defaults which affect how panic runtimes are selected in Rust programs. Specifically:

  • The -C panic option will default to unwind as it does today.

  • The libtest crate will explicitly link to panic_unwind. The test runner that libtest implements relies on equating panics with failure and cannot work if panics are translated to aborts.

  • If no panic runtime is explicitly selected, the compiler will employ the following logic to decide what panic runtime to inject:

    1. If any crate in the DAG is compiled with -C panic=abort, then panic_abort will be injected.
    2. If all crates in the DAG are compiled with -C panic=unwind, then panic_unwind is injected.

Cargo changes

In order to export this new feature to Cargo projects, a new option will be added to the [profile] section of manifests:

[profile.dev]
panic = 'unwind'

This will cause Cargo to pass -C panic=unwind to all rustc invocations for a crate graph. Cargo will have special knowledge, however, that for cargo test it cannot pass -C panic=abort.

Drawbacks

  • The implementation of custom allocators was no small feat in the compiler, and much of this RFC is essentially the same thing. Similar infrastructure can likely be leveraged to alleviate the implementation complexity, but this is undeniably a large change to the compiler for what is, admittedly, a relatively minor option. The counterpoint, however, is that disabling unwinding in a principled fashion provides far higher quality error messages, prevents erroneous situations, and provides an immediate benefit for many Rust users today.

  • The binary distribution of the standard library will not change from what it is today. In other words, the standard library (and dependency crates like libcore) will be compiled with -C panic=unwind. This introduces the opportunity for extra code bloat or missed optimizations in applications that end up disabling unwinding in the long run. Distribution, however, is far easier because there’s only one copy of the standard library and we don’t have to rely on any other form of infrastructure.

  • This represents a proliferation of the #![needs_foo] and #![foo] style system that allocators have begun. This may be indicative of a deeper underlying requirement here of the standard library or perhaps showing how the strategy in the standard library needs to change. If the standard library were a crates.io crate it would arguably support these options via Cargo features, but without that option is this the best way to be implementing these switches for the standard library?

Alternatives

  • Currently this RFC allows mixing multiple panic runtimes in a crate graph so long as the actual runtime is compiled with -C panic=abort. This is primarily done to immediately reap benefit from -C panic=abort even though the standard library we distribute will still have unwinding support compiled in (compiled with -C panic=unwind). In the not-too-distant future however, we will likely be poised to distribute multiple binary copies of the standard library compiled with different profiles. We may be able to tighten this restriction on behalf of the compiler, requiring that all crates in a DAG have the same -C panic compilation mode, but there would unfortunately be no immediate benefit to implementing the RFC from users of our precompiled nightlies.

    This alternative, additionally, can also be viewed as a drawback. It’s unclear what a future libstd distribution mechanism would look like and how this RFC might interact with it. Stabilizing disabling unwinding via a compiler switch or a Cargo profile option may not end up meshing well with the strategy we pursue with shipping multiple standard libraries.

  • Instead of the panic runtime support in this RFC, we could instead just ship two different copies of the standard library where one simply translates panics to abort instead of unwinding. This is unfortunately very difficult for Cargo or the compiler to track, however, to ensure that the codegen option of how panics are translated is propagated throughout the rest of the crate graph. Additionally it may be easy to mix up crates of different panic strategies.

Unresolved questions

  • One possible implementation of unwinding is via return-based flags. Much of this RFC is designed with the intention of supporting arbitrary unwinding implementations, but it’s unclear whether it’s too heavily biased towards panics being implemented as either unwinding or aborting.

  • The current implementation of Cargo would mean that a naive implementation of the profile option would cause recompiles between cargo build and cargo test for projects that specify panic = 'abort'. Is this acceptable? Should Cargo cache both copies of the crate?

Summary

With specialization on the way, we need to talk about the semantics of <T as Clone>::clone() where T: Copy.

It’s generally been an unspoken rule of Rust that a clone of a Copy type is equivalent to a memcpy of that type; however, that fact is not documented anywhere. This fact should be in the documentation for the Clone trait, just as the documentation for Eq notes that implementations of == must behave like an equivalence relation.

Motivation

Currently, Vec::clone() is implemented by creating a new Vec, and then cloning all of the elements from one into the other. This is slow in debug mode, and may not always be optimized (although it often will be). Specialization would allow us to simply memcpy the values from the old Vec to the new Vec in the case of T: Copy. However, if we don’t specify this, we will not be able to, and we will be stuck looping over every value.
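
A rough sketch of the kind of specialization this rule would permit (this uses the unstable specialization feature and hypothetical names; it is not the actual standard library code):

#![feature(specialization)]
#![allow(incomplete_features)]

trait SpecCloneVec<T> {
    fn clone_vec(&self) -> Vec<T>;
}

impl<T: Clone> SpecCloneVec<T> for [T] {
    default fn clone_vec(&self) -> Vec<T> {
        self.iter().cloned().collect() // element-by-element clone
    }
}

impl<T: Copy> SpecCloneVec<T> for [T] {
    fn clone_vec(&self) -> Vec<T> {
        self.to_vec() // free to be a single memcpy, relying on the rule above
    }
}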

It’s always been the intention that Clone::clone == ptr::read for T: Copy; see issue #23790: “It really makes sense for Clone to be a supertrait of Copy. Copy is a refinement of Clone where memcpy suffices, basically.” This idea was also implicit in accepting RFC #0839, where “[b]ecause Copy: Clone, it would be backwards compatible to upgrade to Clone in the future if demand is high enough.”

Detailed design

Specify that <T as Clone>::clone(t) shall be equivalent to ptr::read(t) where T: Copy, t: &T. An implementation that does not uphold this shall not result in undefined behavior; Clone is not an unsafe trait.

Also add something like the following sentence to the documentation for the Clone trait:

“If T: Copy, x: T, and y: &T, then let x = y.clone(); is equivalent to let x = *y;. Manual implementations must be careful to uphold this.”
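
In code, the guarantee reads as follows (a small sketch):

#[derive(Clone, Copy)]
struct Point { x: i32, y: i32 }

fn demo(p: &Point) {
    let a = p.clone(); // required to behave exactly like...
    let b = *p;        // ...a plain dereferencing copy
    assert_eq!((a.x, a.y), (b.x, b.y));
}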

Drawbacks

This is a breaking change, technically, although it breaks code that was malformed in the first place.

Alternatives

The alternative is that, for each type and function we would like to specialize in this way, we document this separately. This is how we started off with clone_from_slice.

Unresolved questions

What the exact wording should be.

Summary

Add a conservative form of abstract return types, also known as impl Trait, that will be compatible with most possible future extensions by initially being restricted to:

  • Only free-standing or inherent functions.
  • Only return type position of a function.

Abstract return types allow a function to hide a concrete return type behind a trait interface similar to trait objects, while still generating the same statically dispatched code as with concrete types.

With the placeholder syntax used in discussions so far, abstract return types would be used roughly like this:

fn foo(n: u32) -> impl Iterator<Item = u32> {
    (0..n).map(|x| x * 100)
}
// ^ behaves as if it had return type Map<Range<u32>, Closure>
// where Closure = type of the |x| x * 100 closure.

for x in foo(10) {
    // x = 0, 100, 200, ...
}

Background

There has been much discussion around the impl Trait feature already, with different proposals extending the core idea into different directions:

This RFC is an attempt to make progress on the feature by proposing a minimal subset that should be forwards-compatible with a whole range of extensions that have been discussed (and will be reviewed in this RFC). However, even this small step requires resolving some of the core questions raised in the blog post.

This RFC is closest in spirit to the original RFC, and we’ll repeat its motivation and some other parts of its text below.

Motivation

Why are we doing this? What use cases does it support? What is the expected outcome?

In today’s Rust, you can write a function signature like

fn consume_iter_static<I: Iterator<Item = u8>>(iter: I)
fn consume_iter_dynamic(iter: Box<Iterator<Item = u8>>)

In both cases, the function does not depend on the exact type of the argument. The type is held “abstract”, and is assumed only to satisfy a trait bound.

  • In the _static version using generics, each use of the function is specialized to a concrete, statically-known type, giving static dispatch, inline layout, and other performance wins.

  • In the _dynamic version using trait objects, the concrete argument type is only known at runtime, and is accessed through a vtable (dynamic dispatch).

On the other hand, while you can write

fn produce_iter_dynamic() -> Box<Iterator<Item = u8>>

you cannot write something like

fn produce_iter_static() -> Iterator<Item = u8>

That is, in today’s Rust, abstract return types can only be written using trait objects, which can be a significant performance penalty. This RFC proposes “unboxed abstract types” as a way of achieving signatures like produce_iter_static. Like generics, unboxed abstract types guarantee static dispatch and inline data layout.

Here are some problems that unboxed abstract types solve or mitigate:

  • Returning unboxed closures. Closure syntax generates an anonymous type implementing a closure trait. Without unboxed abstract types, there is no way to use this syntax while returning the resulting closure unboxed, because there is no way to write the name of the generated type (a sketch follows below).

  • Leaky APIs. Functions can easily leak implementation details in their return type, when the API should really only promise a trait bound. For example, a function returning Rev<Splits<'a, u8>> is revealing exactly how the iterator is constructed, when the function should only promise that it returns some type implementing Iterator<Item = u8>. Using newtypes/structs with private fields helps, but is extra work. Unboxed abstract types make it as easy to promise only a trait bound as it is to return a concrete type.

  • Complex types. Use of iterators in particular can lead to huge types:

    Chain<Map<'a, (i32, u8), u16, Enumerate<Filter<'a, u8, vec::MoveItems<u8>>>>, SkipWhile<'a, u16, Map<'a, &u16, u16, slice::Items<u16>>>>

    Even when using newtypes to hide the details, the type still has to be written out, which can be very painful. Unboxed abstract types only require writing the trait bound.

  • Documentation. In today’s Rust, reading the documentation for the Iterator trait is needlessly difficult. Many of the methods return new iterators, but currently each one returns a different type (Chain, Zip, Map, Filter, etc), and it requires drilling down into each of these types to determine what kind of iterator they produce.

In short, unboxed abstract types make it easy for a function signature to promise nothing more than a trait bound, and do not generally require the function’s author to write down the concrete type implementing the bound.
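To make the first of those problems concrete, here is a minimal sketch of returning an unboxed closure with the proposed feature (the adder function is hypothetical):

// Without impl Trait the closure below could only be returned
// boxed, e.g. as Box<Fn(i32) -> i32>, because its anonymous
// type cannot be named.
fn adder(a: i32) -> impl Fn(i32) -> i32 {
    move |b| a + b
}

fn main() {
    let add_two = adder(2);
    assert_eq!(add_two(3), 5);
}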

Detailed design

As explained at the start of the RFC, the focus here is a relatively narrow introduction of abstract types limited to the return type of inherent methods and free functions. While we still need to resolve some of the core questions about what an “abstract type” means even in these cases, we avoid some of the complexities that come along with allowing the feature in other locations or with other extensions.

Syntax

Let’s start with the bikeshed: the proposed syntax is impl Trait in return type position, composing with further bounds in the same way as trait objects, giving forms like impl Foo + Send + 'a.

It can be explained as “a type that implements Trait”, and has been used in that form in most earlier discussions and proposals.

Initial versions of this RFC proposed @Trait for brevity reasons, since the feature is supposed to be used commonly once implemented, but due to strong negative reactions by the community this has been changed back to the current form.

There are other possibilities, like abstract Trait or ~Trait, with good reasons for or against them, but since the concrete choice of syntax is not a blocker for the implementation of this RFC, it is intended for a possible follow-up RFC to address syntax changes if needed.

Semantics

The core semantics of the feature is described below.

Note that the sections after this one go into more detail on some of the design decisions, and that it is likely for many of the mentioned limitations to be lifted at some point in the future. For clarity, we’ll separately categorize the core semantics of the feature (aspects that would stay unchanged with future extensions) and the initial limitations (which are likely to be lifted later).

Core semantics:

  • If a function returns impl Trait, its body can return values of any type that implements Trait, but all return values need to be of the same type.

  • As far as the typesystem and the compiler are concerned, the return type outside of the function would not be an entirely “new” type, nor would it be a simple type alias. Rather, its semantics would be very similar to that of generic type parameters inside a function, with small differences caused by being an output rather than an input of the function.

    • The type would be known to implement the specified traits.
    • The type would not be known to implement any other trait, with the exception of OIBITS (aka “auto traits”) and default traits like Sized.
    • The type would not be considered equal to the actual underlying type.
    • The type would not be allowed to appear as the Self type for an impl block.
  • Because OIBITS like Send and Sync will leak through an abstract return type, there will be some additional complexity in the compiler due to some non-local type checking becoming necessary.

  • The return type has an identity based on all generic parameters the function body is parameterized by, and by the location of the function in the module system. This means type equality behaves like this:

    fn foo<T: Trait>(t: T) -> impl Trait {
        t
    }
    
    fn bar() -> impl Trait {
        123
    }
    
    fn equal_type<T>(a: T, b: T) {}
    
    equal_type(bar(), bar());                      // OK
    equal_type(foo::<i32>(0), foo::<i32>(0));      // OK
    equal_type(bar(), foo::<i32>(0));              // ERROR, `impl Trait {bar}` is not the same type as `impl Trait {foo<i32>}`
    equal_type(foo::<bool>(false), foo::<i32>(0)); // ERROR, `impl Trait {foo<bool>}` is not the same type as `impl Trait {foo<i32>}`
  • The code generation passes of the compiler would not draw a distinction between the abstract return type and the underlying type, just like they don’t for generic parameters. This means:

    • The same trait code would be instantiated; for example, -> impl Any would return the type id of the underlying type.
    • Specialization would specialize based on the underlying type.

Initial limitations:

  • impl Trait may only be written within the return type of a freestanding or inherent-impl function, not in trait definitions or any non-return type position. It may also not appear in the return type of closure traits or function pointers, unless these are themselves part of a legal return type.

    • Eventually, we will want to allow the feature to be used within traits, and likely in argument position as well (as an ergonomic improvement over today’s generics).
    • Using impl Trait multiple times in the same return type would be valid, like for example in -> (impl Foo, impl Bar).
  • The type produced when a function returns impl Trait would be effectively unnameable, just like closures and function items.

    • We will almost certainly want to lift this limitation in the long run, so that abstract return types can be placed into structs and so on. There are a few ways we could do so, all related to getting at the “output type” of a function given all of its generic arguments.
  • The function body cannot see through its own return type, so code like this would be forbidden just like on the outside:

    fn sum_to(n: u32) -> impl Display {
        if n == 0 {
            0
        } else {
            n + sum_to(n - 1)
        }
    }
    • It’s unclear whether we’ll want to lift this limitation, but it should be possible to do so.

Rationale

Why these semantics for the return type?

There has been a lot of discussion about what the semantics of the return type should be, with the theoretical extremes being “full return type inference” and “fully abstract type that behaves like an autogenerated newtype wrapper”. (This was in fact the main focus of the blog post on impl Trait.)

The design as chosen in this RFC lies somewhat in between those two, since it allows OIBITs to leak through, and allows specialization to “see” the full type being returned. That is, impl Trait does not attempt to be a “tightly sealed” abstraction boundary. The rationale for this design is a mixture of pragmatics and principles.

Specialization transparency

Principles for specialization transparency:

The specialization RFC has given us a basic principle for how to understand bounds in function generics: they represent a minimum contract between the caller and the callee, in that the caller must meet at least those bounds, and the callee must be prepared to work with any type that meets at least those bounds. However, with specialization, the callee may choose different behavior when additional bounds hold.

This RFC abides by a similar interpretation for return types: the signature represents the minimum bound that the callee must satisfy, and the caller must be prepared to work with any type that meets at least that bound. Again, with specialization, the caller may dispatch on additional type information beyond those bounds.

In other words, to the extent that returning impl Trait is intended to be symmetric with taking a generic T: Trait, transparency with respect to specialization maintains that symmetry.

Pragmatics for specialization transparency:

The practical reason we want impl Trait to be transparent to specialization is the same as the reason we want specialization in the first place: to be able to break through abstractions with more efficient special-case code.

This is particularly important for one of the primary intended usecases: returning impl Iterator. We are very likely to employ specialization for various iterator types, and making the underlying return type invisible to specialization would lose out on those efficiency wins.

OIBIT transparency

OIBITs leak through an abstract return type. This might be considered controversial, since it effectively opens a channel where the result of function-local type inference affects item-level API, but has been deemed worth it for the following reasons:

  • Ergonomics: Trait objects already have the issue of explicitly needing to declare Send/Sync-ability, and not extending this problem to abstract return types is desirable. In practice, most uses of this feature would have to add explicit bounds for OIBITS if they wanted to be maximally usable.

  • Low real change, since the situation already somewhat exists on structs with private fields:

    • In both cases, a change to the private implementation might change whether a OIBIT is implemented or not.
    • In both cases, the existence of OIBIT impls is not visible without documentation tools.
    • In both cases, you can only assert the existence of OIBIT impls by adding explicit trait bounds either to the API or to the crate’s test suite.

In fact, a large part of the point of OIBITs in the first place was to cut across abstraction barriers and provide information about a type without the type’s author having to explicitly opt in.

This means, however, that it has to be considered a silent breaking change to change a function with an abstract return type in a way that removes OIBIT impls, which might be a problem. (As noted above, this is already the case for struct definitions.)

But since the number of used OIBITs is relatively small, deducing the return type in a function body and reasoning about whether such a breakage will occur has been deemed a manageable amount of work.
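To illustrate the leakage, here is a sketch (the assert_send helper is hypothetical):

use std::rc::Rc;

fn send_iter() -> impl Iterator<Item = u32> {
    0..10 // Range<u32> is Send, so the abstract type is Send too
}

fn not_send_iter() -> impl Iterator<Item = u32> {
    let rc = Rc::new(());
    // The closure captures an Rc, so the underlying type is not Send.
    (0..10).map(move |x| { let _ = &rc; x })
}

fn assert_send<T: Send>(_: T) {}

fn main() {
    assert_send(send_iter());      // OK: Send leaks through the signature
    // assert_send(not_send_iter()); // ERROR: underlying type is not Send
}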

Wherefore type abstraction?

In the most recent RFC related to this feature, a more “tightly sealed” abstraction mechanism was proposed. However, part of the discussion on specialization centered on precisely the issue of what type abstraction provides and how to achieve it. A particularly salient point there is that, in Rust, privacy is already our primary mechanism for hiding (“privacy is the new parametricity”). In practice, that means that if you want opacity against specialization, you should use something like a newtype.

Anonymity

An abstract return type cannot be named in this proposal, which means that it cannot be placed into structs and so on. This is not a fundamental limitation in any sense; the limitation is there both to keep this RFC simple, and because the precise way we might want to allow naming of such types is still a bit unclear. Some possibilities include a typeof operator, or explicit named abstract types.

Limitation to only return type position

There have been various proposed additional places where abstract types might be usable. For example, fn x(y: impl Trait) as shorthand for fn x<T: Trait>(y: T).

Since the exact semantics and user experience for these locations are yet unclear (impl Trait would effectively behave completely differently before and after the ->), this has also been excluded from this proposal.

Type transparency in recursive functions

Functions with abstract return types cannot see through their own return type, so code like this does not compile:

fn sum_to(n: u32) -> impl Display {
    if n == 0 {
        0
    } else {
        n + sum_to(n - 1)
    }
}

This limitation exists because it is not clear how much a function body can and should know about different instantiations of itself.

It would be safe to allow recursive calls if the set of generic parameters is identical, and it might even be safe if the generic parameters are different, since you would still be inside the private body of the function, just differently instantiated.

But variance caused by lifetime parameters and the interaction with specialization makes it uncertain whether this would be sound.

In any case, it can be initially worked around by defining a local helper function like this:

fn sum_to(n: u32) -> impl Display {
    fn sum_to_(n: u32) -> u32 {
        if n == 0 {
            0
        } else {
            n + sum_to_(n - 1)
        }
    }
    sum_to_(n)
}

Because impl Trait defines a type tied to the concrete function body, it does not make much sense to talk about it separately in a function signature, so the syntax is forbidden there.

Compatibility with conditional trait bounds

One valid critique of the existing impl Trait proposal is that it does not cover more complex scenarios, where the return type would implement one or more traits depending on whether a type parameter implements them.

For example, an iterator adapter might want to implement Iterator and DoubleEndedIterator, depending on whether the adapted one does:

fn skip_one<I>(i: I) -> SkipOne<I> { ... }
struct SkipOne<I> { ... }
impl<I: Iterator> Iterator for SkipOne<I> { ... }
impl<I: DoubleEndedIterator> DoubleEndedIterator for SkipOne<I> { ... }

Using just -> impl Iterator, this would not be possible to reproduce.

Since there have been no proposals so far that would address this in a way that would conflict with the fixed-trait-set case, this RFC punts on that issue as well.

Limitation to free/inherent functions

One important usecase of abstract return types is to use them in trait methods.

However, there is an issue with this: in combination with generic trait methods, abstract return types are effectively equivalent to higher-kinded types (HKT). This is a problem because Rust’s HKT story is not yet figured out, so any “accidental implementation” might cause unintended fallout.

HKT allows you to be generic over a type constructor, a.k.a. a “thing with type parameters”, and then instantiate them at some later point to get the actual type. For example, given a HK type T that takes one type as parameter, you could write code that uses T<u32> or T<bool> without caring about whether T = Vec, T = Box, etc.

Now if we look at abstract return types, we have a similar situation:

trait Foo {
    fn bar<U>() -> impl Baz;
}

Given a T: Foo, we could instantiate T::bar::<u32> or T::bar::<bool>, and could get arbitrary different return types of bar instantiated with a u32 or bool, just like T<u32> and T<bool> might give us Vec<u32> or Box<bool> in the example above.

The problem does not exist with trait method return types today because they are concrete:

trait Foo {
    fn bar<U>() -> X<U>;
}

Given the above code, there is no way for bar to choose a return type X that could fundamentally differ between instantiations of Self while still being instantiable with an arbitrary U.

At most you could return an associated type, but then you’d lose access to the generics from bar:

trait Foo {
    type X;
    fn bar<U>() -> Self::X; // No way to apply U
}

So, in conclusion, since Rust’s HKT story is not yet fleshed out, and the compatibility of the current compiler with it is unknown, it is not yet possible to reach a concrete solution here.

In addition to that, there are also different proposals as to whether an abstract return type is its own thing or sugar for an associated type, how it interacts with other associated items and so on, so forbidding them in traits seems like the best initial course of action.

Drawbacks

Why should we not do this?

Drawbacks due to the proposal’s minimalism

As has been elaborated on above, there are various ways this feature could be extended and combined with the language, so implementing it might cause issues down the road if limitations or incompatibilities become apparent. However, variations of this RFC’s proposal have been under discussion for quite a long time at this point, and this proposal is carefully designed to be future-compatible with them, while resolving the core issue around transparency.

A drawback of limiting the feature to return type position (and not arguments) is that it creates a somewhat inconsistent mental model: it forces you to understand the feature in a highly special-cased way, rather than as a general way to talk about unknown-but-bounded types in function signatures. This could be particularly bewildering to newcomers, who must choose between T: Trait, Box<Trait>, and impl Trait, with the latter only usable in one place.

Drawbacks due to partial transparency

The fact that specialization and OIBITs can “see through” impl Trait may be surprising, to the extent that one wants to see impl Trait as an abstraction mechanism. However, as the RFC argued in the rationale section, this design is probably the most consistent with our existing post-specialization abstraction mechanisms, and leads to the relatively simple story that privacy is the way to achieve hiding in Rust.

Alternatives

What other designs have been considered? What is the impact of not doing this?

See the links in the motivation section for detailed analysis that we won’t repeat here.

But basically, without this feature certain things remain hard or impossible to do in Rust, like returning an efficiently usable type parameterized by types private to a function body, for example an iterator adapter containing a closure.

Unresolved questions

What parts of the design are still to be determined?

The precise implementation details for OIBIT transparency are a bit unclear: in general, it means that type checking may need to proceed in a particular order, since you cannot get the full type information from the signature alone (you have to typecheck the function body to determine which OIBITs apply).

Summary

Improve Cargo’s story around multi-crate single-repo project management by introducing the concept of workspaces. All packages in a workspace will share Cargo.lock and an output directory for artifacts.

Motivation

A common method to organize a multi-crate project is to have one repository which contains all of the crates. Each crate has a corresponding subdirectory along with a Cargo.toml describing how to build it. There are a number of downsides to this approach, however:

  • Each sub-crate will have its own Cargo.lock, so it’s difficult to ensure that the entire project uses the same version of each dependency. Using one set of versions is desirable because the main crate (often a binary) is typically the one whose Cargo.lock “counts”, yet it needs to be kept in sync with all of the sub-crates.

  • When building or testing sub-crates, all dependencies will be recompiled as the target directory will be changing as you move around the source tree. This can be overridden with build.target-dir or CARGO_TARGET_DIR, but this isn’t always convenient to set.

Solving these two problems should help ease the development of large Rust projects by ensuring that all dependencies remain in sync and builds by default use already-built artifacts if available.

Detailed design

Cargo will grow the concept of a workspace for managing repositories of multiple crates. Workspaces will then have the properties:

  • A workspace can contain multiple local crates: one “root crate” and any number of “member crates”.
  • The root crate of a workspace has a Cargo.toml file containing a [workspace] key; we call this the “root Cargo.toml”.
  • Whenever any crate in the workspace is compiled, output will be placed in the target directory next to the root Cargo.toml.
  • One Cargo.lock file for the entire workspace will reside next to the root Cargo.toml and encompass the dependencies (and dev-dependencies) for all crates in the workspace.

With workspaces, Cargo can now solve the problems set forth in the motivation section. Next, however, workspaces need to be defined. In the spirit of much of the rest of Cargo’s configuration today this will largely be automatic for conventional project layouts but will have explicit controls for configuration.

New manifest keys

First, let’s look at the new manifest keys which will be added to Cargo.toml:

[workspace]
members = ["relative/path/to/child1", "../child2"]

# or ...

[package]
workspace = "../foo"

The root Cargo.toml of a workspace, indicated by the presence of [workspace], is responsible for defining the entire workspace (listing all members). This example here means that two extra crates will be members of the workspace (which also includes the root).

The package.workspace key is used to point at a workspace’s root crate. For example, this Cargo.toml indicates that the Cargo.toml in ../foo is the root Cargo.toml of the workspace that this package is a member of.

These keys are mutually exclusive when applied in Cargo.toml. A crate may either specify package.workspace or specify [workspace]. That is, a crate cannot both be a root crate in a workspace (contain [workspace]) and also be a member crate of another workspace (contain package.workspace).

“Virtual” Cargo.toml

A good number of projects do not necessarily have a “root Cargo.toml” which is an appropriate root for a workspace. To accommodate these projects, and to allow the output of a workspace to be configured regardless of where crates are located, Cargo will now allow for “virtual manifest” files. These manifests will, for now, only contain the [workspace] table and will notably lack a [project] or [package] top-level key.

Cargo will for the time being disallow many commands against a virtual manifest; for example, cargo build will be rejected. Arguments that take a package, however, such as cargo test -p foo, will be allowed. Workspaces could eventually be extended with --all flags so that, in a workspace root, cargo build --all would compile all crates.

Validating a workspace

A workspace is valid if these two properties hold:

  1. A workspace has only one root crate (that with [workspace] in Cargo.toml).
  2. All workspace crates defined in workspace.members point back to the workspace root with package.workspace.

While the restriction of one-root-per workspace may make sense, the restriction of crates pointing back to the root may not. If, however, this restriction were not in place then the set of crates in a workspace may differ depending on which crate it was viewed from. For example if workspace root A includes B then it will think B is in A’s workspace. If, however, B does not point back to A, then B would not think that A was in its workspace. This would in turn cause the set of crates in each workspace to be different, further causing Cargo.lock to get out of sync if it were allowed. By ensuring that all crates have edges to each other in a workspace Cargo can prevent this situation and guarantee robust builds no matter where they’re executed in the workspace.

To alleviate misconfiguration, Cargo will emit an error if the two properties above do not hold for any crate attempting to be part of a workspace. For example, if the package.workspace key is specified but the crate it points at is not a workspace root, or that root doesn’t point back to the original crate, an error is emitted.

Implicit relations

The combination of the package.workspace key and [workspace] table is enough to specify any workspace in Cargo. Having to annotate all crates with a package.workspace parent or a workspace.members list can get quite tedious, however! To alleviate this configuration burden Cargo will allow these keys to be implicitly defined in some situations.

The package.workspace key can be omitted if its value would only be ../ (or some repetition of it). That is, if the root of a workspace is hierarchically the first Cargo.toml with [workspace] above a crate in the filesystem, then that crate can omit the package.workspace key.

Next, a crate which specifies [workspace] without a members key will transitively crawl path dependencies to fill in this key. This way all path dependencies (and recursively their own path dependencies) will inherently become the default value for workspace.members.

Note that these implicit relations will be subject to the same validations mentioned above for all of the explicit configuration as well.

Workspaces in practice

Many Rust projects today already have Cargo.toml at the root of a repository, and with the small addition of [workspace] in the root Cargo.toml, a workspace will be ready for all crates in that repository. For example:

  • An FFI crate with a sub-crate for FFI bindings

    Cargo.toml
    src/
    foo-sys/
      Cargo.toml
      src/
    
  • A crate with multiple in-tree dependencies

    Cargo.toml
    src/
    dep1/
      Cargo.toml
      src/
    dep2/
      Cargo.toml
      src/
    

Some examples of layouts that will require extra configuration, along with the configuration necessary, are:

  • Trees without any root crate

    crate1/
      Cargo.toml
      src/
    crate2/
      Cargo.toml
      src/
    crate3/
      Cargo.toml
      src/
    

    These crates can all join the same workspace via a Cargo.toml file at the root looking like:

    [workspace]
    members = ["crate1", "crate2", "crate3"]
    
  • Trees with multiple workspaces

    ws1/
      crate1/
        Cargo.toml
        src/
      crate2/
        Cargo.toml
        src/
    ws2/
      Cargo.toml
      src/
      crate3/
        Cargo.toml
        src/
    

    The two workspaces here can be configured by placing the following in the manifests:

    # ws1/Cargo.toml
    [workspace]
    members = ["crate1", "crate2"]
    
    # ws2/Cargo.toml
    [workspace]
    
  • Trees with non-hierarchical workspaces

    root/
      Cargo.toml
      src/
    crates/
      crate1/
        Cargo.toml
        src/
      crate2/
        Cargo.toml
        src/
    

    The workspace here can be configured by placing the following in the manifests:

    # root/Cargo.toml
    #
    # Note that `members` aren't necessary if these are otherwise path
    # dependencies.
    [workspace]
    members = ["../crates/crate1", "../crates/crate2"]
    
    # crates/crate1/Cargo.toml
    [package]
    workspace = "../../root"
    
    # crates/crate2/Cargo.toml
    [package]
    workspace = "../../root"
    

Projects like the compiler will likely need exhaustively explicit configuration. The rust repo conceptually has two workspaces, the standard library and the compiler, and these would need to be manually configured with workspace.members and package.workspace keys amongst all crates.

Lockfile and override interactions

One of the main features of a workspace is that only one Cargo.lock is generated for the entire workspace. This lock file can be affected, however, by both [replace] overrides and paths overrides.

Primarily, the generated Cargo.lock will not simply be the concatenation of the lock files from each project. Instead, the entire workspace will be resolved together all at once, minimizing the number of crate versions used and sharing dependencies as much as possible. For example, one path dependency will always have the same set of dependencies no matter which crate is being compiled.

When interacting with overrides, workspaces will be modified to only allow [replace] to exist in the workspace root. This Cargo.toml will affect lock file generation, but no other workspace members will be allowed to have a [replace] directive (with an informative error message being produced).

Finally, the paths overrides will be applied as usual, and they’ll continue to be applied relative to whatever crate is being compiled (not the workspace root). These are intended for much more local testing, so no restriction of “must be in the root” should be necessary.

Note that this change to the lockfile format is technically incompatible with older versions of Cargo.lock, but the entire workspaces feature is also incompatible with older versions of Cargo. This will require projects that wish to work with workspaces and multiple versions of Cargo to check in multiple Cargo.lock files, but if projects avoid workspaces then Cargo will remain forwards and backwards compatible.

Future Extensions

Once Cargo understands a workspace of crates, we could easily extend various subcommands with a --all flag to perform tasks such as:

  • Test all crates within a workspace (run all unit tests, doc tests, etc)
  • Build all binaries for a set of crates within a workspace
  • Publish all crates in a workspace if necessary to crates.io

Furthermore, workspaces could start to deduplicate metadata among crates like version numbers, URL information, authorship, etc.

This support isn’t proposed to be added in this RFC specifically, but simply to show that workspaces can be used to solve other existing issues in Cargo.

Drawbacks

  • As proposed there is no method to disable implicit actions taken by Cargo. It’s unclear what the use case for this is, but it could in theory arise.

  • No crate will implicitly benefit from workspaces after this is implemented. Existing crates must opt-in with a [workspace] key somewhere at least.

Alternatives

  • The workspace.members key could support globs to define a number of directories at once. For example one could imagine:

    [workspace]
    members = ["crates/*"]
    

    as an ergonomic method of slurping up all sub-folders in the crates folder as crates.

  • Cargo could attempt to perform more inference of workspace members by simply walking the entire directory tree starting at Cargo.toml. All children found could implicitly be members of the workspace. Walking entire trees, unfortunately, isn’t always efficient to do and it would be unfortunate to have to unconditionally do this.

Unresolved questions

  • Does this approach scale well to repositories with a large number of crates? For example does the winapi-rs repository experience a slowdown on standard cargo build as a result?

Summary

Stabilize the -C overflow-checks command line argument.

Motivation

This provides an easy way to turn on overflow checks in release builds without also turning on debug assertions (as the -C debug-assertions flag does). In stable Rust today you can’t get one without the other.

Users can use the -C overflow-checks flag from their Cargo config to turn on overflow checks for an entire application.
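As a sketch of what the flag actually controls, consider this hypothetical function:

fn bump(x: u8) -> u8 {
    // With -C overflow-checks=on, bump(255) panics at runtime with
    // 'attempt to add with overflow'; with checks off (the default
    // for release builds), the result silently wraps to 0.
    x + 1
}

fn main() {
    println!("{}", bump(255));
}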

This flag, which accepts values of ‘yes’/‘no’ and ‘on’/‘off’, is being renamed from force-overflow-checks because the ‘force’ doesn’t add anything beyond what the ‘yes’/‘no’ values already express.

Detailed design

This is a stabilization RFC. The only steps will be to move force-overflow-checks from -Z to -C, renaming it to overflow-checks, and making it stable.

Drawbacks

It’s another rather ad-hoc flag for modifying code generation.

Like other such flags, this applies to the entire compilation unit, regardless of monomorphizations. This means that code generation for a single function can be different based on which compilation unit it’s instantiated in.

Alternatives

The flag could instead be tied to crates such that any time code from that crate is inlined/monomorphized it turns on overflow checks.

We might also want a design that provides per-function control over overflow checks.

Unresolved questions

Cargo might also add a profile option like

[profile.dev]
overflow-checks = true

This may also be accomplished by Cargo’s pending support for passing arbitrary flags to rustc.

Summary

The standard library provides the From and Into traits as standard ways to convert between types. However, these traits only support infallible conversions. This RFC proposes the addition of TryFrom and TryInto traits to support fallible conversions in the same standard way.

Motivation

Fallible conversions are fairly common, and a collection of ad-hoc traits has arisen to support them, both within the standard library and in third party crates. A standardized set of traits following the pattern set by From and Into will ease these APIs by providing a standardized interface as we expand the set of fallible conversions.

One specific avenue of expansion that has been frequently requested is fallible integer conversion traits. Conversions between integer types may currently be performed with the as operator, which will silently truncate the value if it is out of bounds of the target type. Code which needs to down-cast values must manually check that the cast will succeed, which is both tedious and error prone. A fallible conversion trait reduces code like this:

let value: isize = ...;

let value: u32 = if value < 0 || value > u32::max_value() as isize {
    return Err(BogusCast);
} else {
    value as u32
};

to simply:

let value: isize = ...;
let value: u32 = try!(value.try_into());

Detailed design

Two traits will be added to the core::convert module:

pub trait TryFrom<T>: Sized {
    type Err;

    fn try_from(t: T) -> Result<Self, Self::Err>;
}

pub trait TryInto<T>: Sized {
    type Err;

    fn try_into(self) -> Result<T, Self::Err>;
}

In a fashion similar to From and Into, a blanket implementation of TryInto is provided for all TryFrom implementations:

impl<T, U> TryInto<U> for T where U: TryFrom<T> {
    type Err = U::Err;

    fn try_into(self) -> Result<U, Self::Err> {
        U::try_from(self)
    }
}

In addition, implementations of TryFrom will be provided to convert between all combinations of integer types:

#[derive(Debug)]
pub struct TryFromIntError(());

impl fmt::Display for TryFromIntError {
    fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
        fmt.write_str(self.description())
    }
}

impl Error for TryFromIntError {
    fn description(&self) -> &str {
        "out of range integral type conversion attempted"
    }
}

impl TryFrom<usize> for u8 {
    type Err = TryFromIntError;

    fn try_from(t: usize) -> Result<u8, TryFromIntError> {
        // ...
    }
}

// ...

This notably includes implementations that are actually infallible, including implementations between a type and itself. A common use case for these kinds of conversions is when interacting with a C API and converting, for example, from a u64 to a libc::c_long. c_long may be u32 on some platforms but u64 on others, so having an impl TryFrom<u64> for u64 ensures that conversions using these traits will compile on all architectures. Similarly, a conversion from usize to u32 may or may not be fallible depending on the target architecture.
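For illustration, one plausible shape for the elided usize-to-u8 implementation above (a sketch only; the actual standard library body may differ):

impl TryFrom<usize> for u8 {
    type Err = TryFromIntError;

    fn try_from(t: usize) -> Result<u8, TryFromIntError> {
        // Hypothetical body: reject values outside u8's range.
        if t > u8::max_value() as usize {
            Err(TryFromIntError(()))
        } else {
            Ok(t as u8)
        }
    }
}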

The standard library provides a reflexive implementation of the From trait for all types: impl<T> From<T> for T. We could similarly provide a “lifting” implementation of TryFrom:

impl<T, U: From<T>> TryFrom<T> for U {
    type Err = Void;

    fn try_from(t: T) -> Result<U, Void> {
        Ok(U::from(t))
    }
}

However, this implementation would directly conflict with our goal of having uniform TryFrom implementations between all combinations of integer types. In addition, it’s not clear what value such an implementation would actually provide, so this RFC does not propose its addition.

Drawbacks

It is unclear if existing fallible conversion traits can backwards-compatibly be subsumed into TryFrom and TryInto, which may result in an awkward mix of ad-hoc traits in addition to TryFrom and TryInto.

Alternatives

We could avoid general traits and continue making distinct conversion traits for each use case.

Unresolved questions

Are TryFrom and TryInto the right names? There is some precedent for the try_ prefix: TcpStream::try_clone, Mutex::try_lock, etc.

What should be done about FromStr, ToSocketAddrs, and other ad-hoc fallible conversion traits? An upgrade path may exist in the future with specialization, but it is probably too early to say definitively.

Should TryFrom and TryInto be added to the prelude? This would be the first prelude addition since the 1.0 release.

Summary

This RFC basically changes core::sync::atomic to look like this:

#[cfg(target_has_atomic = "8")]
struct AtomicBool {}
#[cfg(target_has_atomic = "8")]
struct AtomicI8 {}
#[cfg(target_has_atomic = "8")]
struct AtomicU8 {}
#[cfg(target_has_atomic = "16")]
struct AtomicI16 {}
#[cfg(target_has_atomic = "16")]
struct AtomicU16 {}
#[cfg(target_has_atomic = "32")]
struct AtomicI32 {}
#[cfg(target_has_atomic = "32")]
struct AtomicU32 {}
#[cfg(target_has_atomic = "64")]
struct AtomicI64 {}
#[cfg(target_has_atomic = "64")]
struct AtomicU64 {}
#[cfg(target_has_atomic = "128")]
struct AtomicI128 {}
#[cfg(target_has_atomic = "128")]
struct AtomicU128 {}
#[cfg(target_has_atomic = "ptr")]
struct AtomicIsize {}
#[cfg(target_has_atomic = "ptr")]
struct AtomicUsize {}
#[cfg(target_has_atomic = "ptr")]
struct AtomicPtr<T> {}

Motivation

Many lock-free algorithms require a two-value compare_exchange, which operates on a value that is effectively twice the size of a usize. This would be implemented by atomically swapping a struct containing two members.

Another use case is to support Linux’s futex API. This API is based on atomic i32 variables, which currently aren’t available on x86_64 because AtomicIsize is 64-bit.
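A sketch of that futex use case (futex_wait is a hypothetical wrapper around the raw system call, not part of this RFC): because AtomicI32 has the same layout as i32, its address can be handed directly to the kernel.

use std::sync::atomic::{AtomicI32, Ordering};

// Hypothetical syscall wrapper: blocks while *addr == expected.
fn futex_wait(_addr: *const i32, _expected: i32) { /* syscall(SYS_futex, ...) */ }

fn wait_until_nonzero(flag: &AtomicI32) {
    while flag.load(Ordering::Acquire) == 0 {
        futex_wait(flag as *const AtomicI32 as *const i32, 0);
    }
}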

Detailed design

New atomic types

The AtomicI8, AtomicI16, AtomicI32, AtomicI64 and AtomicI128 types are added along with their matching AtomicU* type. These have the same API as the existing AtomicIsize and AtomicUsize types. Note that support for 128-bit atomics is dependent on the i128/u128 RFC being accepted.

Target support

One problem is that it is hard for a user to determine if a certain type T can be placed inside an Atomic<T>. After a quick survey of the LLVM and Clang code, architectures can be classified into 3 categories:

  • The architecture does not support any form of atomics (mainly microcontroller architectures).
  • The architecture supports all atomic operations for integers from i8 to iN (where N is the architecture word/pointer size).
  • The architecture supports all atomic operations for integers from i8 to i(N*2).

A new target cfg is added: target_has_atomic. It will have multiple values, one for each atomic size supported by the target. For example:

#[cfg(target_has_atomic = "128")]
static ATOMIC: AtomicU128 = AtomicU128::new(mem::transmute((0u64, 0u64)));
#[cfg(not(target_has_atomic = "128"))]
static ATOMIC: Mutex<(u64, u64)> = Mutex::new((0, 0));

#[cfg(target_has_atomic = "64")]
static COUNTER: AtomicU64 = AtomicU64::new(0);
#[cfg(not(target_has_atomic = "64"))]
static COUNTER: AtomicU32 = AtomicU32::new(0);

Note that it is not necessary for an architecture to natively support atomic operations for all sizes (i8, i16, etc.) as long as it is able to perform a compare_exchange operation with a larger size. All smaller operations can be emulated using that. For example a byte atomic can be emulated by using a compare_exchange loop that only modifies a single byte of the value. This is actually how LLVM implements byte-level atomics on MIPS, which only supports word-sized atomics natively. Note that the out-of-bounds read is fine here because atomics are aligned and will never cross a page boundary. Since this transformation is performed transparently by LLVM, we do not need to do any extra work to support this.
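The same compare_exchange-loop idea can be written out in Rust directly. A minimal sketch, emulating a read-modify-write operation (fetch_or here) on top of compare_exchange_weak:

use std::sync::atomic::{AtomicU32, Ordering};

// Any read-modify-write operation can be emulated with a
// compare_exchange loop; LLVM uses the same trick to emulate
// byte-sized atomics on targets with only word-sized atomics.
fn fetch_or(word: &AtomicU32, mask: u32) -> u32 {
    let mut old = word.load(Ordering::Relaxed);
    loop {
        match word.compare_exchange_weak(old, old | mask,
                                         Ordering::SeqCst, Ordering::Relaxed) {
            Ok(prev) => return prev,     // swap succeeded
            Err(actual) => old = actual, // another thread raced us: retry
        }
    }
}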

Changes to AtomicPtr, AtomicIsize and AtomicUsize

These types will have a #[cfg(target_has_atomic = "ptr")] bound added to them. Although these types are stable, this isn’t a breaking change because all targets currently supported by Rust will have these types available. It would only affect custom targets, which currently fail to link due to missing compiler-rt symbols anyway.

Changes to AtomicBool

This type will be changed to use an AtomicU8 internally instead of an AtomicUsize, which will allow it to be safely transmuted to a bool. This will make it more consistent with the other atomic types, which have the same layout as their underlying type. (For example, futex code will assume that a &AtomicI32 can be passed as an &i32 to the system call.)

Drawbacks

Having certain atomic types get enabled/disabled based on the target isn’t very nice, but it’s unavoidable because support for atomic operations is very architecture-specific.

This approach doesn’t directly support atomic operations on user-defined structs, but this can be emulated using transmutes.

Alternatives

One alternative that was discussed in a previous RFC was to add a generic Atomic<T> type. However the consensus was that having unsupported atomic types either fail at monomorphization time or fall back to lock-based implementations was undesirable.

Several other designs have been suggested here.

Unresolved questions

None

Summary

This RFC exposes LLVM’s support for module-level inline assembly by adding a global_asm! macro. The syntax is very simple: it just takes a string literal containing the assembly code.

Example:

global_asm!(r#"
.globl my_asm_func
my_asm_func:
    ret
"#);

extern {
    fn my_asm_func();
}

Motivation

There are two main use cases for this feature. The first is that it allows functions to be written completely in assembly, which mostly eliminates the need for a naked attribute. This is mainly useful for functions that use a custom calling convention, such as interrupt handlers.

Another important use case is that it allows external assembly files to be used in a Rust module without needing hacks in the build system:

global_asm!(include_str!("my_asm_file.s"));

Assembly files can also be preprocessed or generated by build.rs (for example using the C preprocessor), which will produce output files in the Cargo output directory:

global_asm!(include_str!(concat!(env!("OUT_DIR"), "/preprocessed_asm.s")));

Detailed design

See description above, not much to add. The macro will map directly to LLVM’s module asm.

Drawbacks

Like asm!, this feature depends on LLVM’s integrated assembler.

Alternatives

The current way of including external assembly is to compile the assembly files using gcc in build.rs and link them into the Rust program as a static library.

An alternative for functions written entirely in assembly is to add a #[naked] function attribute.

Unresolved questions

None

Summary

Add a contains method to VecDeque and LinkedList that checks if the collection contains a given item.

Motivation

A contains method exists for the slice type [T] and for Vec through Deref, but there is no easy way to check if a VecDeque or LinkedList contains a specific item. Currently, the shortest way to do it is something like:

vec_deque.iter().any(|e| e == item)

While this is not insanely verbose, a contains method has the following advantages:

  • the name contains expresses the programmer’s intent…
  • … and thus is more idiomatic
  • it’s as short as it can get
  • programmers that are used to calling contains on a Vec are confused by the non-existence of the method for VecDeque or LinkedList

Detailed design

Add the following method to std::collections::VecDeque:

impl<T> VecDeque<T> {
    /// Returns `true` if the `VecDeque` contains an element equal to the
    /// given value.
    pub fn contains(&self, x: &T) -> bool
        where T: PartialEq<T>
    {
        // implementation with a result equivalent to the result
        // of `self.iter().any(|e| e == x)`
    }
}

Add the following method to std::collections::LinkedList:

impl<T> LinkedList<T> {
    /// Returns `true` if the `LinkedList` contains an element equal to the
    /// given value.
    pub fn contains(&self, x: &T) -> bool
        where T: PartialEq<T>
    {
        // implementation with a result equivalent to the result
        // of `self.iter().any(|e| e == x)`
    }
}

The new methods should probably be marked as unstable initially and be stabilized later.
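Usage would then look like this (a sketch assuming the methods above):

use std::collections::VecDeque;

fn main() {
    let deque: VecDeque<u32> = vec![1, 2, 3].into_iter().collect();
    assert!(deque.contains(&2));
    assert!(!deque.contains(&7));
}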

Drawbacks

Obviously more methods increase the complexity of the standard library, but in case of this RFC the increase is rather tiny.

While VecDeque::contains should be (nearly) as fast as [T]::contains, LinkedList::contains will probably be much slower due to the cache inefficient nature of a linked list. Offering a method that is short to write and convenient to use could lead to excessive use of said method without knowing about the problems mentioned above.

Alternatives

There are a few alternatives:

  • add VecDeque::contains only and do not add LinkedList::contains
  • do nothing, because – technically – the same functionality is offered through iterators
  • also add BinaryHeap::contains, since it could be convenient for some use cases, too

Unresolved questions

None so far.

Summary

A closure that does not move, borrow, or otherwise access (capture) local variables should be coercible to a function pointer (fn).

Motivation

Currently in Rust, it is impossible to bind anything but a pre-defined function as a function pointer. When dealing with closures, one must either rely upon Rust’s type-inference capabilities, or use the Fn trait to abstract for any closure with a certain type signature.

It is not possible to define a function while at the same time binding it to a function pointer.

This is, admittedly, a convenience-motivated feature, but in certain situations the inability to bind code this way creates a significant amount of boilerplate. For example, when attempting to create an array of small, simple, but unique functions, it would be necessary to pre-define each and every function beforehand:

fn inc_0(var: &mut u32) {}
fn inc_1(var: &mut u32) { *var += 1; }
fn inc_2(var: &mut u32) { *var += 2; }
fn inc_3(var: &mut u32) { *var += 3; }

const foo: [fn(&mut u32); 4] = [
  inc_0,
  inc_1,
  inc_2,
  inc_3,
];

This is a trivial example, and one that might not seem too consequential, but the code doubles with every new item added to the array. With a large amount of elements, the duplication begins to seem unwarranted.

A solution, of course, is to use an array of Fn instead of fn:

const foo: [&'static Fn(&mut u32); 4] = [
  &|var: &mut u32| {},
  &|var: &mut u32| *var += 1,
  &|var: &mut u32| *var += 2,
  &|var: &mut u32| *var += 3,
];

And this seems to fix the problem. Unfortunately, however, because we use a reference to the Fn trait, an extra layer of indirection is added when attempting to run foo[n](&mut bar).

Rust must use dynamic dispatch in this situation; a closure with captures is nothing but a struct containing references to captured variables. The code associated with a closure must be able to access those references stored in the struct.

In situations where this function pointer array is particularly hot code, any optimizations would be appreciated. More generally, it is always preferable to avoid unnecessary indirection. And, of course, it is impossible to use this syntax when dealing with FFI.

Aside from code-size nits, anonymous functions are legitimately useful for programmers. In the case of callback-heavy code, for example, it can be impractical to define functions out-of-line, with the requirement of producing confusing (and unnecessary) names for each. In the very first example given, inc_X names were used for the out-of-line functions, but more complicated behavior might not be so easily representable.

Finally, this sort of automatic coercion is simply intuitive to the programmer. In the &Fn example, no variables are captured by the closures, so the theory is that nothing stops the compiler from treating them as anonymous functions.

Detailed design

In C++, non-capturing lambdas (the C++ equivalent of closures) “decay” into function pointers when they do not need to capture any variables. This is used, for example, to pass a lambda into a C function:

void foo(void (*foobar)(void)) {
    // impl
}
void bar() {
    foo([]() { /* do something */ });
}

With this proposal, Rust users would be able to do the same:

fn foo(foobar: fn()) {
    // impl
}
fn bar() {
    foo(|| { /* do something */ });
}

Using the examples within “Motivation”, the code array would be simplified to no performance detriment:

const foo: [fn(&mut u32); 4] = [
  |var: &mut u32| {},
  |var: &mut u32| *var += 1,
  |var: &mut u32| *var += 2,
  |var: &mut u32| *var += 3,
];

Because there does not exist any item in the language that directly produces a fn type, even fn items must go through the process of reification. To perform the coercion, then, rustc must additionally allow the reification of non-capturing (zero-sized) closures to fn types. The implementation of this is simplified by the fact that closures’ capture information is recorded on the type level.

Note: once explicitly assigned to an Fn trait, the closure can no longer be coerced into fn, even if it has no captures.

let a: &Fn(u32) -> u32 = |foo: u32| { foo + 1 };
let b: fn(u32) -> u32 = *a; // Can't re-coerce

Drawbacks

This proposal could potentially allow Rust users to accidentally constrain their APIs. In the case of a crate, a user returning fn instead of Fn may find that their code compiles at first, but breaks when the user later needs to capture variables:

// The specific syntax is more convenient to use
fn func_specific(&self) -> (fn() -> u32) {
  || return 0
}

fn func_general<'a>(&'a self) -> impl Fn() -> u32 {
  move || return self.field
}

In the above example, the API author could start off with the specific version of the function, and by circumstance later need to capture a variable. The required change from fn to Fn could be a breaking change.

We do expect crate authors to measure their API’s flexibility in other areas, however, as when determining whether to take &self or &mut self. Taking a similar situation to the above:

fn func_specific<'a>(&'a self) -> impl Fn() -> u32 {
  move || return self.field
}

fn func_general<'a>(&'a mut self) -> impl FnMut() -> u32 {
  move || { self.field += 1; return self.field; }
}

This aspect is probably outweighed by convenience, simplicity, and the potential for optimization that comes with the proposed changes.

Alternatives

Function literal syntax

With this alternative, Rust users would be able to directly bind a function to a variable, without needing to give the function a name.

let foo = fn() { /* do something */ };
foo();
const foo: [fn(&mut u32); 4] = [
  fn(var: &mut u32) {},
  fn(var: &mut u32) { *var += 1 },
  fn(var: &mut u32) { *var += 2 },
  fn(var: &mut u32) { *var += 3 },
];

This isn’t ideal, however, because it would require giving new semantics to fn syntax. Additionally, such syntax would either require explicit return types, or additional reasoning about the literal’s return type.

fn(x: bool) { !x }

The above function literal, at first glance, appears to return (). This could be potentially misleading, especially in situations where the literal is bound to a variable with let.

As with all new syntax, this alternative would carry with it a discovery barrier. Closure coercion may be preferred due to its intuitiveness.

Aggressive optimization

This is possibly unrealistic, but an alternative would be to continue encouraging the use of closures with the Fn trait, but use static analysis to determine when the used closure is “trivial” and does not need indirection.

Of course, this would probably significantly complicate the optimization process, and would have the detriment of not being easily verifiable by the programmer without checking the disassembly of their program.

Unresolved questions

Should we generalize this behavior in the future, so that any zero-sized type that implements Fn can be converted into a fn pointer?

Summary

This RFC proposes accepting literals in attributes by defining the grammar of attributes as:

attr : '#' '!'? '[' meta_item ']' ;

meta_item : IDENT ( '=' LIT | '(' meta_item_inner? ')' )? ;

meta_item_inner : (meta_item | LIT) (',' meta_item_inner)? ;

Note that LIT is a valid Rust literal and IDENT is a valid Rust identifier. The following attributes, among others, would be accepted by this grammar:

#[attr]
#[attr(true)]
#[attr(ident)]
#[attr(ident, 100, true, "true", ident = 100, ident = "hello", ident(100))]
#[attr(100)]
#[attr(enabled = true)]
#[enabled(true)]
#[attr("hello")]
#[repr(C, align = 4)]
#[repr(C, align(4))]

Motivation

At present, literals are only accepted as the value of a key-value pair in attributes. What’s more, only string literals are accepted. This means that literals can only appear in the forms #[attr(name = "value")] or #[attr = "value"].

This forces non-string literal values to be awkwardly stringified. For example, while it is clear that something like alignment should be an integer value, the following are disallowed: #[align(4)], #[align = 4]. Instead, we must use something akin to #[align = "4"]. Even #[align("4")] and #[name("name")] are disallowed, forcing key-value pairs or identifiers to be used instead: #[align(size = "4")] or #[name(name)].

In short, the current design forces users to use values of a single type, and thus occasionally the wrong type, in attributes.

Cleaner Attributes

Implementation of this RFC can clean up the following attributes in the standard library:

  • #![recursion_limit = "64"] => #![recursion_limit = 64] or #![recursion_limit(64)]
  • #[cfg(all(unix, target_pointer_width = "32"))] => #[cfg(all(unix, target_pointer_width = 32))]

If align were to be added as an attribute, the following are now valid options for its syntax:

  • #[repr(align(4))]
  • #[repr(align = 4)]
  • #[align = 4]
  • #[align(4)]

Syntax Extensions

As syntax extensions mature and become more widely used, being able to use literals in a variety of positions becomes more important.

Detailed design

To clarify, literals are:

  • Strings: "foo", r##"foo"##
  • Byte Strings: b"foo"
  • Byte Characters: b'f'
  • Characters: 'a'
  • Integers: 1, 1{i,u}{8,16,32,64,size}
  • Floats: 1.0, 1.0f{32,64}
  • Booleans: true, false

They are defined in the manual and by implementation in the AST.

Implementation of this RFC requires the following changes:

  1. The MetaItemKind structure would need to allow literals as top-level entities:

    pub enum MetaItemKind {
        Word(InternedString),
        List(InternedString, Vec<P<MetaItem>>),
        NameValue(InternedString, Lit),
        Literal(Lit),
    }
  2. libsyntax (libsyntax/parse/attr.rs) would need to be modified to allow literals as values in k/v pairs and as top-level entities of a list.

  3. Crate metadata encoding/decoding would need to encode and decode literals in attributes.
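As a rough illustration of point 1, a consumer of attributes could then match on the new Literal variant. This is a hedged sketch only: the surrounding names (`meta.node`, the `MetaItem` fields) are assumptions about the libsyntax of the time, not part of this RFC.

// Hypothetical helper: extract the literal from `#[align(4)]`.
fn align_from_attr(meta: &MetaItem) -> Option<Lit> {
    if let MetaItemKind::List(ref name, ref items) = meta.node {
        if name == "align" && items.len() == 1 {
            if let MetaItemKind::Literal(ref lit) = items[0].node {
                return Some(lit.clone());
            }
        }
    }
    None
}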

Drawbacks

This RFC requires a change to the AST and is likely to break syntax extensions using attributes in the wild.

Alternatives

Token trees

An alternative is to allow any tokens inside of an attribute. That is, the grammar could be:

attr : '#' '!'? '[' TOKEN+ ']' ;

where TOKEN is any valid Rust token. The drawback to this approach is that attributes lose any sense of structure. This results in more difficult and verbose attribute parsing, although this could be ameliorated through libraries. Further, this would require almost all of the existing attribute parsing code to change.

The advantage, of course, is that it allows any syntax and is rather future-proof. It is also more in line with macro!s.

Allow only unsuffixed literals

This RFC proposes allowing any valid Rust literals in attributes. Instead, the use of literals could be restricted to only those that are unsuffixed. That is, only the following literals could be allowed:

  • Strings: "foo"
  • Characters: 'a'
  • Integers: 1
  • Floats: 1.0
  • Booleans: true, false

This cleans up the appearance of attributes while still increasing flexibility.

Allow literals only as values in k/v pairs

Instead of allowing literals in top-level positions, i.e. #[attr(4)], only allow them as values in key value pairs: #[attr = 4] or #[attr(ident = 4)]. This has the nice advantage that it was the initial idea for attributes, and so the AST types already reflect this. As such, no changes would have to be made to existing code. The drawback, of course, is the lack of flexibility. #[repr(C, align(4))] would no longer be valid.

Do nothing

Of course, the current design could be kept. It seems, though, that the initial intention was for some form of literals to be allowed; unfortunately, that idea was scrapped due to release pressure and never revisited. Even the reference alludes to allowing all literals as values in k/v pairs.

Unresolved questions

None that I can think of.

Summary

Some internal and language-level changes to name resolution.

Internally, name resolution will be split into two parts - import resolution and name lookup. Import resolution is moved forward in time to happen in the same phase as parsing and macro expansion. Name lookup remains where name resolution currently takes place (that may change in the future, but is outside the scope of this RFC). However, name lookup can be done earlier if required (importantly it can be done during macro expansion to allow using the module system for macros, also outside the scope of this RFC). Import resolution will use a new algorithm.

The observable effects of this RFC (i.e., language changes) are some increased flexibility in the name resolution rules, especially around globs and shadowing.

There is an implementation of the language changes in PR #32213.

Motivation

Naming and importing macros currently works very differently to naming and importing any other item. It would be impossible to use the same rules, since macro expansion happens before name resolution in the compilation process. Implementing this RFC means that macro expansion and name resolution can happen in the same phase, thus allowing macros to use the Rust module system properly.

At the same time, we should be able to accept more Rust programs by tweaking the current rules around imports and name shadowing. This should make programming using imports easier.

Some issues in Rust’s name resolution

Whilst name resolution is sometimes considered a simple part of the compiler, there are some details in Rust which make it tricky to properly specify and implement. Some of these may seem obvious, but the distinctions will be important later.

  • Imported vs declared names - a name can be imported (e.g., use foo;) or declared (e.g., fn foo ...).

  • Single vs glob imports - a name can be explicitly (e.g., use a::foo;) or implicitly imported (e.g., use a::*; where foo is declared in a).

  • Public vs private names - the visibility of names is somewhat tied up with name resolution, for example in current Rust use a::*; only imports the public names from a.

  • Lexical scoping - a name can be inherited from a surrounding scope, rather than being declared in the current one, e.g., let foo = ...; { foo(); }.

  • There are different kinds of scopes - at the item level, names are not inherited from outer modules into inner modules. Items may also be declared inside functions and blocks within functions, with different rules from modules. At the expression level, blocks ({...}) give explicit scope, however, from the point of view of macro hygiene and region inference, each let statement starts a new implicit scope.

  • Explicitly declared vs macro generated names - a name can be declared explicitly in the source text, or could be declared as the result of expanding a macro.

  • Rust has multiple namespaces - types, values, and macros exist in separate namespaces (some items produce names in multiple namespaces). Imports refer (implicitly) to one or more names in different namespaces.

    Note that all top-level (i.e., not parameters, etc.) path segments in a path other than the last must be in the type namespace, e.g., in a::b::c, a and b are assumed to be in the type namespace, and c may be in any namespace.

  • Rust has an implicit prelude - the prelude defines a set of names which are always (unless explicitly opted-out) nameable. The prelude includes macros. Names in the prelude can be shadowed by any other names.

Detailed design

Guiding principles

We would like the following principles to hold. There may be edge cases where they do not, but we would like these to be as small as possible (and prefer they don’t exist at all).

Avoid ‘time-travel’ ambiguities, or different results of resolution if names are resolved in different orders.

Due to macro expansion, it is possible for a name to be resolved and then to become ambiguous, or (with rules formulated in a certain way) for a name to be resolved, then to be ambiguous, then to be resolvable again (possibly to different bindings).

Furthermore, there is some flexibility in the order in which macros can be expanded. How a name resolves should be consistent under any ordering.

The strongest form of this principle, I believe, is that at any stage of macro expansion, and under any ordering of expansions, if a name resolves to a binding then it should always (i.e., at any other stage of any other expansion series) resolve to that binding, and if resolving a name produces an error (n.b., distinct from not being able to resolve), it should always produce an error.

Avoid errors due to the resolver being stuck.

Errors with concrete causes and explanations are easier for the user to understand and to correct. If an error is caused by name resolution getting stuck, rather than by a concrete problem, this is hard to explain or correct.

For example, if we support a rule that means that a certain glob can’t be expanded before a macro is, but the macro can only be named via that glob import, then there is an obvious resolution that can’t be reached due to our ordering constraints.

The order of declarations of items should be irrelevant.

I.e., names should be able to be used before they are declared. Note that this clearly does not hold for declarations of variables in statements inside function bodies.
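A minimal illustration of this principle:

fn main() {
    f(); // ok: `f` is declared later in the file
}

fn f() {}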

Macros should be manually expandable.

Compiling a program should have the same result before and after expanding a macro ‘by hand’, so long as hygiene is accounted for.

Glob imports should be manually expandable.

A programmer should be able to replace a glob import with a list import that imports any names imported by the glob and used in the current scope, without changing name resolution behaviour.
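For example, a sketch of a manual expansion (assuming only foo and bar from a are used in the importing scope):

mod a {
    pub fn foo() {}
    pub fn bar() {}
    pub fn unused() {}
}

mod b {
    // use a::*;
    use a::{foo, bar}; // the manual expansion; resolution must not change

    fn g() { foo(); bar(); }
}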

Visibility should not affect name resolution.

Clearly, visibility affects whether a name can be used or not. However, it should not affect the mechanics of name resolution. I.e., changing a name from public to private (or vice versa), should not cause more or fewer name resolution errors (it may of course cause more or fewer accessibility errors).

Changes to name resolution rules

Multiple unused imports

A name may be imported multiple times; it is only a name resolution error if that name is used. E.g.,

mod foo {
    pub struct Qux;
}

mod bar {
    pub struct Qux;
}

mod baz {
    use foo::*;
    use bar::*; // Ok, no name conflict.
}

In this example, adding a use of Qux in baz would cause a name resolution error.
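Concretely, the error case would look like this:

mod baz {
    use foo::*;
    use bar::*;

    fn f(q: Qux) {} // ERROR: `Qux` could refer to `foo::Qux` or `bar::Qux`
}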

Multiple imports of the same binding

A name may be imported multiple times and used if both names bind to the same item. E.g.,

mod foo {
    pub struct Qux;
}

mod bar {
    pub use foo::Qux;
}

mod baz {
    use foo::*;
    use bar::*;

    fn f(q: Qux) {}
}

Non-public imports

Currently use and pub use items are treated differently. Non-public imports will be treated in the same way as public imports, so they may be referenced from modules which have access to them. E.g.,

mod foo {
    pub struct Qux;
}

mod bar {
    use foo::Qux;

    mod baz {
        use bar::Qux; // Ok
    }
}

Glob imports of accessible but not public names

Glob imports will import all accessible names, not just public ones. E.g.,

struct Qux;

mod foo {
    use super::*;

    fn f(q: Qux) {} // Ok
}

This change is backwards incompatible. However, the second rule above should address most cases, e.g.,

struct Qux;

mod foo {
    use super::*;
    use super::Qux; // Legal due to the second rule above.

    fn f(q: Qux) {} // Ok
}

The below rule (though more controversial) should make this change entirely backwards compatible.

Note that in combination with the above rule, this means non-public imports are imported by globs where they are private but accessible.

Explicit names may shadow implicit names

Here, an implicit name means a name imported via a glob or inherited from an outer scope (as opposed to being declared or imported directly in an inner scope).

An explicit name may shadow an implicit name without causing a name resolution error. E.g.,

mod foo {
    pub struct Qux;
}

mod bar {
    pub struct Qux;
}

mod baz {
    use foo::*;

    struct Qux; // Shadows foo::Qux.
}

mod boz {
    use foo::*;
    use bar::Qux; // Shadows foo::Qux; note, ordering is not important.
}

or

fn main() {
    struct Foo; // 1.
    {
        struct Foo; // 2.

        let x = Foo; // Ok and refers to declaration 2.
    }
}

Note that shadowing is namespace specific. I believe this is consistent with our general approach to name spaces. E.g.,

mod foo {
    pub struct Qux;
}

mod bar {
    pub trait Qux {}
}

mod boz {
    use foo::*;
    use bar::Qux; // Shadows only in the type name space.

    fn f(x: &Qux) {   // bound to bar::Qux.
        let _ = Qux;  // bound to foo::Qux.
    }
}

Caveat: an explicit name which is defined by the expansion of a macro does not shadow implicit names. Example:

macro_rules! foo {
    () => {
        fn foo() {}
    }
}

mod a {
    fn foo() {}
}

mod b {
    use a::*;

    foo!(); // Expands to `fn foo() {}`, this `foo` does not shadow the `foo`
            // imported from `a` and therefore there is a duplicate name error.
}

The rationale for this caveat is so that during import resolution, if we have a glob import (or other implicit name) we can be sure that any imported names will not be shadowed, either the name will continue to be valid, or there will be an error. Without this caveat, a name could be valid, and then after further expansion, become shadowed by a higher priority name.

An error is reported if there is an ambiguity between names due to the lack of shadowing, e.g., (this example assumes modularised macros),

macro_rules! foo {
    () => {
        macro! bar { ... }
    }
}

mod a {
    macro! bar { ... }
}

mod b {
    use a::*;

    foo!(); // Expands to `macro! bar { ... }`.

    bar!(); // ERROR: bar is ambiguous.
}

Note on the caveat: an error will only be emitted if an ambiguous name is used directly or indirectly in a macro use, i.e., if it is the name of a macro that is used, or the name of a module that is used to name a macro, either in a macro use or in an import.

Alternatives: we could emit an error even if the ambiguous name is not used, or as a compromise between these two, we could emit an error if the name is in the type or macro namespace (a name in the value namespace can never cause problems).

This change is discussed in issue 31337 and on this RFC PR’s comment thread.

Re-exports, namespaces, and visibility.

(This is something of a clarification point, rather than explicitly new behaviour. See also discussion on issue 31783).

An import (use) or re-export (pub use) imports a name in all available namespaces. E.g., use a::foo; will import foo in the type and value namespaces if it is declared in those namespaces in a.

For a name to be re-exported, it must be public, e.g., pub use a::foo; requires that foo is declared publicly in a. This is complicated by namespaces. The following behaviour should be followed for a re-export of foo:

  • foo is private in all namespaces in which it is declared - emit an error.
  • foo is public in all namespaces in which it is declared - foo is re-exported in all namespaces.
  • foo is mixed public/private - foo is re-exported in the namespaces in which it is declared publicly and imported but not re-exported in namespaces in which it is declared privately.
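A hedged sketch of the mixed case (the lowercase, braced struct name is unidiomatic and used here only so that foo occupies both namespaces):

mod a {
    pub fn foo() {}        // `foo` is public in the value namespace
    struct foo { _x: u8 }  // `foo` is private in the type namespace
}

// Re-exports `foo` in the value namespace; in the type namespace,
// `foo` is only imported, not re-exported.
pub use a::foo;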

For a glob re-export, there is an error if there are no public items in any namespace. Otherwise private names are imported and public names are re-exported on a per-namespace basis (i.e., following the above rules).

Changes to the implementation

Note: below I talk about “the binding table”, this is sort of hand-waving. I’m envisaging a sets-of-scopes system where there is effectively a single, global binding table. However, the details of that are beyond the scope of this RFC. One can imagine “the binding table” means one binding table per scope, as in the current system.

Currently, parsing and macro expansion happen in the same phase. With this proposal, we add import resolution to that mix too. Binding tables as well as the AST will be produced by libsyntax. Name lookup will continue to be done where name resolution currently takes place.

To resolve imports, the algorithm proceeds as follows: we start by parsing as much of the program as we can; like today we don’t parse macros. When we find items which bind a name, we add the name to the binding table. When we find an import which can’t be resolved, we add it to a work list. When we find a glob import, we have to record a ‘back link’, so that when a public name is added for the supplying module, we can add it for the importing module.

We then loop over the work list and try to lookup names. If a name has exactly one best binding then we use it (and record the binding on a list of resolved names). If there are zero then we put it back on the work list. If there is more than one binding, then we record an ambiguity error. When we reach a fixed point, i.e., the work list no longer changes, then we are done. If the work list is empty, then expansion/import resolution succeeded, otherwise there are names not found, or ambiguous names, and we failed.

As we are looking up names, we record the resolutions in the binding table. If the name we are looking up is for a glob import, we add bindings for every accessible name currently known.

To expand a macro use, we try to resolve the macro’s name. If that fails, we put it on the work list. Otherwise, we expand that macro by parsing the arguments, pattern matching, and doing hygienic expansion. We then parse the generated code in the same way as we parsed the original program. We add new names to the binding table, and expand any new macro uses.

If we add names for a module which has back links, we must follow them and add these names to the importing module (if they are accessible).

In pseudo-code:

// Assumes parsing is already done, but the two things could be done in the same
// pass.
fn parse_expand_and_resolve() {
    loop until fixed point {
        process_names()
        loop until fixed point {
            process_work_list()
        }
        expand_macros()
    }

    if work_list.is_empty() {
        success!()
    } else {
        for item in work_list {
            report_error(item)
        }
    }
}

fn process_names() {
    // 'module' includes `mod`s, top level of the crate, function bodies
    for each unseen item in any module {
        if item is a definition {
            // struct, trait, type, local variable def, etc.
            bindings.insert(item.name, module, item)
            populate_back_links(module, item)
        } else {
            try_to_resolve_import(module, item)
        }
        record_macro_uses()
    }
}

fn try_to_resolve_import(module, item) {
    if item is an explicit use {
        // item is use a::b::c as d;
        match try_to_resolve(item) {
            Ok(r) => {
                bindings.insert(d, module, r, Priority::Explicit)
                populate_back_links(module, item)
            }
            Err() => work_list.push(module, item)
        }
    } else if item is a glob {
        // use a::b::*;
        match try_to_resolve(a::b) {
            Ok(n) => {
                for binding in n {
                    bindings.insert_if_no_higher_priority_binding(binding.name, module, binding, Priority::Glob)
                    populate_back_links(module, binding)
                }
                add_back_link(n, module) // link `n` to `module`
                work_list.remove()
            }
            Err(_) => work_list.push(module, item)
        }
    }    
}

fn process_work_list() {
    for each (module, item) in work_list {
        work_list.remove()
        try_to_resolve_import(module, item)
    }
}

Note that this pseudo-code elides some details: that names are imported into distinct namespaces (the type and value namespaces, and with changes to macro naming, also the macro namespace), and that we must record whether a name is due to macro expansion or not to abide by the caveat to the ‘explicit names shadow glob names’ rule.

If Rust had a single namespace (or had some other properties), we would not have to distinguish between failed and unresolved imports. However, it does and we must. This is not clear from the pseudo-code because it elides namespaces, but consider the following small example:

use a::foo; // foo exists in the value namespace of a.
use b::*;   // foo exists in the type namespace of b.

Can we resolve a use of foo in type position to the import from b? That depends on whether foo exists in the type namespace in a. If we can prove that it does not (i.e., resolution fails) then we can use the glob import. If we cannot (i.e., the name is unresolved but we can’t prove it will not resolve later), then it is not safe to use the glob import because it may be shadowed by the explicit import. (Note, since foo exists in at least the value namespace in a, there will be no error due to a bad import).

In order to keep macro expansion comprehensible to programmers, we must enforce that all macro uses resolve to the same binding at the end of resolution as they do when they were resolved.

We rely on a monotonicity property in macro expansion - once an item exists in a certain place, it will always exist in that place. It will never disappear and never change. Note that for the purposes of this property, I do not consider code annotated with a macro to exist until it has been fully expanded.

A consequence of this is that if the compiler resolves a name, then does some expansion and resolves it again, the first resolution will still be valid. However, another resolution may appear, so the resolution of a name may change as we expand. It can also change from a good resolution to an ambiguity. It is also possible to change from good to ambiguous to good again. There is even an edge case where we go from good to ambiguous to the same good resolution (but via a different route).

If import resolution succeeds, then we check our record of name resolutions. We re-resolve and check that we get the same result. We can also check for unused macros at this point.

Note that the rules in the previous section have been carefully formulated to ensure that this check is sufficient to prevent temporal ambiguities. There are many slight variations for which this check would not be enough.

Privacy

In order to resolve imports (and in the future for macro privacy), we must be able to decide if names are accessible. This requires doing privacy checking as required during parsing/expansion/import resolution. We can keep the current algorithm, but check accessibility on demand, rather than as a separate pass.

During macro expansion, once a name is resolvable, then we can safely perform privacy checking, because parsing and macro expansion will never remove items, nor change the module structure of an item once it has been expanded.

Metadata

When a crate is packed into metadata, we must also include the binding table. We must include private entries due to macros that the crate might export. We don’t need data for function bodies. For functions which are serialised for inlining/monomorphisation, we should include local data (although it’s probably better to serialise the HIR or MIR, then the local bindings are unnecessary).

Drawbacks

It’s a lot of work, and name resolution is complex, so there is scope for introducing bugs.

The macro changes are not backwards compatible, which means having a macro system 2.0. If users are reluctant to use that, we will have two macro systems forever.

Alternatives

Naming rules

We could take a subset of the shadowing changes (or none at all), whilst still changing the implementation of name resolution. In particular, we might want to discard the explicit/glob shadowing rule change, or only allow items, not imported names, to shadow.

We could also consider different shadowing rules around namespacing. In the ‘globs and explicit names’ rule change, we could consider an explicit name to shadow both name spaces and emit a custom error. The example becomes:

mod foo {
    pub struct Qux;
}

mod bar {
    pub trait Qux {}
}

mod boz {
    use foo::*;
    use bar::Qux; // Shadows both name spaces.

    fn f(x: &Qux) {   // bound to bar::Qux.
        let _ = Qux;  // ERROR, unresolved name Qux; the compiler would emit a
                      // note about shadowing and namespaces.
    }
}

Import resolution algorithm

Rather than lookup names for imports during the fixpoint iteration, one could save links between imports and definitions. When lookup is required (for macros, or later in the compiler), these links are followed to find a name, rather than having the name being immediately available.

Unresolved questions

Name lookup

The name resolution phase would be replaced by a cut-down name lookup phase, where the binding tables generated during expansion are used to lookup names in the AST.

We could go further: two appealing possibilities are merging name lookup with the lowering from AST to HIR, so that the HIR is a name-resolved data structure, or doing name lookup lazily (probably with some caching) so that no tables binding names to definitions are kept. I prefer the first option, but this is not really in scope for this RFC.

pub(restricted)

Where this RFC touches on the privacy system there are some edge cases involving the pub(path) form of restricted visibility. I expect the precise solutions will be settled during implementation and this RFC should be amended to reflect those choices.

Summary

Naming and modularisation for macros.

This RFC proposes making macros a first-class citizen in the Rust module system. Both macros by example (macro_rules macros) and procedural macros (aka syntax extensions) would use the same naming and modularisation scheme as other items in Rust.

For procedural macros, this RFC could be implemented immediately or as part of a larger effort to reform procedural macros. For macros by example, this would be part of a macros 2.0 feature, the rest of which will be described in a separate RFC. This RFC depends on the changes to name resolution described in RFC 1560.

Motivation

Currently, procedural macros are not modularised at all (beyond the crate level). Macros by example have a custom modularisation scheme which involves modules to some extent, but relies on source ordering and attributes which are not used for other items. Macros cannot be imported or named using the usual syntax. It is confusing that macros use their own system for modularisation. It would be far nicer if they were a more regular feature of Rust in this respect.

Detailed design

Defining macros

This RFC does not propose changes to macro definitions. It is envisaged that definitions of procedural macros will change, see this blog post for some rough ideas. I’m assuming that procedural macros will be defined in some function-like way and that these functions will be defined in modules in their own crate (to start with).

Ordering of macro definitions in the source text will no longer be significant. A macro may be used before it is defined, as long as it can be named. That is, macros follow the same rules regarding ordering as other items. E.g., this will work:

foo!();

macro! foo { ... }

(Note, I’m using a hypothetical macro! definition which I will define in a future RFC. The reader can assume it works much like macro_rules!, but with the new naming scheme).

Macro expansion order is also not defined by source order. E.g., in foo!(); bar!();, bar may be expanded before foo. Ordering is only guaranteed as far as it is necessary. E.g., if bar is only defined by expanding foo, then foo must be expanded before bar.

Function-like macro uses

A function-like macro use (c.f., attribute-like macro use) is a macro use which uses foo!(...) or foo! ident (...) syntax (where () may also be [] or {}).

Macros may be named by using a ::-separated path. Naming follows the same rules as other items in Rust.

If a macro baz (by example or procedural) is defined in a module bar which is nested in foo, then it may be used anywhere in the crate using an absolute path: ::foo::bar::baz!(...). It can be used via relative paths in the usual way, e.g., inside foo as bar::baz!().

Macros declared inside a function body can only be used inside that function body.

For procedural macros, the path must point to the function defining the macro.
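Putting these rules together, a sketch using the hypothetical macro! definition form from above:

mod foo {
    pub mod bar {
        macro! baz { ... }
    }
}

fn main() {
    ::foo::bar::baz!(); // absolute path to the macro
}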

The grammar for macros is changed: anywhere we currently parse name "!", we now parse path "!". I don’t think this introduces any issues.

Name lookup follows the same name resolution rules as other items. See RFC 1560 for details on how name resolution could be adapted to support this.

Attribute-like macro uses

Attribute macros may also be named using a ::-separated path. Other than appearing in an attribute, these also follow the usual Rust naming rules.

E.g., #[::foo::bar::baz(...)] and #[bar::baz(...)] are uses of absolute and relative paths, respectively.

Importing macros

Importing macros is done using use in the same way as other items. An ! is not necessary in an import item. Macros are imported into their own namespace and do not shadow or overlap items with the same name in the type or value namespaces.

E.g., use foo::bar::baz; imports the macro baz from the module ::foo::bar. Macro imports may be used in import lists (with other macro imports and with non-macro imports).

Where a glob import (use ...::*;) imports names from a module including macro definitions, the names of those macros are also imported. E.g., use foo::bar::*; would import baz along with any other items in foo::bar.
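A sketch of the import forms described above (the module and macro names are illustrative):

use foo::bar::baz;         // import the macro `baz`
use foo::bar::{baz, qux};  // in an import list, beside other items
use foo::bar::*;           // a glob also imports any macros in `foo::bar`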

Where macros are defined in a separate crate, these are imported in the same way as other items by an extern crate item.

No #[macro_use] or #[macro_export] annotations are required.

Shadowing

Macro names follow the same shadowing rules as other names. For example, an explicitly declared macro would shadow a glob-imported macro with the same name. Note that since macros are in a different namespace from types and values, a macro cannot shadow a type or value or vice versa.

Drawbacks

If the new macro system is not well adopted by users, we could be left with two very different schemes for naming macros depending on whether a macro is defined by example or procedurally. That would be inconsistent and annoying. However, I hope we can make the new macro system appealing enough and close enough to the existing system that migration is both desirable and easy.

Alternatives

We could adopt the proposed scheme for procedural macros only and keep the existing scheme for macros by example.

We could adapt the current macros by example scheme to procedural macros.

We could require the ! in macro imports to distinguish them from other names. I don’t think this is necessary or helpful.

We could continue to require macro_export annotations on top of this scheme. However, I prefer moving to a scheme using the same privacy system as the rest of Rust, see below.

Unresolved questions

Privacy for macros

I would like that macros follow the same rules for privacy as other Rust items, i.e., they are private by default and may be marked as pub to make them public. This is not as straightforward as it sounds as it requires parsing pub macro! foo as a macro definition, etc. I leave this for a separate RFC.

Scoped attributes

It would be nice for tools to use scoped attributes as well as procedural macros, e.g., #[rustfmt::skip] or #[rust::new_attribute]. I believe this should be straightforward syntactically, but there are open questions around when attributes are ignored or seen by tools and the compiler. Again, I leave it for a future RFC.

Inline procedural macros

Some day, I hope that procedural macros may be defined in the same crate in which they are used. I leave the details of this for later, however, I don’t think this affects the design of naming - it should all Just Work.

Applying to existing macros

This RFC is framed in terms of a new macro system. There are various ways that some parts of it could be applied to existing macros (macro_rules!) to backwards compatibly make existing macros usable under the new naming system.

I want to leave this question unanswered for now. Until we get some experience implementing this feature it is unclear how much this is possible. Once we know that we can try to decide how much of that is also desirable.

Summary

This RFC proposes an evolution of Rust’s procedural macro system (aka syntax extensions, aka compiler plugins). This RFC specifies syntax for the definition of procedural macros, a high-level view of their implementation in the compiler, and outlines how they interact with the compilation process.

This RFC specifies the architecture of the procedural macro system. It relies on RFC 1561 which specifies the naming and modularisation of macros. It leaves many of the details for further RFCs, in particular the details of the APIs available to macro authors (tentatively called libproc_macro, formerly libmacro). See this blog post for some ideas of how that might look.

RFC 1681 specified a mechanism for custom derive using ‘macros 1.1’. That RFC is essentially a subset of this one. Changes and differences are noted throughout the text.

At the highest level, macros are defined by implementing functions marked with a #[proc_macro] attribute. Macros operate on a list of tokens provided by the compiler and return a list of tokens that the macro use is replaced by. We provide low-level facilities for operating on these tokens. Higher level facilities (e.g., for parsing tokens to an AST) should exist as library crates.

Motivation

Procedural macros have long been a part of Rust and have been used for diverse and interesting purposes, for example compile-time regexes, serialisation, and design by contract. They allow the ultimate flexibility in syntactic abstraction, and offer possibilities for efficiently using Rust in novel ways.

Procedural macros are currently unstable and are awkward to define. We would like to remedy this by implementing a new, simpler system for procedural macros, and for this new system to be on the usual path to stabilisation.

One major problem with the current system is that since it is based on ASTs, if we change the Rust language (even in a backwards compatible way) we can easily break procedural macros. Therefore, offering the usual backwards compatibility guarantees to procedural macros would inhibit our ability to evolve the language. By switching to a token-based (rather than AST-based) system, we hope to avoid this problem.

Detailed design

There are two kinds of procedural macro: function-like and attribute-like. These two kinds exist today, and other than naming (see RFC 1561) the syntax for using these macros remains unchanged. If the macro is called foo, then a function-like macro is used with syntax foo!(...), and an attribute-like macro with #[foo(...)] .... Macros may be used in the same places as macro_rules macros and this remains unchanged.

There is also a third kind, custom derive, which is specified in RFC 1681. This RFC extends the facilities open to custom derive macros beyond the string-based system of RFC 1681.

To define a procedural macro, the programmer must write a function with a specific signature and attribute. Where foo is the name of a function-like macro:

#[proc_macro]
pub fn foo(TokenStream) -> TokenStream;

The first argument is the tokens between the delimiters in the macro use. For example in foo!(a, b, c), the first argument would be [Ident(a), Comma, Ident(b), Comma, Ident(c)].

The value returned replaces the macro use.
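For instance, a trivial identity macro under this design might look as follows. This is a sketch only; the TokenStream API itself is deferred to the future libproc_macro RFC.

#[proc_macro]
pub fn noop(input: TokenStream) -> TokenStream {
    // Replace the macro use `noop!(...)` with exactly the tokens it was given.
    input
}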

Attribute-like:

#[proc_macro_attribute]
pub fn foo(Option<TokenStream>, TokenStream) -> TokenStream;

The first argument is a list of the tokens between the delimiters in the macro use. Examples:

  • #[foo] => None
  • #[foo()] => Some([])
  • #[foo(a, b, c)] => Some([Ident(a), Comma, Ident(b), Comma, Ident(c)])

The second argument is the tokens for the AST node the attribute is placed on. Note that in order to compute the tokens to pass here, the compiler must be able to parse the code the attribute is applied to. However, the AST for the annotated node is discarded; it is neither passed to the macro nor used by the compiler (in practice, this might not be 100% true due to optimisations). If the macro wants an AST, it must parse the tokens itself.

The attribute and the AST node it is applied to are both replaced by the returned tokens. In most cases, the tokens returned by a procedural macro will be parsed by the compiler. It is the procedural macro’s responsibility to ensure that the tokens parse without error. In some cases, the tokens will be consumed by another macro without parsing, in which case they do not need to parse. The distinction is not statically enforced. It could be, but I don’t think the overhead would be justified.
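A sketch of a do-nothing attribute macro under the proposed signature:

#[proc_macro_attribute]
pub fn passthrough(_args: Option<TokenStream>, item: TokenStream) -> TokenStream {
    // Replace the attribute and its item with the item's original tokens,
    // effectively leaving the annotated code unchanged.
    item
}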

Custom derive:

#[proc_macro_derive]
pub fn foo(TokenStream) -> TokenStream;

Similar to attribute-like macros, the item a custom derive applies to must parse. Custom derives may only be applied to the items that a built-in derive may be applied to (structs and enums).

Currently, macros implementing custom derive only have the option of converting the TokenStream to a string and converting a result string back to a TokenStream. This option will remain, but macro authors will also be able to operate directly on the TokenStream (which should be preferred, since it allows for hygiene and span support).
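A sketch of a derive that generates nothing; how an empty TokenStream is constructed is an assumption here, written as a hypothetical TokenStream::empty():

#[proc_macro_derive]
pub fn derive_nothing(_item: TokenStream) -> TokenStream {
    // For derives, the returned tokens are appended after the item rather
    // than replacing it, so an empty stream generates no new code.
    // `TokenStream::empty()` is hypothetical.
    TokenStream::empty()
}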

Procedural macros which take an identifier before the argument list (e.g., foo! bar(...)) will not be supported (at least initially).

My feeling is that this macro form is not used enough to justify its existence. From a design perspective, it encourages uses of macros for language extension, rather than syntactic abstraction. I feel that such macros are at higher risk of making programs incomprehensible and of fragmenting the ecosystem.

Behind the scenes, these functions implement traits for each macro kind. We may in the future allow implementing these traits directly, rather than just implementing the above functions. By adding methods to these traits, we can allow macro implementations to pass data to the compiler, for example, specifying hygiene information or allowing for fast re-compilation.

proc-macro crates

Macros 1.1 added a new crate type: proc-macro. This both allows procedural macros to be declared within the crate, and dictates how the crate is compiled. Procedural macros must use this crate type.
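For reference, a crate opts into this crate type via its manifest:

[lib]
proc-macro = true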

We introduce a special configuration option: #[cfg(proc_macro)]. Items with this configuration are not macros themselves but are compiled only for macro uses.

If a crate is a proc-macro crate, then the proc_macro cfg variable is true for the whole crate. Initially it will be false for all other crates. This has the effect of partitioning crates into macro-defining and non-macro-defining crates. In the future, I hope we can relax these restrictions so that macro and non-macro code can live in the same crate.

Importing macros for use means using extern crate to make the crate available and then using use imports or paths to name macros, just like other items. Again, see RFC 1561 for more details.

When a proc-macro crate is imported with extern crate, its items (even public ones) are not available to the importing crate; only the macros declared in that crate are. There should be a lint to warn about public items which will not be visible due to proc_macro. The crate is used by the compiler at compile-time, rather than linked with the importing crate at runtime.

Macros 1.1 required #[macro_use] on extern crate which imports procedural macros. This will not be required and should be deprecated.

Writing procedural macros

Procedural macro authors should not use the compiler crates (libsyntax, etc.). Using these will remain unstable. We will make available a new crate, libproc_macro, which will follow the usual path to stabilisation, will be part of the Rust distribution, and will be required to be used by procedural macros (because, at the least, it defines the types used in the required signatures).

The details of libproc_macro will be specified in a future RFC. In the meantime, this blog post gives an idea of what it might contain.

The philosophy here is that libproc_macro will contain low-level tools for constructing macros, dealing with tokens, hygiene, pattern matching, quasi-quoting, interactions with the compiler, etc. For higher level abstractions (such as parsing and an AST), macros should use external libraries (there are no restrictions on #[cfg(proc_macro)] crates using other crates).

A MacroContext is an object placed in thread-local storage when a macro is expanded. It contains data about how the macro is being used and defined. It is expected that for most uses, macro authors will not use the MacroContext directly, but it will be used by library functions. It will be more fully defined in the upcoming RFC proposing libproc_macro.

Rust macros are hygienic by default. Hygiene is a large and complex subject, but to summarise: effectively, naming takes place in the context of the macro definition, not the expanded macro.

Procedural macros often want to bend the rules around macro hygiene, for example to make items or variables more widely nameable than they would be by default. Procedural macros will be able to take part in the application of the hygiene algorithm via libproc_macro. Again, full details must wait for the libproc_macro RFC and a sketch is available in this blog post.

Tokens

Procedural macros will primarily operate on tokens. There are two main benefits to this principle: flexibility and future proofing. By operating on tokens, code passed to procedural macros does not need to satisfy the Rust parser, only the lexer. Stabilising an interface based on tokens means we need only commit to not changing the rules around those tokens, not the whole grammar. I.e., it allows us to change the Rust grammar without breaking procedural macros.

In order to make the token-based interface even more flexible and future-proof, I propose a simpler token abstraction than is currently used in the compiler. The proposed system may be used directly in the compiler or may be an interface wrapper over a more efficient representation.

Since macro expansion will not operate purely on tokens, we must keep hygiene information on tokens, rather than on Ident AST nodes (we might be able to optimise by not keeping such info for all tokens, but that is an implementation detail). We will also keep span information for each token, since that is where a record of macro expansion is maintained (and it will make life easier for tools; again, we might optimise internally).

A token is a single lexical element, for example, a numeric literal, a word (which could be an identifier or keyword), a string literal, or a comment.

A token stream is a sequence of tokens, e.g., a b c; is a stream of four tokens - ['a', 'b', 'c', ';'].

A token tree is a tree structure where each leaf node is a token and each interior node is a token stream. I.e., a token stream which can contain nested token streams. A token tree can be delimited, e.g., a (b c); will give TT(None, ['a', TT(Some('()'), ['b', 'c']), ';']). An undelimited token tree is useful for grouping tokens due to expansion, without representation in the source code. That could be used for unsafety hygiene, or to affect precedence and parsing without affecting scoping. They also replace the interpolated AST tokens currently in the compiler.

In code:

// We might optimise this representation
pub struct TokenStream(Vec<TokenTree>);

// A borrowed TokenStream
pub struct TokenSlice<'a>(&'a [TokenTree]);

// A token or token tree.
pub struct TokenTree {
    pub kind: TokenKind,
    pub span: Span,
    pub hygiene: HygieneObject,
}

pub enum TokenKind {
    Sequence(Delimiter, TokenStream),

    // The content of the comment can be found from the span.
    Comment(CommentKind),

    // `text` is the string contents, not including delimiters. It would be nice
    // to avoid an allocation in the common case that the string is in the
    // source code. We might be able to use `&'codemap str` or something.
    // `raw_markers` is for the count of `#`s if the string is a raw string. If
    // the string is not raw, then it will be `None`.
    String { text: Symbol, raw_markers: Option<usize>, kind: StringKind },

    // char literal, span includes the `'` delimiters.
    Char(char),

    // These tokens are treated specially since they are used for macro
    // expansion or delimiting items.
    Exclamation,  // `!`
    Dollar,       // `$`
    // Not actually sure if we need this or if semicolons can be treated like
    // other punctuation.
    Semicolon,    // `;`
    Eof,          // Do we need this?

    // Word is defined by Unicode Standard Annex 31 -
    // [Unicode Identifier and Pattern Syntax](http://unicode.org/reports/tr31/)
    Word(Symbol),
    Punctuation(char),
}

pub enum Delimiter {
    None,
    // { }
    Brace,
    // ( )
    Parenthesis,
    // [ ]
    Bracket,
}

pub enum CommentKind {
    Regular,
    InnerDoc,
    OuterDoc,
}

pub enum StringKind {
    Regular,
    Byte,
}

// A Symbol is a possibly-interned string.
pub struct Symbol { ... }

Note that although tokens exclude whitespace, by examining the spans of tokens, a procedural macro can get the string representation of a TokenStream and thus has access to whitespace information.

Open question: Punctuation(char) and multi-char operators.

Rust has many compound operators, e.g., <<. It’s not clear how best to deal with them. If the source code contains “+ =”, it would be nice to distinguish this in the token stream from “+=”. On the other hand, if we represent << as a single token, then the macro may need to split them into <, < in generic position.

I had hoped to represent each character as a separate token. However, to make pattern matching backwards compatible, we would need to combine some tokens. In fact, if we want to be completely backwards compatible, we probably need to keep the same set of compound operators as are defined at the moment.

Some solutions:

  • Punctuation(char) with special rules for pattern matching tokens,
  • Punctuation([char]) with a facility for macros to split tokens. Tokenising could match the maximum number of punctuation characters, or use the rules for the current token set. The former would have issues with pattern matching. The latter is a bit hacky, there would be backwards compatibility issues if we wanted to add new compound operators in the future.

Staging

  1. Implement RFC 1561.
  2. Implement #[proc_macro] and #[cfg(proc_macro)] and the function approach to defining macros. However, pass the existing data structures to the macros, rather than tokens and MacroContext.
  3. Implement libproc_macro and make this available to macros. At this stage both old and new macros are available (functions with different signatures). This will require an RFC and considerable refactoring of the compiler.
  4. Implement some high-level macro facilities in external crates on top of libproc_macro. It is hoped that much of this work will be community-led.
  5. After some time to allow conversion, deprecate the old-style macros. Later, remove old macros completely.

Drawbacks

Procedural macros are a somewhat unpleasant corner of Rust at the moment. It is hard to argue that some kind of reform is unnecessary. One could find fault with this proposed reform in particular (see below for some alternatives). Some drawbacks that come to mind:

  • providing such a low-level API risks never seeing good high-level libraries;
  • the design is complex and thus will take some time to implement and stabilise, meanwhile unstable procedural macros are a major pain point in current Rust;
  • dealing with tokens and hygiene may discourage macro authors due to complexity, hopefully that is addressed by library crates.

The actual concept of procedural macros also has drawbacks: executing arbitrary code in the compiler makes it vulnerable to crashes and possibly security issues, macros can introduce hard to debug errors, macros can make a program hard to comprehend, it risks creating de facto dialects of Rust and thus fragmentation of the ecosystem, etc.

Alternatives

We could keep the existing system or remove procedural macros from Rust.

We could have an AST-based (rather than token-based) system. This has major backwards compatibility issues.

We could allow plugging in at later stages of compilation, giving macros access to type information, etc. This would allow some really interesting tools. However, it has some large downsides - it complicates the whole compilation process (not just the macro system), it pollutes the whole compiler with macro knowledge, rather than containing it in the frontend, it complicates the design of the interface between the compiler and macro, and (I believe) the use cases are better addressed by compiler plug-ins or tools based on the compiler (the latter can be written today, the former require more work on an interface to the compiler to be practical).

We could use the macro keyword rather than the fn keyword to declare a macro. We would then not require a #[proc_macro] attribute.

We could use #[macro] instead of #[proc_macro] (and similarly for the other attributes). This would require making macro a contextual keyword.

We could have a dedicated syntax for procedural macros, similar to the macro_rules syntax for macros by example. Since a procedural macro is really just a Rust function, I believe using a function is better. I have also not been able to come up with (or seen suggestions for) a good alternative syntax. It seems reasonable to expect to write Rust macros in Rust (although there is nothing stopping a macro author from using FFI and some other language to write part or all of a macro).

For attribute-like macros on items, it would be nice if we could skip parsing the annotated item until after macro expansion. That would allow for more flexible macros, since the input would not be constrained to Rust syntax. However, this would require identifying items from tokens, rather than from the AST, which would require additional rules on token trees and may not be possible.

Unresolved questions

Linking model

Currently, procedural macros are dynamically linked with the compiler. This prevents the compiler being statically linked, which is sometimes desirable. An alternative architecture would have procedural macros compiled as independent programs and have them communicate with the compiler via IPC.

This would have the advantage of allowing static linking for the compiler and would prevent procedural macros from crashing the main compiler process. However, designing a good IPC interface is complicated because there is a lot of data that might be exchanged between the compiler and the macro.

I think we could first design the syntax, interfaces, etc. and later evolve into a process-separated model (if desired). However, if this is considered an essential feature of macro reform, then we might want to consider the interfaces more thoroughly with this in mind.

A step in this direction might be to run the macro in its own thread, but in the compiler’s process.

Interactions with constant evaluation

Both procedural macros and constant evaluation are mechanisms for running Rust code at compile time. Currently, and under the proposed design, they are considered completely separate features. There might be some benefit in letting them interact.

Inline procedural macros

It would be nice to allow procedural macros to be defined in the crate in which they are used, as well as in separate crates (mentioned above). This complicates things since it breaks the invariant that a crate is designed to be used at either compile-time or runtime. I leave it for the future.

Specification of the macro definition function signatures

As proposed, the signatures of functions used as macro definitions are hard-wired into the compiler. It would be more flexible to allow them to be specified by a lang-item. I’m not sure how beneficial this would be, since a change to the signature would require changing much of the procedural macro system. I propose leaving them hard-wired, unless there is a good use case for the more flexible approach.

Specifying delimiters

Under this RFC, a function-like macro use may use either parentheses, braces, or square brackets. The choice of delimiter does not affect the semantics of the macro (the rules requiring braces or a semi-colon for macro uses in item position still apply).

Which delimiter was used should be available to the macro implementation via the MacroContext. I believe this is maximally flexible - the macro implementation can throw an error if it doesn’t like the delimiters used.

We might want to allow the compiler to restrict the delimiters. Alternatively, we might want to hide the information about the delimiter from the macro author, so as not to allow errors regarding delimiter choice to affect the user.

Summary

Rust has extended error messages that explain each error in more detail. We’ve been writing lots of them, which is good, but they’re written in different styles, which is bad. This RFC intends to fix this inconsistency by providing a template for these long-form explanations to follow.

Motivation

Long error code explanations are a very important part of Rust. Having an explanation of what failed helps to understand the error and is appreciated by Rust developers of all skill levels. Providing a unified template is needed in order to help people who want to write explanations as well as people who read them.

Detailed design

Here is what I propose:

Error description

Provide a more detailed error message. For example:

extern crate a;
extern crate b as a;

We get the E0259 error code which says “an extern crate named a has already been imported in this module” and the error explanation says: “The name chosen for an external crate conflicts with another external crate that has been imported into the current module.”.

Minimal example

Provide an erroneous code example which directly follows the Error description. The erroneous example will be helpful for the How to fix the problem section. Making it as simple as possible is really important in order to help readers understand what the error is about. A comment should be added on the same line where the error occurs. Example:

type X = u32<i32>; // error: type parameters are not allowed on this type

If the error comment is too long to fit in 80 columns, split it up like this, so that the next line starts at the same column as the previous line:

type X = u32<'static>; // error: lifetime parameters are not allowed on
                       //        this type

And if the sample code is too long to write an effective comment, place your comment on the line before the sample code:

// error: lifetime parameters are not allowed on this type
fn super_long_function_name_and_thats_problematic() {}

Of course, if the comment is too long, the split rule still applies.

Error explanation

Provide a full explanation about “why you get the error” and some leads on how to fix it. If needed, use additional code snippets to improve your explanations.

How to fix the problem

This part will show how to fix the error that we saw previously in the Minimal example, with comments explaining how it was fixed.

Additional information

Some details which might be useful for users; let’s go back to the E0109 example. At the end, the supplementary explanation is the following: “Note that type parameters for enum-variant constructors go after the variant, not after the enum (Option::None::<u32>, not Option::<u32>::None).”. It provides more information, not directly linked to the error, but it might help the user avoid making another error.

Template

In summary, the template looks like this:

E000: r##"
[Error description]

Example of erroneous code:

\```compile_fail
[Minimal example]
\```

[Error explanation]

\```
[How to fix the problem]
\```

[Optional Additional information]
"##,

Now let’s take a full example:

E0409: r##" An "or" pattern was used where the variable bindings are not consistently bound across patterns.

Example of erroneous code:

let x = (0, 2);
match x {
    (0, ref y) | (y, 0) => { /* use y */} // error: variable `y` is bound with
                                          //        different mode in pattern #2
                                          //        than in pattern #1
    _ => ()
}

Here, y is bound by-value in one case and by-reference in the other.

To fix this error, just use the same mode in both cases. Generally using ref or ref mut where not already used will fix this:

let x = (0, 2);
match x {
    (0, ref y) | (ref y, 0) => { /* use y */}
    _ => ()
}

Alternatively, split the pattern:

let x = (0, 2);
match x {
    (y, 0) => { /* use y */ }
    (0, ref y) => { /* use y */}
    _ => ()
}

"##,

Drawbacks

This will make contributing slightly more complex, as there are rules to follow, whereas right now there are none.

Alternatives

Not having error codes explanations following a common template.

Unresolved questions

None.

  • Feature Name: More API Documentation Conventions
  • Start Date: 2016-03-31
  • RFC PR: rust-lang/rfcs#1574
  • Rust Issue: N/A

Summary

RFC 505 introduced certain conventions around documenting Rust projects. This RFC augments that one, and a full text of the older one combined with these modifications is provided below.

Motivation

Documentation is an extremely important part of any project. It’s important that we have consistency in our documentation.

For the most part, the RFC proposes guidelines that are already followed today, but it tries to motivate and clarify them.

Detailed design

English

This section applies to rustc and the standard library.

Using Markdown

The updated list of common headings is:

  • Examples
  • Panics
  • Errors
  • Safety
  • Aborts
  • Undefined Behavior

RFC 505 suggests that one should always use the rust formatting directive:

```rust
println!("Hello, world!");
```

```ruby
puts "Hello"
```

But, in API documentation, feel free to rely on the default being ‘rust’:

/// For example:
///
/// ```
/// let x = 5;
/// ```

Other places do not know how to highlight this anyway, so it’s not important to be explicit.

RFC 505 suggests that references and citations should be linked ‘reference style.’ This is still recommended, but prefer to leave off the second []:

[Rust website]

[Rust website]: http://www.rust-lang.org

to

[Rust website][website]

[website]: http://www.rust-lang.org

But, if the text is very long, it is okay to use this form.

Examples in API docs

Everything should have examples. Here is an example of how to do examples:

/// # Examples
///
/// ```
/// use op;
///
/// let s = "foo";
/// let answer = op::compare(s, "bar");
/// ```
///
/// Passing a closure to compare with, rather than a string:
///
/// ```
/// use op;
///
/// let s = "foo";
/// let answer = op::compare(s, |a| a.chars().all(|c| c.is_whitespace()));
/// ```

Referring to types

When talking about a type, use its full name. In other words, if the type is generic, say Option<T>, not Option. An exception to this is bounds. Write Cow<'a, B> rather than Cow<'a, B> where B: 'a + ToOwned + ?Sized.

Another possibility is to write in lower case using a more generic term. In other words, ‘string’ can refer to a String or an &str, and ‘an option’ can be ‘an Option<T>’.

A major drawback of Markdown is that it cannot automatically link types in API documentation. Do this yourself with the reference-style syntax, for ease of reading:

/// The [`String`] passed in lorem ipsum...
///
/// [`String`]: ../string/struct.String.html

Module-level vs type-level docs

There has often been a tension between module-level and type-level documentation. For example, in today’s standard library, the various *Cell docs say, in the pages for each type, to “refer to the module-level documentation for more details.”

Instead, module-level documentation should show a high-level summary of everything in the module, and each type should document itself fully. It is okay if there is some small amount of duplication here. Module-level documentation should be broad and not go into a lot of detail. That is left to the type’s documentation.

Example

Below is a full crate, with documentation following these rules. I am loosely basing this off of my ref_slice crate, because it’s small, but I’m not claiming the code is good here. It’s about the docs, not the code.

In lib.rs:

//! Turning references into slices
//!
//! This crate contains several utility functions for taking various kinds
//! of references and producing slices out of them. In this case, only full
//! slices, not ranges for sub-slices.
//!
//! # Layout
//!
//! At the top level, we have functions for working with references, `&T`.
//! There are two submodules for dealing with other types: `option`, for
//! &[`Option<T>`], and `mut`, for `&mut T`.
//!
//! [`Option<T>`]: http://doc.rust-lang.org/std/option/enum.Option.html

pub mod option;

/// Converts a reference to `T` into a slice of length 1.
///
/// This will not copy the data, only create the new slice.
///
/// # Panics
///
/// In this case, the code won’t panic, but if it did, the circumstances
/// in which it would would be included here.
///
/// # Examples
///
/// ```
/// extern crate ref_slice;
/// use ref_slice::ref_slice;
/// 
/// let x = &5;
///
/// let slice = ref_slice(x);
///
/// assert_eq!(&[5], slice);
/// ```
///
/// A more complex example. In this case, it’s the same example, because this
/// is a pretty trivial function, but use your imagination.
///
/// ```
/// extern crate ref_slice;
/// use ref_slice::ref_slice;
/// 
/// let x = &5;
///
/// let slice = ref_slice(x);
///
/// assert_eq!(&[5], slice);
/// ```
pub fn ref_slice<T>(s: &T) -> &[T] {
    unimplemented!()
}

/// Functions that operate on mutable references.
///
/// This submodule mirrors the parent module, but instead of dealing with `&T`,
/// they’re for `&mut T`.
mod mut {
    /// Converts a reference to `&mut T` into a mutable slice of length 1.
    ///
    /// This will not copy the data, only create the new slice.
    ///
    /// # Safety
    ///
    /// In this case, the code doesn’t need to be marked as unsafe, but if it
    /// did, the invariants you’re expected to uphold would be documented here.
    ///
    /// # Examples
    ///
    /// ```
    /// extern crate ref_slice;
    /// use ref_slice::mut;
    /// 
    /// let x = &mut 5;
    ///
    /// let slice = mut::ref_slice(x);
    ///
    /// assert_eq!(&mut [5], slice);
    /// ```
    pub fn ref_slice<T>(s: &mut T) -> &mut [T] {
        unimplemented!()
    }
}

In option.rs:

//! Functions that operate on references to [`Option<T>`]s.
//!
//! This submodule mirrors the parent module, but instead of dealing with `&T`,
//! they’re for `&`[`Option<T>`].
//!
//! [`Option<T>`]: http://doc.rust-lang.org/std/option/enum.Option.html

/// Converts a reference to `Option<T>` into a slice of length 0 or 1.
///
/// [`Option<T>`]: http://doc.rust-lang.org/std/option/enum.Option.html
///
/// This will not copy the data, only create the new slice.
///
/// # Examples
///
/// ```
/// extern crate ref_slice;
/// use ref_slice::option;
/// 
/// let x = &Some(5);
///
/// let slice = option::ref_slice(x);
///
/// assert_eq!(&[5], slice);
/// ```
///
/// `None` will result in an empty slice:
///
/// ```
/// extern crate ref_slice;
/// use ref_slice::option;
/// 
/// let x: &Option<i32> = &None;
///
/// let slice = option::ref_slice(x);
///
/// assert_eq!(&[], slice);
/// ```
pub fn ref_slice<T>(opt: &Option<T>) -> &[T] {
    unimplemented!()
}

Drawbacks

It’s possible that RFC 505 went far enough, and something this detailed is inappropriate.

Alternatives

We could stick with the more minimal conventions of the previous RFC.

Unresolved questions

None.

Appendix A: Full conventions text

Below is a combination of RFC 505 + this RFC’s modifications, for convenience.

Summary sentence

In API documentation, the first line should be a single-line short sentence providing a summary of the code. This line is used as a summary description throughout Rustdoc’s output, so it’s a good idea to keep it short.

The summary line should be written in third person singular present indicative form. Basically, this means write ‘Returns’ instead of ‘Return’.
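
For instance, a minimal sketch (the Collection type here is invented for illustration):

pub struct Collection {
    items: Vec<i32>,
}

impl Collection {
    /// Returns the number of elements in the collection.
    pub fn len(&self) -> usize {
        self.items.len()
    }
}

Note the summary line: ‘Returns’, not ‘Return’, and short enough to fit on one line.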

English

This section applies to rustc and the standard library.

All documentation for the standard library is standardized on American English, with regards to spelling, grammar, and punctuation conventions. Language changes over time, so this doesn’t mean that there is always a correct answer to every grammar question, but there is often some kind of formal consensus.

Use line comments

Avoid block comments. Use line comments instead:

// Wait for the main task to return, and set the process error code
// appropriately.

Instead of:

/*
 * Wait for the main task to return, and set the process error code
 * appropriately.
 */

Only use inner doc comments //! to write crate and module-level documentation, nothing else. When using mod blocks, prefer /// outside of the block:

/// This module contains tests
mod tests {
    // ...
}

over

mod tests {
    //! This module contains tests

    // ...
}

Using Markdown

Within doc comments, use Markdown to format your documentation.

Use top level headings (#) to indicate sections within your comment. Common headings:

  • Examples
  • Panics
  • Errors
  • Safety
  • Aborts
  • Undefined Behavior

An example:

/// # Examples

Even if you only include one example, use the plural form: ‘Examples’ rather than ‘Example’. This makes future tooling easier.

Use backticks (`) to denote a code fragment within a sentence.

Use triple backticks (```) to write longer examples, like this:

This code does something cool.

```rust
let x = foo();

x.bar();
```

When appropriate, make use of Rustdoc’s modifiers. Annotate triple backtick blocks with the appropriate formatting directive.

```rust
println!("Hello, world!");
```

```ruby
puts "Hello"
```

In API documentation, feel free to rely on the default being ‘rust’:

/// For example:
///
/// ```
/// let x = 5;
/// ```

In long-form documentation, always be explicit:

For example:

```rust
let x = 5;
```

This will highlight syntax in places that do not default to ‘rust’, like GitHub.

Rustdoc is able to test all Rust examples embedded inside of documentation, so it’s important to mark what is not Rust so your tests don’t fail.
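
For example, program output can be annotated with the text directive so that it is neither highlighted as Rust nor run as a test:

/// Produces output like the following:
///
/// ```text
/// Hello, world!
/// ```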

References and citations should be linked ‘reference style.’ Prefer

[Rust website]

[Rust website]: http://www.rust-lang.org

to

[Rust website](http://www.rust-lang.org)

If the text is very long, feel free to use the shortened form:

This link [is very long and links to the Rust website][website].

[website]: http://www.rust-lang.org

Examples in API docs

Everything should have examples. Here is an example of how to do examples:

/// # Examples
///
/// ```
/// use op;
///
/// let s = "foo";
/// let answer = op::compare(s, "bar");
/// ```
///
/// Passing a closure to compare with, rather than a string:
///
/// ```
/// use op;
///
/// let s = "foo";
/// let answer = op::compare(s, |a| a.chars().all(|c| c.is_whitespace()));
/// ```

Referring to types

When talking about a type, use its full name. In other words, if the type is generic, say Option<T>, not Option. An exception to this is bounds. Write Cow<'a, B> rather than Cow<'a, B> where B: 'a + ToOwned + ?Sized.

Another possibility is to write in lower case using a more generic term. In other words, ‘string’ can refer to a String or an &str, and ‘an option’ can be ‘an Option<T>’.

A major drawback of Markdown is that it cannot automatically link types in API documentation. Do this yourself with the reference-style syntax, for ease of reading:

/// The [`String`] passed in lorem ipsum...
///
/// [`String`]: ../string/struct.String.html

Module-level vs type-level docs

There has often been a tension between module-level and type-level documentation. For example, in today’s standard library, the various *Cell docs say, in the pages for each type, to “refer to the module-level documentation for more details.”

Instead, module-level documentation should show a high-level summary of everything in the module, and each type should document itself fully. It is okay if there is some small amount of duplication here. Module-level documentation should be broad, and not go into a lot of detail, which is left to the type’s documentation.

Example

Below is a full crate, with documentation following these rules. I am loosely basing this off of my ref_slice crate, because it’s small, but I’m not claiming the code is good here. It’s about the docs, not the code.

In lib.rs:

//! Turning references into slices
//!
//! This crate contains several utility functions for taking various kinds
//! of references and producing slices out of them. In this case, only full
//! slices, not ranges for sub-slices.
//!
//! # Layout
//!
//! At the top level, we have functions for working with references, `&T`.
//! There are two submodules for dealing with other types: `option`, for
//! &[`Option<T>`], and `mut`, for `&mut T`.
//!
//! [`Option<T>`]: http://doc.rust-lang.org/std/option/enum.Option.html

pub mod option;

/// Converts a reference to `T` into a slice of length 1.
///
/// This will not copy the data, only create the new slice.
///
/// # Panics
///
/// In this case, the code won’t panic, but if it did, the circumstances
/// in which it would would be included here.
///
/// # Examples
///
/// ```
/// extern crate ref_slice;
/// use ref_slice::ref_slice;
/// 
/// let x = &5;
///
/// let slice = ref_slice(x);
///
/// assert_eq!(&[5], slice);
/// ```
///
/// A more complex example. In this case, it’s the same example, because this
/// is a pretty trivial function, but use your imagination.
///
/// ```
/// extern crate ref_slice;
/// use ref_slice::ref_slice;
/// 
/// let x = &5;
///
/// let slice = ref_slice(x);
///
/// assert_eq!(&[5], slice);
/// ```
pub fn ref_slice<T>(s: &T) -> &[T] {
    unimplemented!()
}

/// Functions that operate on mutable references.
///
/// This submodule mirrors the parent module, but instead of dealing with `&T`,
/// they’re for `&mut T`.
mod mut {
    /// Converts a reference to `&mut T` into a mutable slice of length 1.
    ///
    /// This will not copy the data, only create the new slice.
    ///
    /// # Safety
    ///
    /// In this case, the code doesn’t need to be marked as unsafe, but if it
    /// did, the invariants you’re expected to uphold would be documented here.
    ///
    /// # Examples
    ///
    /// ```
    /// extern crate ref_slice;
    /// use ref_slice::mut;
    /// 
    /// let x = &mut 5;
    ///
    /// let slice = mut::ref_slice(x);
    ///
    /// assert_eq!(&mut [5], slice);
    /// ```
    pub fn ref_slice<T>(s: &mut T) -> &mut [T] {
        unimplemented!()
    }
}

In option.rs:

//! Functions that operate on references to [`Option<T>`]s.
//!
//! This submodule mirrors the parent module, but instead of dealing with `&T`,
//! they’re for `&`[`Option<T>`].
//!
//! [`Option<T>`]: http://doc.rust-lang.org/std/option/enum.Option.html

/// Converts a reference to `Option<T>` into a slice of length 0 or 1.
///
/// [`Option<T>`]: http://doc.rust-lang.org/std/option/enum.Option.html
///
/// This will not copy the data, only create the new slice.
///
/// # Examples
///
/// ```
/// extern crate ref_slice;
/// use ref_slice::option;
/// 
/// let x = &Some(5);
///
/// let slice = option::ref_slice(x);
///
/// assert_eq!(&[5], slice);
/// ```
///
/// `None` will result in an empty slice:
///
/// ```
/// extern crate ref_slice;
/// use ref_slice::option;
/// 
/// let x: &Option<i32> = &None;
///
/// let slice = option::ref_slice(x);
///
/// assert_eq!(&[], slice);
/// ```
pub fn ref_slice<T>(opt: &Option<T>) -> &[T] {
    unimplemented!()
}

Summary

Add a literal fragment specifier for macro_rules! patterns that matches literal constants:

macro_rules! foo {
    ($l:literal) => ( /* ... */ );
}

Motivation

There are a lot of macros out there that take literal constants as arguments (often string constants). For now, most use the expr fragment specifier, which is fine since literal constants are a subset of expressions. But it has the following issues:

  • It restricts the syntax of those macros. A limited set of FOLLOW tokens is allowed after an expr specifier. For example $e:expr : $t:ty is not allowed whereas $l:literal : $t:ty should be. There is no reason to arbitrarily restrict the syntax of those macros when they will only ever be used with literal constants. A workaround for that is to use the tt matcher.
  • It does not allow for proper error reporting where the macro actually needs the parameter to be a literal constant. With this RFC, bad usage of such macros will give a proper syntax error message, whereas with expr it would probably give a syntax or type error inside the generated code, which is hard to understand.
  • It’s not consistent. There is no reason to allow expressions, types, etc. but not literals.

Design

Add a literal (or lit, or constant) matcher in macro patterns that matches all single-tokens literal constants (those that are currently represented by token::Literal). Matching input against this matcher would call the parse_lit method from libsyntax::parse::Parser. The FOLLOW set of this matcher should be the same as ident since it matches a single token.
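
As an illustration, here is a sketch of a macro that becomes expressible with the new specifier (typed_const is a hypothetical example; note the $l:literal : $t:ty sequence, which expr does not permit):

// Defines a typed constant from an identifier, a literal, and a type.
macro_rules! typed_const {
    ($name:ident, $l:literal : $t:ty) => {
        const $name: $t = $l;
    };
}

typed_const!(MAX_RETRIES, 3: u32);
typed_const!(GREETING, "hello": &'static str);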

Drawbacks

This includes only single-token literal constants and not compound literals, for example struct literals Foo { x: some_literal, y: some_literal } or arrays [some_literal ; N], where some_literal can itself be a compound literal. See the alternatives for why this is disallowed.

Alternatives

  • Allow compound literals too. In theory there is no reason to exclude them since they do not require any computation. In practice though, allowing them requires using the expression parser but limiting it to allow only other compound literals and not arbitrary expressions to occur inside a compound literal (for example inside struct fields). This would probably require much more work to implement and also mitigates the first motivation, since it would probably restrict the FOLLOW set of such fragments considerably.
  • Adding fragment specifiers for each constant type: $s:str which expects a literal string, $i:integer which expects a literal integer, etc. With this design, we could allow something like $s:struct for compound literals, which still requires a lot of work to implement but has the advantage of not “polluting” the FOLLOW sets of other specifiers such as str. It also provides better “static” (pre-expansion) checking of the arguments of a macro and thus better error reporting. Types are also good for documentation. The main drawback here is of course that we could not allow every possible type, since we cannot interleave parsing and type checking; we would have to define a list of accepted types, for example str, integer, bool, struct and array (without specifying the complete type of the structs and arrays). This would be a bit inconsistent since those types refer more to syntactic categories in this context than to true Rust types. It would also be frustrating and confusing, since it can give the impression that macros type-check their arguments, when of course they don’t.
  • Don’t do this. Continue to use expr or tt to refer to literal constants.

Unresolved

The keyword of the matcher can be literal, lit, constant, or something else.

Summary

Add a marker trait FusedIterator to std::iter and implement it on Fuse<I> and applicable iterators and adapters. By implementing FusedIterator, an iterator promises to behave as if Iterator::fuse() had been called on it (i.e. return None forever after returning None once). Then, specialize Fuse<I> to be a no-op if I implements FusedIterator.

Motivation

Iterators are allowed to return whatever they want after returning None once. However, assuming that an iterator continues to return None can make implementing some algorithms/adapters easier. Therefore, Fuse and Iterator::fuse exist. Unfortunately, the Fuse iterator adapter introduces a noticeable overhead. Furthermore, many iterators (most if not all iterators in std) already act as if they were fused (this is considered to be the “polite” behavior). Therefore, it would be nice to be able to pay the Fuse overhead only when necessary.

Microbenchmarks:

test fuse          ... bench:         200 ns/iter (+/- 13)
test fuse_fuse     ... bench:         250 ns/iter (+/- 10)
test myfuse        ... bench:          48 ns/iter (+/- 4)
test myfuse_myfuse ... bench:          48 ns/iter (+/- 3)
test range         ... bench:          48 ns/iter (+/- 2)
The code used for these benchmarks:

#![feature(test, specialization)]
extern crate test;

use std::ops::Range;

#[derive(Clone, Debug)]
#[must_use = "iterator adaptors are lazy and do nothing unless consumed"]
pub struct Fuse<I> {
    iter: I,
    done: bool
}

pub trait FusedIterator: Iterator {}

trait IterExt: Iterator + Sized {
    fn myfuse(self) -> Fuse<Self> {
        Fuse {
            iter: self,
            done: false,
        }
    }
}

impl<I> FusedIterator for Fuse<I> where Fuse<I>: Iterator {}
impl<T> FusedIterator for Range<T> where Range<T>: Iterator {}

impl<T: Iterator> IterExt for T {}

impl<I> Iterator for Fuse<I> where I: Iterator {
    type Item = <I as Iterator>::Item;

    #[inline]
    default fn next(&mut self) -> Option<<I as Iterator>::Item> {
        if self.done {
            None
        } else {
            let next = self.iter.next();
            self.done = next.is_none();
            next
        }
    }
}

impl<I> Iterator for Fuse<I> where I: FusedIterator {
    #[inline]
    fn next(&mut self) -> Option<<I as Iterator>::Item> {
        self.iter.next()
    }
}

impl<I> ExactSizeIterator for Fuse<I> where I: ExactSizeIterator {}

#[bench]
fn myfuse(b: &mut test::Bencher) {
    b.iter(|| {
        for i in (0..100).myfuse() {
            test::black_box(i);
        }
    })
}

#[bench]
fn myfuse_myfuse(b: &mut test::Bencher) {
    b.iter(|| {
        for i in (0..100).myfuse().myfuse() {
            test::black_box(i);
        }
    });
}


#[bench]
fn fuse(b: &mut test::Bencher) {
    b.iter(|| {
        for i in (0..100).fuse() {
            test::black_box(i);
        }
    })
}

#[bench]
fn fuse_fuse(b: &mut test::Bencher) {
    b.iter(|| {
        for i in (0..100).fuse().fuse() {
            test::black_box(i);
        }
    });
}

#[bench]
fn range(b: &mut test::Bencher) {
    b.iter(|| {
        for i in (0..100) {
            test::black_box(i);
        }
    })
}

Detailed Design

trait FusedIterator: Iterator {}

impl<I: Iterator> FusedIterator for Fuse<I> {}

impl<A> FusedIterator for Range<A> where Range<A>: Iterator {}
// ...and for most std/core iterators...


// Existing implementation of Fuse, repeated for convenience. `next` is
// marked `default` so that the specialized impl below can override it.
pub struct Fuse<I> {
    iterator: I,
    done: bool,
}

impl<I> Iterator for Fuse<I> where I: Iterator {
    type Item = I::Item;

    #[inline]
    default fn next(&mut self) -> Option<Self::Item> {
        if self.done {
            None
        } else {
            let next = self.iterator.next();
            self.done = next.is_none();
            next
        }
    }
}

// Then, specialize Fuse...
impl<I> Iterator for Fuse<I> where I: FusedIterator {
    // `Item` is inherited from the `default` impl above.

    #[inline]
    fn next(&mut self) -> Option<Self::Item> {
        // Ignore the done flag and pass through.
        // Note: this means that the done flag should *never* be exposed to the
        // user.
        self.iterator.next()
    }
}
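
For illustration, a user-defined iterator would opt into the no-op specialization simply by implementing the marker trait (Countdown is a made-up example building on the definitions above):

struct Countdown(u32);

impl Iterator for Countdown {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        if self.0 == 0 {
            None // and it keeps returning None forever afterwards
        } else {
            self.0 -= 1;
            Some(self.0)
        }
    }
}

// The promise: once `next` returns None, it always returns None.
// With this impl, calling `.fuse()` on a Countdown becomes a pass-through.
impl FusedIterator for Countdown {}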

Drawbacks

  1. Yet another special iterator trait.
  2. There is a useless done flag on no-op Fuse adapters.
  3. Fuse isn’t used very often anyways. However, I would argue that it should be used more often and people are just playing fast and loose. I’m hoping that making Fuse free when unneeded will encourage people to use it when they should.
  4. This trait locks implementors into following the FusedIterator spec; removing the FusedIterator implementation would be a breaking change. This precludes future optimizations that take advantage of the fact that the behavior of an Iterator is undefined after it returns None the first time.

Alternatives

Do Nothing

Just pay the overhead on the rare occasions when fused is actually used.

IntoFused

Use an associated type (and set it to Self for iterators that already provide the fused guarantee) and an IntoFused trait:

#![feature(specialization)]
use std::iter::Fuse;

trait FusedIterator: Iterator {}

trait IntoFused: Iterator + Sized {
    type Fused: Iterator<Item = Self::Item>;
    fn into_fused(self) -> Self::Fused;
}

impl<T> IntoFused for T where T: Iterator {
    default type Fused = Fuse<Self>;
    default fn into_fused(self) -> Self::Fused {
        // Currently complains about a mismatched type but I think that's a
        // specialization bug.
        self.fuse()
    }
}

impl<T> IntoFused for T where T: FusedIterator {
    type Fused = Self;

    fn into_fused(self) -> Self::Fused {
        self
    }
}

For now, this doesn’t actually compile because Rust believes that the associated type Fused could be specialized independently of the into_fused function.

While this method gets rid of memory overhead of a no-op Fuse wrapper, it adds complexity, needs to be implemented as a separate trait (because adding associated types is a breaking change), and can’t be used to optimize the iterators returned from Iterator::fuse (users would have to call IntoFused::into_fused).

Associated Type

If we add the ability to condition associated types on Self: Sized, I believe we can add them without it being a breaking change (associated types only need to be fully specified on DSTs). If so (after fixing the bug in specialization noted above), we could do the following:

trait Iterator {
    type Item;
    type Fuse: Iterator<Item=Self::Item> where Self: Sized = Fuse<Self>;
    fn fuse(self) -> Self::Fuse where Self: Sized {
        Fuse {
            done: false,
            iter: self,
        }
    }
    // ...
}

However, changing an iterator to take advantage of this would be a breaking change.

Unresolved questions

Should this trait be unsafe? I can’t think of any way generic unsafe code could end up relying on the guarantees of FusedIterator.

Also, it’s possible to implement the specialized Fuse struct without a useless done bool. Unfortunately, it’s very messy. IMO, this is not worth it for now and can always be fixed in the future as it doesn’t change the FusedIterator trait. Resolved: It’s not possible to remove the done bool without making Fuse invariant.

  • Feature Name: macro_2_0
  • Start Date: 2016-04-17
  • RFC PR: 1584
  • Rust Issue: 39412

Summary

Declarative macros 2.0. A replacement for macro_rules!. This is mostly a placeholder RFC since many of the issues affecting the new macro system are (or will be) addressed in other RFCs. This RFC may be expanded at a later date.

Currently in this RFC:

  • That we should have a new declarative macro system,
  • a new keyword for declaring macros (macro).

In other RFCs:

  • Naming and modularisation (#1561).

To come in separate RFCs:

  • more detailed syntax proposal,
  • hygiene improvements,
  • more …

Note this RFC does not involve procedural macros (aka syntax extensions).

Motivation

There are several changes to the declarative macro system which are desirable but not backwards compatible (see RFC 1561 for some changes to macro naming and modularisation; I would also like to propose improvements to macro hygiene and some improved syntax).

In order to maintain Rust’s backwards compatibility guarantees, we cannot change the existing system (macro_rules!) to accommodate these changes. I therefore propose a new declarative macro system to live alongside macro_rules!.

Example (possible) improvements:

// Naming (RFC 1561)

fn main() {
    a::foo!(...);
}

mod a {
    // Macro privacy (TBA)
    pub macro foo { ... }
}
// Relative paths (part of hygiene reform, TBA)

mod a {
    pub macro foo { ... bar() ... }
    fn bar() { ... }
}

fn main() {
    a::foo!(...);  // Expansion calls a::bar
}
// Syntax (TBA)

macro foo($a: ident) => {
    return $a + 1;
}

I believe it is extremely important that moving to the new macro system is as straightforward as possible for both macro users and authors. This must be the case so that users make the transition to the new system and we are not left with two systems forever.

A goal of this design is that for macro users, there is no difference in using the two systems other than how macros are named. For macro authors, most macros that work in the old system should work in the new system with minimal changes. Macros which will need some adjustment are those that exploit holes in the current hygiene system.

Detailed design

There will be a new system of declarative macros using similar syntax and semantics to the current macro_rules! system.

A declarative macro is declared using the macro keyword. For example, where a macro foo is declared today as macro_rules! foo { ... }, it will be declared using macro foo { ... }. I leave the syntax of the macro body for later specification.

Nomenclature

Throughout this RFC, I use ‘declarative macro’ to refer to a macro declared using declarative (and domain specific) syntax (such as the current macro_rules! syntax). The ‘declarative macros’ name is in opposition to ‘procedural macros’, which are declared as Rust programs. The specific declarative syntax using pattern matching and templating is often referred to as ‘macros by example’.

‘Pattern macro’ has been suggested as an alternative for ‘declarative macro’.

Drawbacks

There is a risk that macro_rules! is good enough for most users and there is low adoption of the new system. Possibly worse would be that there is high adoption but little migration from the old system, leading to us having to support two systems forever.

Alternatives

Make backwards incompatible changes to macro_rules!. This is probably a non-starter due to our stability guarantees. We might be able to make something work if this was considered desirable.

Limit ourselves to backwards compatible changes to macro_rules!. I don’t think this is worthwhile. It’s not clear we can make meaningful improvements without breaking backwards compatibility.

Use macro! instead of macro (proposed in an earlier version of this RFC).

Don’t use a keyword - either make macro not a keyword or use a different word for declarative macros.

Live with the existing system.

Unresolved questions

What to do with macro_rules? We will need to maintain it at least until macro is stable. Hopefully, we can then deprecate it (some time will be required to migrate users to the new system). Eventually, I hope we can remove macro_rules!. That will take a long time, and would require a 2.0 version of Rust to strictly adhere to our stability guarantees.

Summary

Defines a best practices procedure for making bug fixes or soundness corrections in the compiler that can cause existing code to stop compiling.

Motivation

From time to time, we encounter the need to make a bug fix, soundness correction, or other change in the compiler which will cause existing code to stop compiling. When this happens, it is important that we handle the change in a way that gives users of Rust a smooth transition. What we want to avoid is that existing programs suddenly stop compiling with opaque error messages: we would prefer to have a gradual period of warnings, with clear guidance as to what the problem is, how to fix it, and why the change was made. This RFC describes the procedure that we have been developing for handling breaking changes that aims to achieve that kind of smooth transition.

One of the key points of this policy is that (a) warnings should be issued initially rather than hard errors if at all possible and (b) every change that causes existing code to stop compiling will have an associated tracking issue. This issue provides a point to collect feedback on the results of that change. Sometimes changes have unexpectedly large consequences or there may be a way to avoid the change that was not considered. In those cases, we may decide to change course and roll back the change, or find another solution (if warnings are being used, this is particularly easy to do).

What qualifies as a bug fix?

Note that this RFC does not try to define when a breaking change is permitted. That is already covered under RFC 1122. This document assumes that the change being made is in accordance with those policies. Here is a summary of the conditions from RFC 1122:

  • Soundness changes: Fixes to holes uncovered in the type system.
  • Compiler bugs: Places where the compiler is not implementing the specified semantics found in an RFC or lang-team decision.
  • Underspecified language semantics: Clarifications to grey areas where the compiler behaves inconsistently and no formal behavior had been previously decided.

Please see the RFC for full details!

Detailed design

The procedure for making a breaking change is as follows (each of these steps is described in more detail below):

  1. Do a crater run to assess the impact of the change.
  2. Make a special tracking issue dedicated to the change.
  3. Do not report an error right away. Instead, issue forwards-compatibility lint warnings.
    • Sometimes this is not straightforward. See the text below for suggestions on different techniques we have employed in the past.
    • For cases where warnings are infeasible:
      • Report errors, but make every effort to give a targeted error message that directs users to the tracking issue
      • Submit PRs to all known affected crates that fix the issue
        • or, at minimum, alert the owners of those crates to the problem and direct them to the tracking issue
  4. Once the change has been in the wild for at least one cycle, we can stabilize the change, converting those warnings into errors.

Finally, for changes to libsyntax that will affect plugins, the general policy is to batch these changes. That is discussed below in more detail.

Tracking issue

Every breaking change should be accompanied by a dedicated tracking issue for that change. The main text of this issue should describe the change being made, with a focus on what users must do to fix their code. The issue should be approachable and practical; it may make sense to direct users to an RFC or some other issue for the full details. The issue also serves as a place where users can comment with questions or other concerns.

A template for these breaking-change tracking issues can be found below. An example of how such an issue should look can be found here.

The issue should be tagged with (at least) B-unstable and T-compiler.

Tracking issue template

What follows is a template for tracking issues.


This is the summary issue for the YOUR_LINT_NAME_HERE future-compatibility warning and other related errors. The goal of this page is to describe why this change was made and how you can fix code that is affected by it. It also provides a place to ask questions or register a complaint if you feel the change should not be made. For more information on the policy around future-compatibility warnings, see our breaking change policy guidelines.

What is the warning for?

Describe the conditions that trigger the warning and how they can be fixed. Also explain why the change was made.

When will this warning become a hard error?

At the beginning of each 6-week release cycle, the Rust compiler team will review the set of outstanding future compatibility warnings and nominate some of them for Final Comment Period. Toward the end of the cycle, we will review any comments and make a final determination whether to convert the warning into a hard error or remove it entirely.


Issuing future compatibility warnings

The best way to handle a breaking change is to begin by issuing future-compatibility warnings. These are a special category of lint warning. Adding a new future-compatibility warning can be done as follows.

// 1. Define the lint in `src/librustc/lint/builtin.rs`:
declare_lint! {
    pub YOUR_ERROR_HERE,
    Warn,
    "illegal use of foo bar baz"
}

// 2. Add to the list of HardwiredLints in the same file:
impl LintPass for HardwiredLints {
    fn get_lints(&self) -> LintArray {
        lint_array!(
            ..,
            YOUR_ERROR_HERE
        )
    }
}

// 3. Register the lint in `src/librustc_lint/lib.rs`:
store.register_future_incompatible(sess, vec![
    ...,
    FutureIncompatibleInfo {
        id: LintId::of(YOUR_ERROR_HERE),
        reference: "issue #1234", // your tracking issue here!
    },
]);

// 4. Report the lint:
tcx.sess.add_lint(
    lint::builtin::YOUR_ERROR_HERE,
    path_id,
    binding.span,
    format!("some helper message here"));

Helpful techniques

It can often be challenging to filter out new warnings from older, pre-existing errors. One technique that has been used in the past is to run the older code unchanged and collect the errors it would have reported. You can then issue warnings only for errors that do not appear in that original set. Another option is to abort compilation after the original code completes if errors are reported: then you know that your new code will only execute when there were no errors before.
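
A schematic sketch of the first technique, with invented Span and Diagnostic types standing in for the compiler’s own (these are not real rustc APIs):

use std::collections::HashSet;

// Hypothetical stand-ins for compiler diagnostics.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct Span(u32);

struct Diagnostic {
    span: Span,
    is_error: bool,
}

// Run the unchanged older check first, then downgrade any error the old
// code would not have reported into a future-compatibility warning.
fn check_with_warnings(old: Vec<Diagnostic>, new: Vec<Diagnostic>) -> Vec<Diagnostic> {
    let old_spans: HashSet<Span> = old.iter().map(|d| d.span).collect();
    new.into_iter()
        .map(|mut d| {
            if d.is_error && !old_spans.contains(&d.span) {
                d.is_error = false; // newly-introduced error: warn instead
            }
            d
        })
        .collect()
}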

Crater and crates.io

We should always do a crater run to assess impact. It is polite and considerate to at least notify the authors of affected crates of the breaking change. If we can submit PRs to fix the problem, so much the better.

Is it ever acceptable to go directly to issuing errors?

Changes that are believed to have negligible impact can go directly to issuing an error. One rule of thumb would be to check against crates.io: if fewer than 10 total affected projects are found (not root errors), we can move straight to an error. In such cases, we should still make the “breaking change” page as before, and we should ensure that the error directs users to this page. In other words, everything should be the same except that users are getting an error, and not a warning. Moreover, we should submit PRs to the affected projects (ideally before the PR implementing the change lands in rustc).

If the impact is not believed to be negligible (e.g., more than 10 crates are affected), then warnings are required (unless the compiler team agrees to grant a special exemption in some particular case). If implementing warnings is not feasible, then we should adopt an aggressive strategy of migrating crates before we land the change so as to lower the number of affected crates. Here are some techniques for approaching this scenario:

  1. Issue warnings for subparts of the problem, and reserve the new errors for the smallest set of cases you can.
  2. Try to give a very precise error message that suggests how to fix the problem and directs users to the tracking issue.
  3. It may also make sense to layer the fix:
    • First, add warnings where possible and let those land before proceeding to issue errors.
    • Work with authors of affected crates to ensure that corrected versions are available before the fix lands, so that downstream users can use them.

Stabilization

After a change is made, we will stabilize the change using the same process that we use for unstable features:

  • After a new release is made, we will go through the outstanding tracking issues corresponding to breaking changes and nominate some of them for final comment period (FCP).
  • The FCP for such issues lasts for one cycle. In the final week or two of the cycle, we will review comments and make a final determination:
    • Convert to error: the change should be made into a hard error.
    • Revert: we should remove the warning and continue to allow the older code to compile.
    • Defer: can’t decide yet, wait longer, or try other strategies.

Ideally, breaking changes should have landed on the stable branch of the compiler before they are finalized.

Batching breaking changes to libsyntax

Due to the lack of stable plugins, making changes to libsyntax can currently be quite disruptive to the ecosystem that relies on plugins. In an effort to ease this pain, we generally try to batch up such changes so that they occur all at once, rather than occurring in a piecemeal fashion. In practice, this means that you should add:

cc #31645 @Manishearth

to the PR and avoid directly merging it. In the future we may develop a more polished procedure here, but the hope is that this is a relatively temporary state of affairs.

Drawbacks

Following this policy can require substantial effort and slows the time it takes for a change to become final. However, this is far outweighed by the benefits of avoiding sharp disruptions in the ecosystem.

Alternatives

There are obviously many points that we could tweak in this policy:

  • Eliminate the tracking issue.
  • Change the stabilization schedule.

Two other obvious (and rather extreme) alternatives are not having a policy and not making any sort of breaking change at all:

  • Not having a policy at all (as is the case today) encourages inconsistent treatment of issues.
  • Not making any sorts of breaking changes would mean that Rust simply has to stop evolving, or else would issue new major versions quite frequently, causing undue disruption.

Unresolved questions

N/A

Summary

Add a lifetime specifier for macro_rules! patterns, that matches any valid lifetime.

Motivation

Certain classes of macros are completely impossible without the ability to pass lifetimes. Specifically, anything that wants to implement a trait from inside of a macro is going to need to deal with lifetimes eventually. They’re also commonly needed for any macros that need to deal with types in a more granular way than just ty.

Since a lifetime is a single token, the only way to match against a lifetime is by capturing it as tt. Something like '$lifetime:ident would fail to compile. This is extremely limiting, as it becomes difficult to sanitize input, and tt is extremely difficult to use in a sequence without using awkward separators.

Detailed design

This RFC proposes adding lifetime as an additional specifier to macro_rules! (alternatively: life or lt). As it is a single token, it is able to be followed by any other specifier. Since a lifetime acts very much like an identifier, and can appear in almost as many places, it can be handled almost identically.

A preliminary implementation can be found at https://github.com/rust-lang/rust/pull/33135
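
For example, with a lifetime specifier a trait implementation could be generated along these lines (impl_deref and Wrapper are invented for illustration):

macro_rules! impl_deref {
    ($lt:lifetime, $ty:ty, $target:ty) => {
        impl<$lt> std::ops::Deref for $ty {
            type Target = $target;

            // Assumes a tuple struct whose first field is a `&$target`.
            fn deref(&self) -> &$target {
                self.0
            }
        }
    };
}

struct Wrapper<'a>(&'a str);
impl_deref!('a, Wrapper<'a>, str);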

Drawbacks

None

Alternatives

A more general specifier, such as a “type parameter list” roughly mapping to ast::Generics, would cover most of the use cases for matching lifetimes individually.

Unresolved questions

None

Summary

Allow type constructors to be associated with traits. This is an incremental step toward a more general feature commonly called “higher-kinded types,” which is often ranked highly as a requested feature by Rust users. This specific feature (associated type constructors) resolves one of the most common use cases for higher-kindedness, is a relatively simple extension to the type system compared to other forms of higher-kinded polymorphism, and is forward compatible with more complex forms of higher-kinded polymorphism that may be introduced in the future.

Motivation

Consider the following trait as a representative motivating example:

trait StreamingIterator {
    type Item<'a>;
    fn next<'a>(&'a mut self) -> Option<Self::Item<'a>>;
}

This trait is very useful - it allows for a kind of Iterator which yields values which have a lifetime tied to the lifetime of the reference passed to next. A particularly obvious use case for this trait would be an iterator over a vector which yields overlapping, mutable subslices with each iteration. Using the standard Iterator interface, such an implementation would be invalid, because each slice would be required to exist for as long as the iterator, rather than for as long as the borrow initiated by next.

This trait cannot be expressed in Rust as it exists today, because it depends on a sort of higher-kinded polymorphism. This RFC would extend Rust to include that specific form of higher-kinded polymorphism, which is referred to here as associated type constructors. This feature has a number of applications, but the primary application is along the same lines as the StreamingIterator trait: defining traits which yield types which have a lifetime tied to the local borrowing of the receiver type.

Detailed design

Background: What is kindedness?

“Higher-kinded types” is a vague term, conflating multiple language features under a single banner, which can be inaccurate. As background, this RFC includes a brief overview of the notion of kinds and kindedness. Kinds are often called ‘the type of a type,’ the exact sort of unhelpful description that only makes sense to someone who already understands what is being explained. Instead, let’s try to understand kinds by analogy to types.

In a well-typed language, every expression has a type. Many expressions have what are sometimes called ‘base types,’ types which are primitive to the language and which cannot be described in terms of other types. In Rust, the types bool, i64, usize, and char are all prominent examples of base types. In contrast, there are types which are formed by arranging other types - functions are a good example of this. Consider this simple function:

fn not(x: bool) -> bool {
   !x
}

not has the type bool -> bool (my apologies for using a syntax different from Rust’s). Note that this is different from the type of not(true), which is bool. This difference is important to understanding higher-kindedness.

In the analysis of kinds, all of these types - bool, char, bool -> bool and so on - have the kind type. Every type has the kind type. However, type is a base kind, just as bool is a base type, and there are terms with more complex kinds, such as type -> type. An example of a term of this kind is Vec, which takes a type as a parameter and evaluates to a type. The difference between the kind of Vec and the kind of Vec<i32> (which is type) is analogous to the difference between the type of not and not(true). Note that Vec<T> has the kind type, just like Vec<i32>: even though T is a type parameter, Vec is still being applied to a type, just like not(x) still has the type bool even though x is a variable.

A relatively uncommon feature of Rust is that it has two base kinds, whereas many languages which deal with higher-kindedness only have the base kind type. The other base kind of Rust is the lifetime parameter. If you have a type like Foo<'a>, the kind of Foo is lifetime -> type.

Higher-kinded terms can take multiple arguments as well, of course. Result has the kind type, type -> type. Given vec::Iter<'a, T>, vec::Iter has the kind lifetime, type -> type.

Terms of a higher kind are often called ‘type operators’; the type operators which evaluate to a type are called ‘type constructors’. There are other type operators which evaluate to other type operators, and there are even higher order type operators, which take type operators as their argument (so they have a kind like (type -> type) -> type). This RFC doesn’t deal with anything as exotic as that.
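
To summarize the analogy developed above (informal notation, not Rust syntax):

// bool, char, Vec<i32>, bool -> bool   :  type
// Vec                                  :  type -> type
// Foo (where Foo<'a> is a type)        :  lifetime -> type
// Result                               :  type, type -> type
// vec::Iter                            :  lifetime, type -> type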

Specifically, the goal of this RFC is to allow type constructors to be associated with traits, just as you can currently associate functions, types, and consts with traits. There are other forms of polymorphism involving type constructors, such as implementing traits for a type constructor instead of a type, which are not a part of this RFC.

Features of associated type constructors

Declaring & assigning an associated type constructor

This RFC proposes a very simple syntax for defining an associated type constructor, which looks a lot like the syntax for creating aliases for type constructors. The goal of using this syntax is to avoid creating roadblocks for users who do not already understand higher-kindedness.

trait StreamingIterator {
   type Item<'a>;
}

It is clear that the Item associated item is a type constructor, rather than a type, because it has a type parameter attached to it.

Associated type constructors can be bounded, just like associated types can be:

trait Iterable {
    type Item<'a>;
    type Iter<'a>: Iterator<Item = Self::Item<'a>>;
    
    fn iter<'a>(&'a self) -> Self::Iter<'a>;
}

This bound is applied to the “output” of the type constructor, and the parameter is treated as a higher rank parameter. That is, the above bound is roughly equivalent to adding this bound to the trait:

for<'a> Self::Iter<'a>: Iterator<Item = Self::Item<'a>>

Assigning associated type constructors in impls is very similar to the syntax for assigning associated types:

impl<T> StreamingIterator for StreamIterMut<T> {
    type Item<'a> = &'a mut [T];
    ...
}

Using an associated type constructor to construct a type

Once a trait has an associated type constructor, it can be applied to any parameters or concrete terms that are in scope. This can be done both inside the body of the trait and outside of it, using syntax which is analogous to the syntax for using associated types. Here are some examples:

trait StreamingIterator {
    type Item<'a>;
    // Applying the lifetime parameter `'a` to `Self::Item` inside the trait.
    fn next<'a>(&'a self) -> Option<Self::Item<'a>>;
}

struct Foo<T: StreamingIterator> {
    // Applying a concrete lifetime to the constructor outside the trait.
    bar: <T as StreamingIterator>::Item<'static>;
}

Associated type constructors can also be used to construct other type constructors:

trait Foo {
    type Bar<'a, 'b>;
}

trait Baz {
    type Quux<'a>;
}

impl<T> Baz for T where T: Foo {
    type Quux<'a> = <T as Foo>::Bar<'a, 'static>;
}

Lastly, lifetimes can be elided in associated type constructors in the same manner that they can be elided in other type constructors. Taking lifetime elision into account, the full definition of StreamingIterator becomes:

trait StreamingIterator {
    type Item<'a>;
    fn next(&mut self) -> Option<Self::Item>;
}

Using associated type constructors in bounds

Users can bound parameters by the type constructed by a trait’s associated type constructor, using HRTBs. Both type equality bounds and trait bounds of this kind are valid:

fn foo<T: for<'a> StreamingIterator<Item<'a>=&'a [i32]>>(iter: T) { ... }

fn foo<T>(iter: T) where T: StreamingIterator, for<'a> T::Item<'a>: Display { ... }

This RFC does not propose allowing any sort of bound by the type constructor itself, whether an equality bound or a trait bound (trait bounds of course are also impossible).

Associated type constructors of type arguments

All of the examples in this RFC have focused on associated type constructors of lifetime arguments, however, this RFC proposes adding ATCs of types as well:

trait Foo {
    type Bar<T>;
}

This RFC does not propose extending HRTBs to take type arguments, which makes these less expressive than they could be. Such an extension is desired, but out of scope for this RFC.

Type arguments can be used to encode other forms of higher-kinded polymorphism using the “family” pattern. For example, using the PointerFamily trait, you can abstract over Arc and Rc:

trait PointerFamily {
    type Pointer<T>: Deref<Target = T>;
    fn new<T>(value: T) -> Self::Pointer<T>;
}

struct ArcFamily;

impl PointerFamily for ArcFamily {
    type Pointer<T> = Arc<T>;
    fn new<T>(value: T) -> Self::Pointer<T> {
        Arc::new(value)
    }
}

struct RcFamily;

impl PointerFamily for RcFamily {
    type Pointer<T> = Rc<T>;
    fn new<T>(value: T) -> Self::Pointer<T> {
        Rc::new(value)
    }
}

struct Foo<P: PointerFamily> {
    bar: P::Pointer<String>,
}
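
A hypothetical use of the family pattern above (shared_greeting is invented for illustration):

fn shared_greeting<P: PointerFamily>() -> P::Pointer<String> {
    P::new(String::from("hello"))
}

// Callers pick the pointer type by picking a family:
// let rc = shared_greeting::<RcFamily>();   // : Rc<String>
// let arc = shared_greeting::<ArcFamily>(); // : Arc<String>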

Evaluating bounds and where clauses

Bounds on associated type constructors

Bounds on associated type constructors are treated as higher rank bounds on the trait itself. This makes their behavior consistent with the behavior of bounds on regular associated types. For example:

trait Foo {
    type Assoc<'a>: Trait<'a>;
}

Is equivalent to:

trait Foo where for<'a> Self::Assoc<'a>: Trait<'a> {
    type Assoc<'a>;
}

where clauses on associated types

In contrast, where clauses on associated types introduce constraints which must be proven each time the associated type is used. For example:

trait Foo {
    type Assoc where Self: Sized;
}

Each invocation of <T as Foo>::Assoc will need to prove T: Sized, as opposed to the impl needing to prove the bound as in other cases.

(@nikomatsakis believes that where clauses will be needed on associated type constructors specifically to handle lifetime well formedness in some cases. The exact details are left out of this RFC because they will emerge more fully during implementation.)

Benefits of implementing only this feature before other higher-kinded polymorphisms

This feature is not full-blown higher-kinded polymorphism, and does not allow for the forms of abstraction that are so popular in Haskell, but it does provide most of the unique-to-Rust use cases for higher-kinded polymorphism, such as streaming iterators and collection traits. It is probably also the most accessible feature for most users, being somewhat easy to understand intuitively without understanding higher-kindedness.

This feature has several tricky implementation challenges, but avoids all of these features that other kinds of higher-kinded polymorphism require:

  • Defining higher-kinded traits
  • Implementing higher-kinded traits for type operators
  • Higher order type operators
  • Type operator parameters bound by higher-kinded traits
  • Type operator parameters applied to a given type or type parameter

Advantages of proposed syntax

The advantage of the proposed syntax is that it leverages syntax that already exists. Type constructors can already be aliased in Rust using the same syntax used here, and while type aliases play no polymorphic role in type resolution, to users they seem very similar to associated types. A goal of this syntax is that many users will be able to use types which have associated type constructors without even being aware that this has something to do with a type system feature called higher-kindedness.

How We Teach This

This RFC uses the terminology “associated type constructor,” which has become the standard way to talk about this feature in the Rust community. This is not a very accessible framing of this concept; in particular the term “type constructor” is an obscure piece of jargon from type theory which most users cannot be expected to be familiar with.

Upon accepting this RFC, we should begin (with haste) referring to this concept as simply “generic associated types.” Today, associated types cannot be generic; after this RFC, this will be possible. Rather than teaching this as a separate feature, it will be taught as an advanced use case for associated types.

Patterns like “family traits” should also be taught in some way, possibly in the book or just through supplemental forms of documentation like blog posts.

This will also likely increase the frequency with which users have to employ higher rank trait bounds; we will want to put additional effort into teaching HRTBs and making them more approachable.

Drawbacks

Adding language complexity

This would add a somewhat complex feature to the language, being able to polymorphically resolve type constructors, and requires several extensions to the type system which make the implementation more complicated.

Additionally, though the syntax is designed to make this feature easy to learn, it also makes it more plausible that a user may accidentally use it when they mean something else, similar to the confusion between impl .. for Trait and impl<T> .. for T where T: Trait. For example:

// The user means this
trait Foo<'a> {
    type Bar: 'a;
}

// But they write this
trait Foo<'a> {
    type Bar<'a>;
}

Not full “higher-kinded types”

This does not add all of the features people want when they talk about higher-kinded types. For example, it does not enable traits like Monad. Some people may prefer to implement all of these features together at once. However, this feature is forward compatible with other kinds of higher-kinded polymorphism, and doesn’t preclude implementing them in any way. In fact, it paves the way by solving some implementation details that will impact other kinds of higher-kindedness as well, such as partial application.

Syntax isn’t like other forms of higher-kinded polymorphism

Though the proposed syntax is very similar to the syntax for associated types and type aliases, it is probably not possible for other forms of higher-kinded polymorphism to use a syntax along the same lines. For this reason, the syntax used to define an associated type constructor will probably be very different from the syntax used to e.g. implement a trait for a type constructor.

However, the syntax used for these other forms of higher-kinded polymorphism will depend on exactly what features they enable. It would be hard to design a syntax which is consistent with unknown features.

Alternatives

Push HRTBs harder without associated type constructors

An alternative is to push harder on HRTBs, possibly introducing some elision that would make them easier to use.

Currently, an approximation of StreamingIterator can be defined like this:

trait StreamingIterator<'a> {
   type Item: 'a;
   fn next(&'a self) -> Option<Self::Item>;
}

You can then bound types as T: for<'a> StreamingIterator<'a> to avoid the lifetime parameter infecting everything in which StreamingIterator appears.

However, this only partially prevents the infectiveness of StreamingIterator, only allows for some of the types that associated type constructors can express, and is in general a hacky attempt to work around the limitation rather than an equivalent alternative.

Impose restrictions on ATCs

What is often called “full higher kinded polymorphism” is allowing the use of type constructors as input parameters to other type constructors - higher order type constructors, in other words. Without any restrictions, multiparameter higher order type constructors present serious problems for type inference.

For example, if you are attempting to infer types, and you know you have a constructor of the form type, type -> Result<(), io::Error>, without any restrictions it is difficult to determine if this constructor is (), io::Error -> Result<(), io::Error> or io::Error, () -> Result<(), io::Error>.

Because of this, languages with first class higher kinded polymorphism tend to impose restrictions on these higher kinded terms, such as Haskell’s currying rules.

If Rust were to adopt higher order type constructors, it would need to impose similar restrictions on the kinds of type constructors they can receive. But associated type constructors, being a kind of alias, inherently mask the actual structure of the concrete type constructor. In other words, if we want to be able to use ATCs as arguments to higher order type constructors, we would need to impose those restrictions on all ATCs.

We have a list of restrictions we believe are necessary and sufficient; more background can be found in this blog post by nmatsakis:

  • Each argument to the ATC must be applied
  • They must be applied in the same order they appear in the ATC
  • They must be applied exactly once
  • They must be the left-most arguments of the constructor

These restrictions are quite constrictive; there are several applications of ATCs that we already know about that would be frustrated by them, such as the definition of Iterable for HashMap (for which the item is (&'a K, &'a V), applying the lifetime twice; see the sketch below).
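
For concreteness, a sketch of that definition under the syntax proposed in this RFC (the Iterable trait shown here is illustrative, reduced to the relevant associated type):

use std::collections::HashMap;

trait Iterable {
    // An associated type constructor over one lifetime.
    type Item<'a>;
}

impl<K, V> Iterable for HashMap<K, V> {
    // `'a` is applied twice on the right-hand side, so this
    // definition violates the "applied exactly once" restriction.
    type Item<'a> = (&'a K, &'a V);
}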

For this reason we have decided not to apply these restrictions to all ATCs. This will mean that if higher order type constructors are ever added to the language, they will not be able to take an abstract ATC as an argument. However, this can be maneuvered around using newtypes which do meet the restrictions, for example:

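// A concrete newtype applies `'a` exactly once, in order, leftmost,
// so it meets the restrictions even though the ATC it wraps is opaque.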
struct IterItem<'a, I: Iterable>(I::Item<'a>);

Unresolved questions

Summary

This RFC proposes a process for deciding detailed guidelines for code formatting, and default settings for Rustfmt. The outcome of the process should be an approved formatting style defined by a style guide and enforced by Rustfmt.

This RFC proposes creating a new repository under the rust-lang organisation called fmt-rfcs. It will be operated in a similar manner to the RFCs repository, but restricted to formatting issues. A new sub-team will be created to deal with those RFCs. Both the team and repository are expected to be temporary. Once the style guide is complete, the team can be disbanded and the repository frozen.

Motivation

There is a need to decide on detailed guidelines for the format of Rust code. A uniform, language-wide formatting style makes comprehending new code-bases easier and forestalls bikeshedding arguments in teams of Rust users. The utility of such guidelines has been proven by Go, amongst other languages.

The Rustfmt tool is reaching maturity and currently enforces a somewhat arbitrary, lightly discussed style, with many configurable options.

If Rustfmt is to become a widely accepted tool, there needs to be a process for the Rust community to decide on the default style, and how configurable that style should be.

These discussions should happen in the open and be highly visible. It is important that the Rust community has significant input to the process. The RFC repository would be an ideal place to have this discussion because it exists to satisfy these goals, and is tried and tested. However, the discussion is likely to be a high-bandwidth one (code style is a contentious and often subjective topic, and syntactic RFCs tend to be the highest traffic ones). Therefore, having the discussion on the RFCs repository could easily overwhelm it and make it less useful for other important discussions.

There currently exists a style guide as part of the Rust documentation. This is far more wide-reaching than just formatting style, but also not detailed enough to specify Rustfmt. This was originally developed in its own repository, but is now part of the main Rust repository. That seems like a poor venue for discussion of these guidelines due to visibility.

Detailed design

Process

The process for style RFCs will mostly follow the process for other RFCs. Anyone may submit an RFC. An overview of the process is:

  • If there is no single, obvious style, then open a GitHub issue on the fmt-rfcs repo for initial discussion. This initial discussion should identify which Rustfmt options are required to enforce the guideline.
  • Implement the style in rustfmt (behind an option if it is not the current default). In exceptional circumstances (such as where the implementation would require very deep changes to rustfmt), this step may be skipped.
  • Write an RFC formalising the formatting convention and referencing the implementation, submit as a PR to fmt-rfcs. The RFC should include the default values for options to enforce the guideline and which non-default options should be kept.
  • The RFC PR will be triaged by the style team and either assigned to a team member for shepherding, or closed.
  • When discussion has reached a fixed point, the RFC PR will be put into a final comment period (FCP).
  • After FCP, the RFC will either be accepted and merged or closed.
  • Implementation in Rustfmt can then be finished (including any changes due to discussion of the RFC), and defaults are set.

Scope of the process

This process is specifically limited to formatting style guidelines which can be enforced by Rustfmt with its current architecture. Guidelines that cannot be enforced by Rustfmt without a large amount of work are out of scope, even if they only pertain to formatting.

Note that whether Rustfmt should be configurable at all, and if so how configurable it should be, is itself a decision that should be dealt with using the formatting RFC process. That will be a rather exceptional RFC.

Size of RFCs

RFCs should be self-contained and coherent, whilst being as small as possible to keep discussion focused. For example, an RFC on ‘arithmetic and logic expressions’ is about the right size; ‘expressions’ would be too big, and ‘addition’ would be too small.

When is a guideline ready for RFC?

The purpose of the style RFC process is to foster an open discussion about style guidelines. Therefore, RFC PRs should be made early rather than late. It is expected that there may be more discussion and changes to style RFCs than is typical for Rust RFCs. However, at submission, RFC PRs should be completely developed and explained to the level where they can be used as a specification.

A guideline should usually be implemented in Rustfmt before an RFC PR is submitted. The RFC should be used to select an option to be the default behaviour, rather than to identify a range of options. An RFC can propose a combination of options (rather than a single one) as default behaviour. An RFC may propose some reorganisation of options.

Usually a style should be widely used in the community before it is submitted as an RFC. Where multiple styles are used, they should be covered as alternatives in the RFC, rather than being submitted as multiple RFCs. In some cases, a style may be proposed without wide use (we don’t want to discourage innovation), however, it should have been used in some real code, rather than just being sketched out.

Triage

RFC PRs are triaged by the style team. An RFC may be closed during triage (with feedback for the author) if the style team think it is not specified in enough detail, has too narrow or broad scope, or is not appropriate in some way (e.g., applies to more than just formatting). Otherwise, the PR will be assigned a shepherd as for other RFCs.

FCP

FCP will last for two weeks (assuming the team decide to meet every two weeks) and will be announced in the style team sub-team report.

Decision and post-decision process

The style team will make the ultimate decision on accepting or closing a style RFC PR. Decisions should be by consensus. Most discussion should take place on the PR comment thread; ideally, a decision is made once consensus is reached on the thread. Any additional discussion amongst the style team will be summarised on the thread.

If an RFC PR is accepted, it will be merged. An issue for implementation will be filed in the appropriate place (usually the Rustfmt repository) referencing the RFC. If the style guide needs to be updated, then an issue for that should be filed on the Rust repository.

The author of an RFC is not required to implement the guideline. If you are interested in working on the implementation for an ‘active’ RFC, but cannot determine if someone else is already working on it, feel free to ask (e.g. by leaving a comment on the associated issue).

The fmt-rfcs repository

The form of the fmt-rfcs repository will follow the rfcs repository. Accepted RFCs will live in a text directory, the README.md will include information taken from this RFC, and there will be an RFC template in the root of the repository. Issues on the repository can be used as placeholders for future RFCs and for preliminary discussion.

The RFC format will be illustrated by the RFC template. It will have the following sections:

  • summary
  • details
  • implementation
  • rationale
  • alternatives
  • unresolved questions

The ‘details’ section should contain examples of both what should and shouldn’t be done, cover simple and complex cases, and the interaction with other style guidelines.

The ‘implementation’ section should specify how options must be set to enforce the guideline, and what further changes (including additional options) are required. It should specify any renaming, reorganisation, or removal of options.

The ‘rationale’ section should motivate the choices behind the RFC. It should reference existing code bases which use the proposed style. ‘Alternatives’ should cover alternative possible guidelines, if appropriate.

Guidelines may include more than one acceptable rule, but should offer guidance for when to use each rule (which should be formal enough to be used by a tool).

For example:

A struct literal must be formatted either on a single line (with spaces after the opening brace and before the closing brace, and with fields separated by commas and spaces), or on multiple lines (with one field per line and newlines after the opening brace and before the closing brace). The former approach should be used for short struct literals, the latter for longer struct literals. For tools, the first approach should be used when the width of the fields (excluding commas and braces) is at most 16 characters. E.g.,

let x = Foo { a: 42, b: 34 };
let y = Foo {
    a: 42,
    b: 34,
    c: 1000
};

(Note this is just an example, not a proposed guideline).

The repository in embryonic form lives at nrc/fmt-rfcs. It illustrates what issues and PRs might look like, as well as including the RFC template. Note that typically there should be more discussion on an issue before submitting an RFC PR.

The repository should be updated as this RFC develops, and moved to the rust-lang GitHub organisation if this RFC is accepted.

The style team

The style sub-team will be responsible for handling style RFCs and making decisions related to code style and formatting.

Per the governance RFC, the core team would pick a leader who would then pick the rest of the team. I propose that the team should include members representative of the following areas:

  • Rustfmt,
  • the language, tools, and libraries sub-teams (since each has a stake in code style),
  • large Rust projects.

Because activity such as this hasn’t been done before in the Rust community, it is hard to identify suitable candidates for the team ahead of time. The team will probably start small and consist of core members of the Rust community. I expect that once the process gets underway the team can be rapidly expanded with community members who are active in the fmt-rfcs repository (i.e., submitting and constructively commenting on RFCs).

There will be a dedicated IRC channel for discussion on formatting issues: #rust-style.

Style guide

The existing style guide will be split into two guides: one dealing with API design and similar issues which will be managed by the libs team, and one dealing with formatting issues which will be managed by the style team. Note that the formatting part of the guide may include guidelines which are not enforced by Rustfmt. Those are outside the scope of the process defined in this RFC, but still belong in that part of the style guide.

When RFCs are accepted the style guide may need to be updated. Towards the end of the process, the style team should audit and edit the guide to ensure it is a coherent document.

Material goals

Hopefully, the style guideline process will have limited duration; one year seems reasonable. After that time, style guidelines for new syntax could be included with regular RFCs, or the fmt-rfcs repository could be maintained in a less active fashion.

At the end of the process, the fmt-rfcs repository should be a fairly complete guide for formatting Rust code, and useful as a specification for Rustfmt and tools with similar goals, such as IDEs. In particular, there should be a decision made on how configurable Rustfmt should be, and an agreed set of default options. The formatting style guide in the Rust repository should be a more human-friendly source of formatting guidelines, and should be in sync with the fmt-rfcs repo.

Drawbacks

This RFC introduces more process and bureaucracy, and requires more meetings for some core Rust contributors. Precious time and energy will need to be devoted to discussions.

Alternatives

Benevolent dictator - a single person dictates style rules which will be followed without question by the community. This seems to work for Go; I suspect it would not work for Rust.

Parliamentary ‘democracy’ - the community ‘elects’ a style team (via the usual RFC consensus process, rather than actual voting). The style team decides on style issues without an open process. This would be more efficient, but doesn’t fit very well with the open ethos of the Rust community.

Use the RFCs repo, rather than a new repo. This would have the benefit that style RFCs would get more visibility, and it is one less place to keep track of for Rust community members. However, it risks overwhelming the RFC repo with style debate.

Use issues on Rustfmt. I feel that the discussions would not have enough visibility in this fashion, but perhaps that can be addressed by wide and regular announcement.

Use a book format for the style repo, rather than a collection of RFCs. This would make it easier to see how the ‘final product’ style guide would look. However, I expect there will be many issues that are important to be aware of while discussing an RFC, that are not important to include in a final guide.

Have an existing team handle the process, rather than create a new style team. Saves on a little bureaucracy. Candidate teams would be language and tools. However, the language team has very little free bandwidth, and the tools team is probably not broad enough to effectively handle the style decisions.

Unresolved questions

Summary

Removes the one-type-only restriction on format_args! arguments. Expressions like format_args!("{0:x} {0:o}", foo) now work as intended, where each argument is still evaluated only once, in order of appearance (i.e. left-to-right).

Motivation

The format_args! macro and its friends historically only allowed a single type per argument, such that trivial format strings like "{0:?} == {0:x}" or "rgb({r}, {g}, {b}) is #{r:02x}{g:02x}{b:02x}" are illegal. This is massively inconvenient and counter-intuitive, especially considering the formatting syntax is borrowed from Python where such things are perfectly valid.

Upon closer investigation, the restriction is in fact an artificial implementation detail. For mapping format placeholders to macro arguments, the format_args! implementation did not bother to record type information for all the placeholders sequentially, but rather chose to remember only one type per argument. Also, the formatting logic has not received significant attention since its conception, while its uses have greatly expanded over the years, so the mechanism as a whole certainly needs more love.

Detailed design

Formatting is done during both compile time (expansion time, to be pedantic) and runtime in Rust. As we are concerned with format string parsing, not outputting, this RFC only touches the compile-time side of the existing formatting mechanism, which lives in libsyntax_ext and libfmt_macros.

Before continuing with the details, it is worth noting that the core flow of current Rust formatting is mapping arguments to placeholders to format specs. For clarity, we distinguish among placeholders, macro arguments and argument objects. They are all italicized to provide some visual hint for distinction.

To implement the proposed design, the following changes in behavior are made:

  • implicit references are resolved during parsing of the format string;
  • named macro arguments are resolved into positional ones;
  • placeholder types are remembered and de-duplicated for each macro argument;
  • the argument objects are emitted with the information gathered in the steps above.

As most of the details are best described in the code itself, we only illustrate some of the high-level changes below.

Implicit reference resolution

Currently two forms of implicit references exist: ArgumentNext and CountIsNextParam. Both take a positional macro argument and advance the same internal pointer, but the format spec is parsed before the position, as shown by format strings like "{foo:.*} {} {:.*}", which is in every way equivalent to "{foo:.0$} {1} {3:.2$}".

As the rule is already known even at compile-time, and does not require the whole format string to be known beforehand, the resolution can happen just inside the parser after a placeholder is successfully parsed. As a natural consequence, both forms can be removed from the rest of the compiler, simplifying work later.
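
For illustration, the implicit and explicit forms from the example above behave identically (the argument values here are arbitrary):

fn main() {
    // Precision for `foo` comes from positional argument 0; `{}` consumes
    // argument 1; `{:.*}` consumes argument 2 (precision) and 3 (value).
    println!("{foo:.*} {} {:.*}", 1, "x", 2, 1.23456, foo = 9.87654);
    // The fully explicit equivalent:
    println!("{foo:.0$} {1} {3:.2$}", 1, "x", 2, 1.23456, foo = 9.87654);
    // Both lines print: 9.9 x 1.23
}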

Named argument resolution

Not seen elsewhere in Rust, named arguments in format macros are best seen as syntactic sugar, and we’d better actually treat them as such. Just after successfully parsing the macro arguments, we immediately rewrite every name to its respective position in the argument list, which again simplifies the process.
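
Concretely, something like the following rewriting happens (a sketch; the values are arbitrary):

fn main() {
    // What the user writes:
    println!("{name} is {age}", name = "Alice", age = 30);
    // What the expansion effectively sees after name resolution:
    println!("{0} is {1}", "Alice", 30);
}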

Processing and expansion

We only have absolute positional references to macro arguments at this point, and it’s straightforward to remember all the unique placeholders encountered for each. The unique placeholders are emitted into argument objects in order, preserving evaluation order, with no other difference in behavior.
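
The user-visible consequence is that an argument referenced by several placeholders of different types is still evaluated exactly once, in order of appearance. A small illustration (noisy is a hypothetical helper):

fn noisy(x: u32) -> u32 {
    println!("evaluated!");
    x
}

fn main() {
    // Prints "evaluated!" exactly once, then "0x12 0b10010": one
    // argument, two placeholder types.
    println!("{0:#x} {0:#b}", noisy(18));
}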

Drawbacks

Due to the added data structures and processing, the time and memory costs of compilation may slightly increase. However, this is mere speculation without actual profiling and benchmarks, and the ergonomic benefits alone justify the additional costs.

Alternatives

Do nothing

One can always write a little more code to simulate the proposed behavior, and this is what people have most likely been doing under today’s constraints. As in:

fn main() {
	let r = 0x66;
	let g = 0xcc;
	let b = 0xff;

	// rgb(102, 204, 255) == #66ccff
	// println!("rgb({r}, {g}, {b}) == #{r:02x}{g:02x}{b:02x}", r=r, g=g, b=b);
	println!("rgb({}, {}, {}) == #{:02x}{:02x}{:02x}", r, g, b, r, g, b);
}

Or slightly more verbose when side effects are in play:

fn do_something(i: &mut usize) -> usize {
	let result = *i;
	*i += 1;
	result
}

fn main() {
	let mut i = 0x1234usize;

	// 0b1001000110100 0o11064 0x1234
	// 0x1235
	// println!("{0:#b} {0:#o} {0:#x}", do_something(&mut i));
	// println!("{:#x}", i);

	// need to consider side effects, hence a temp var
	{
		let r = do_something(&mut i);
		println!("{:#b} {:#o} {:#x}", r, r, r);
		println!("{:#x}", i);
	}
}

While the effects are the same and nothing requires modification, the ergonomics are simply bad and the code becomes unnecessarily convoluted.

Unresolved questions

None.


Summary

This RFC proposes a 1.0 API for the regex crate and therefore a move out of the rust-lang-nursery organization and into the rust-lang organization. Since the API of regex has largely remained unchanged since its inception 2 years ago, significant emphasis is placed on retaining the existing API. Some minor breaking changes are proposed.

Motivation

Regular expressions are a widely used tool and most popular programming languages either have an implementation of regexes in their standard library, or there exists at least one widely used third party implementation. It therefore seems reasonable for Rust to do something similar.

The regex crate specifically serves many use cases, most of which are somehow related to searching strings for patterns. Describing regular expressions in detail is beyond the scope of this RFC, but briefly, these core use cases are supported in the main API:

  1. Testing whether a pattern matches some text.
  2. Finding the location of a match of a pattern in some text.
  3. Finding the location of a match of a pattern—and locations of all its capturing groups—in some text.
  4. Iterating over successive non-overlapping matches of (2) and (3).

The expected outcome is that the regex crate should be the preferred default choice for matching regular expressions when writing Rust code. This is already true today; this RFC formalizes it.

Detailed design

Syntax

Evolution

The public API of a regex library includes the syntax of a regular expression. A change in the semantics of the syntax can cause otherwise working programs to break, yet, we’d still like the option to expand the syntax if necessary. Thus, this RFC proposes:

  1. Any change that causes a previously invalid regex pattern to become valid is not a breaking change. For example, the escape sequence \y is not a valid pattern, but could become one in a future release without a major version bump.
  2. Any change that causes a previously valid regex pattern to become invalid is a breaking change.
  3. Any change that causes a valid regex pattern to change its matching semantics is a breaking change. (For example, changing \b from “word boundary assertion” to “backspace character.”)

Bug fixes and Unicode upgrades are exceptions to both (2) and (3).

Another interesting exception to (2) is that compiling a regex can fail if the entire compiled object would exceed some pre-defined user configurable size. In particular, future changes to the compiler could cause certain instructions to use more memory, or indeed, the representation of the compiled regex could change completely. This could cause a regex that fit under the size limit to no longer fit, and therefore fail to compile. These cases are expected to be extremely rare in practice. Notably, the default size limit is 10MB.

Concrete syntax

The syntax is exhaustively documented in the current public API documentation: http://doc.rust-lang.org/regex/regex/index.html#syntax

To my knowledge, the evolution as proposed in this RFC has been followed since regex was created. The syntax has largely remained unchanged with few additions.

Expansion concerns

There are a few possible avenues for expansion, and we take measures to make sure they are possible with respect to API evolution.

  • Escape sequences are often blessed with special semantics. For example, \d is a Unicode character class that matches any digit and \b is a word boundary assertion. We may one day like to add more escape sequences with special semantics. For this reason, any unrecognized escape sequence makes a pattern invalid.
  • If we wanted to expand the syntax with various look-around operators, then it would be possible since most common syntax is considered an invalid pattern today. In particular, all of the syntactic forms listed here are invalid patterns in regex.
  • Character class sets are another potentially useful feature that may be worth adding. Currently, various forms of set notation are treated as valid patterns, but this RFC proposes making them invalid patterns before 1.0.
  • Additional named Unicode classes or codepoints may be desirable to add. Today, any pattern of the form \p{NAME} where NAME is unrecognized is considered invalid, which leaves room for expansion.
  • If all else fails, we can introduce new flags that enable new features that conflict with stable syntax. This is possible because using an unrecognized flag results in an invalid pattern.

Core API

The core API of the regex crate is the Regex type:

pub struct Regex(_);

It has one primary constructor:

impl Regex {
  /// Creates a new regular expression. If the pattern is invalid or otherwise
  /// fails to compile, this returns an error.
  pub fn new(pattern: &str) -> Result<Regex, Error>;
}

And five core search methods. All searching completes in worst case linear time with respect to the search text (the size of the regex is taken as a constant).

impl Regex {
  /// Returns true if and only if the text matches this regex.
  pub fn is_match(&self, text: &str) -> bool;

  /// Returns the leftmost-first match of this regex in the text given. If no
  /// match exists, then None is returned.
  ///
  /// The leftmost-first match is defined as the first match that is found
  /// by a backtracking search.
  pub fn find<'t>(&self, text: &'t str) -> Option<Match<'t>>;

  /// Returns an iterator of successive non-overlapping matches of this regex
  /// in the text given.
  pub fn find_iter<'r, 't>(&'r self, text: &'t str) -> Matches<'r, 't>;

  /// Returns the leftmost-first match of this regex in the text given with
  /// locations for all capturing groups that participated in the match.
  pub fn captures(&self, text: &str) -> Option<Captures>;

  /// Returns an iterator of successive non-overlapping matches with capturing
  /// group information in the text given.
  pub fn captures_iter<'r, 't>(&'r self, text: &'t str) -> CaptureMatches<'r, 't>;
}

(N.B. The captures method can technically replace all uses of find and is_match, but is potentially slower. Namely, the API reflects a performance trade off: the more you ask for, the harder the regex engine has to work.)
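
A brief usage sketch of these core methods (the pattern and text are arbitrary):

use regex::Regex;

fn main() {
    let re = Regex::new(r"(\w+)@(\w+)\.com").unwrap();
    let text = "reach me at jane@example.com";
    assert!(re.is_match(text));

    // `find` yields a Match with offsets and the matched text.
    let m = re.find(text).unwrap();
    assert_eq!(m.as_str(), "jane@example.com");

    // `captures` additionally reports the capturing groups.
    let caps = re.captures(text).unwrap();
    assert_eq!(&caps[1], "jane");
    assert_eq!(&caps[2], "example");
}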

There is one additional, but idiosyncratic, search method:

impl Regex {
  /// Returns the end location of a match if one exists in text.
  ///
  /// This may return a location preceding the end of a proper leftmost-first
  /// match. In particular, it may return the location at which a match is
  /// determined to exist. For example, matching `a+` against `aaaaa` will
  /// return `1` while the end of the leftmost-first match is actually `5`.
  ///
  /// This has the same performance characteristics as `is_match`.
  pub fn shortest_match(&self, text: &str) -> Option<usize>;
}

And two methods for splitting:

impl Regex {
  /// Returns an iterator of substrings of `text` delimited by a match of
  /// this regular expression. Each element yielded by the iterator corresponds
  /// to text that *isn't* matched by this regex.
  pub fn split<'r, 't>(&'r self, text: &'t str) -> Split<'r, 't>;

  /// Returns an iterator of at most `limit` substrings of `text` delimited by
  /// a match of this regular expression. Each element yielded by the iterator
  /// corresponds to text that *isn't* matched by this regex. The remainder of
  /// `text` that is not split will be the last element yielded by the
  /// iterator.
  pub fn splitn<'r, 't>(&'r self, text: &'t str, limit: usize) -> SplitN<'r, 't>;
}
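
For example (a sketch):

use regex::Regex;

fn main() {
    // Split on a comma followed by optional whitespace.
    let re = Regex::new(r",\s*").unwrap();
    let fields: Vec<&str> = re.split("a, b,c").collect();
    assert_eq!(fields, vec!["a", "b", "c"]);
}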

And three methods for replacement. Replacement is discussed in more detail in a subsequent section.

impl Regex {
  /// Replaces matches of this regex in `text` with `rep`. If no matches were
  /// found, then the given string is returned unchanged, otherwise a new
  /// string is allocated.
  ///
  /// `replace` replaces the first match only. `replace_all` replaces all
  /// matches. `replacen` replaces at most `limit` matches.
  fn replace<'t, R: Replacer>(&self, text: &'t str, rep: R) -> Cow<'t, str>;
  fn replace_all<'t, R: Replacer>(&self, text: &'t str, rep: R) -> Cow<'t, str>;
  fn replacen<'t, R: Replacer>(&self, text: &'t str, limit: usize, rep: R) -> Cow<'t, str>;
}
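
For example, using $name group references in the replacement string (a sketch):

use regex::Regex;

fn main() {
    let re = Regex::new(r"(?P<y>\d{4})-(?P<m>\d{2})").unwrap();
    // Returns a Cow<str>; a new String is only allocated because
    // replacements actually occurred.
    let after = re.replace_all("1973-01, 1975-08", "$m/$y");
    assert_eq!(after, "01/1973, 08/1975");
}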

And lastly, three simple accessors:

impl Regex {
  /// Returns the original pattern string.
  pub fn as_str(&self) -> &str;

  /// Returns an iterator over all capturing groups in the pattern in the order
  /// they were defined (by position of the leftmost parenthesis). The name of
  /// the group is yielded if it has a name, otherwise None is yielded.
  pub fn capture_names(&self) -> CaptureNames;

  /// Returns the total number of capturing groups in the pattern. This
  /// includes the implicit capturing group corresponding to the entire
  /// pattern.
  pub fn captures_len(&self) -> usize;
}

Finally, Regex impls the Send, Sync, Display, Debug, Clone and FromStr traits from the standard library.

Error

The Error enum is an extensible enum, similar to std::io::Error, corresponding to the different ways that regex compilation can fail. In particular, this means that adding a new variant to this enum is not a breaking change. (Removing or changing an existing variant is still a breaking change.)

pub enum Error {
  /// A syntax error.
  Syntax(SyntaxError),
  /// The compiled program exceeded the set size limit.
  /// The argument is the size limit imposed.
  CompiledTooBig(usize),
  /// Hints that destructuring should not be exhaustive.
  ///
  /// This enum may grow additional variants, so this makes sure clients
  /// don't count on exhaustive matching. (Otherwise, adding a new variant
  /// could break existing code.)
  #[doc(hidden)]
  __Nonexhaustive,
}

Note that the Syntax variant could contain the Error type from the regex-syntax crate, but this couples regex-syntax to the public API of regex. We sidestep this hazard by defining a newtype in regex that internally wraps regex_syntax::Error. This also enables us to selectively expose more information in the future.
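
Correspondingly, code matching on Error must include a wildcard arm; a hedged sketch (the Debug formatting of the wrapped SyntaxError is an assumption):

use regex::{Error, Regex};

fn describe(err: &Error) -> String {
    match *err {
        // Assumes the wrapped syntax error type implements Debug.
        Error::Syntax(ref e) => format!("syntax error: {:?}", e),
        Error::CompiledTooBig(limit) => {
            format!("compiled regex exceeds the size limit of {} bytes", limit)
        }
        // Required: the enum may grow new variants.
        _ => "unknown regex error".to_string(),
    }
}

fn main() {
    // An unclosed group is a syntax error.
    if let Err(e) = Regex::new(r"(unclosed") {
        println!("{}", describe(&e));
    }
}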

RegexBuilder

In most cases, the construction of a regex is done with Regex::new. There are however some options one might want to tweak. This can be done with a RegexBuilder:

impl RegexBuilder {
  /// Creates a new builder from the given pattern.
  pub fn new(pattern: &str) -> RegexBuilder;

  /// Compiles the pattern and all set options. If successful, a Regex is
  /// returned. Otherwise, if compilation failed, an Error is returned.
  ///
  /// N.B. `RegexBuilder::new("...").build()` is equivalent to
  /// `Regex::new("...")`.
  pub fn build(&self) -> Result<Regex, Error>;

  /// Set the case insensitive flag (i).
  pub fn case_insensitive(&mut self, yes: bool) -> &mut RegexBuilder;

  /// Set the multi line flag (m).
  pub fn multi_line(&mut self, yes: bool) -> &mut RegexBuilder;

  /// Set the dot-matches-any-character flag (s).
  pub fn dot_matches_new_line(&mut self, yes: bool) -> &mut RegexBuilder;

  /// Set the swap-greedy flag (U).
  pub fn swap_greed(&mut self, yes: bool) -> &mut RegexBuilder;

  /// Set the ignore whitespace flag (x).
  pub fn ignore_whitespace(&mut self, yes: bool) -> &mut RegexBuilder;

  /// Set the Unicode flag (u).
  pub fn unicode(&mut self, yes: bool) -> &mut RegexBuilder;

  /// Set the approximate size limit (in bytes) of the compiled regular
  /// expression.
  ///
  /// If compiling a pattern would approximately exceed this size, then
  /// compilation will fail.
  pub fn size_limit(&mut self, limit: usize) -> &mut RegexBuilder;

  /// Set the approximate size limit (in bytes) of the cache used by the DFA.
  ///
  /// This is a per thread limit. Once the DFA fills the cache, it will be
  /// wiped and refilled again. If the cache is wiped too frequently, the
  /// DFA will quit and fall back to another matching engine.
  pub fn dfa_size_limit(&mut self, limit: usize) -> &mut RegexBuilder;
}

Captures

A Captures value stores the locations of all matching capturing groups for a single match. It provides convenient access to those locations indexed by either number, or, if available, name.

The first capturing group (index 0) is always unnamed and always corresponds to the entire match. Other capturing groups correspond to groups in the pattern. Capturing groups are indexed by the position of their leftmost parenthesis in the pattern.

Note that Captures is a type constructor with a single parameter: the lifetime of the text searched by the corresponding regex. In particular, the lifetime of Captures is not tied to the lifetime of a Regex.

impl<'t> Captures<'t> {
  /// Returns the match associated with the capture group at index `i`. If
  /// `i` does not correspond to a capture group, or if the capture group
  /// did not participate in the match, then `None` is returned.
  pub fn get(&self, i: usize) -> Option<Match<'t>>;

  /// Returns the match for the capture group named `name`. If `name` isn't a
  /// valid capture group or didn't match anything, then `None` is returned.
  pub fn name(&self, name: &str) -> Option<Match<'t>>;

  /// Returns the number of captured groups. This is always at least 1, since
  /// the first unnamed capturing group corresponding to the entire match
  /// always exists.
  pub fn len(&self) -> usize;

  /// Expands all instances of $name in the text given to the value of the
  /// corresponding named capture group. The expanded string is written to
  /// dst.
  ///
  /// The name in $name may be an integer corresponding to the index of a capture
  /// group or it can be the name of a capture group. If the name isn't a valid
  /// capture group, then it is replaced with an empty string.
  ///
  /// The longest possible name is used. e.g., $1a looks up the capture group
  /// named 1a and not the capture group at index 1. To exert more precise
  /// control over the name, use braces, e.g., ${1}a.
  ///
  /// To write a literal $, use $$.
  pub fn expand(&self, replacement: &str, dst: &mut String);
}

The Captures type impls Debug, Index<usize> (for numbered capture groups) and Index<str> (for named capture groups). A downside of the Index impls is that the return value is bounded to the lifetime of Captures instead of the lifetime of the actual text searched because of how the Index trait is defined. Callers can work around that limitation if necessary by using an explicit method such as get or name.
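
A short sketch of Captures access and expansion:

use regex::Regex;

fn main() {
    let re = Regex::new(r"(?P<first>\w+)\s+(?P<last>\w+)").unwrap();
    let caps = re.captures("John Smith").unwrap();

    // Index impls: the results are bounded to the lifetime of `caps`.
    assert_eq!(&caps[0], "John Smith");
    assert_eq!(&caps["first"], "John");

    // `get` returns a Match tied to the searched text instead.
    assert_eq!(caps.get(2).unwrap().as_str(), "Smith");

    // $name expansion.
    let mut dst = String::new();
    caps.expand("$last, $first", &mut dst);
    assert_eq!(dst, "Smith, John");
}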

Replacer

The Replacer trait is a helper trait to make the various replace methods on Regex more ergonomic. In particular, it makes it possible to use either a standard string as a replacement, or a closure with more explicit access to a Captures value.

pub trait Replacer {
  /// Appends text to dst to replace the current match.
  ///
  /// The current match is represented by caps, which is guaranteed to have a
  /// match at capture group 0.
  ///
  /// For example, a no-op replacement would be
  /// dst.push_str(caps.get(0).unwrap().as_str()).
  fn replace_append(&mut self, caps: &Captures, dst: &mut String);

  /// Return a fixed unchanging replacement string.
  ///
  /// When doing replacements, if access to Captures is not needed, then
  /// it can be beneficial from a performance perspective to avoid finding
  /// sub-captures. In general, this is called once for every call to replacen.
  fn no_expansion<'r>(&'r mut self) -> Option<Cow<'r, str>> {
    None
  }
}

Along with this trait, there is also a helper type, NoExpand, that implements Replacer like so:

pub struct NoExpand<'t>(pub &'t str);

impl<'t> Replacer for NoExpand<'t> {
    fn replace_append(&mut self, _: &Captures, dst: &mut String) {
        dst.push_str(self.0);
    }

    fn no_expansion<'r>(&'r mut self) -> Option<Cow<'r, str>> {
        Some(Cow::Borrowed(self.0))
    }
}

This permits callers to use NoExpand with the replace methods to guarantee that the replacement string is never searched for $group replacement syntax.

We also provide two more implementations of the Replacer trait: &str and FnMut(&Captures) -> String.
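
The closure implementation makes ad hoc replacements convenient; for example (a sketch):

use regex::{Captures, Regex};

fn main() {
    let re = Regex::new(r"\d+").unwrap();
    // The closure receives each match's Captures and returns the
    // replacement String.
    let after = re.replace_all("room 4, floor 12", |caps: &Captures| {
        format!("#{}", &caps[0])
    });
    assert_eq!(after, "room #4, floor #12");
}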

quote

There is one free function in regex:

/// Escapes all regular expression meta characters in `text`.
///
/// The string returned may be safely used as a literal in a regex.
pub fn quote(text: &str) -> String;

RegexSet

A RegexSet represents the union of zero or more regular expressions. It is a specialized machine that can match multiple regular expressions simultaneously. Conceptually, it is similar to joining multiple regexes as alternates, e.g., re1|re2|...|reN, with one crucial difference: in a RegexSet, multiple expressions can match. This means that each pattern can be reasoned about independently. A RegexSet is ideal for building simpler lexers or an HTTP router.

Because of their specialized nature, they can only report which regexes match. They do not report match locations. In theory, this could be added in the future, but is difficult.

pub struct RegexSet(_);

impl RegexSet {
  /// Constructs a new RegexSet from the given sequence of patterns.
  ///
  /// The order of the patterns given is used to assign increasing integer
  /// ids starting from 0. Namely, matches are reported in terms of these ids.
  pub fn new<I, S>(patterns: I) -> Result<RegexSet, Error>
      where S: AsRef<str>, I: IntoIterator<Item=S>;

  /// Returns the total number of regexes in this set.
  pub fn len(&self) -> usize;

  /// Returns true if and only if one or more regexes in this set match
  /// somewhere in the given text.
  pub fn is_match(&self, text: &str) -> bool;

  /// Returns the set of regular expressions that match somewhere in the given
  /// text.
  pub fn matches(&self, text: &str) -> SetMatches;
}

RegexSet impls the Debug and Clone traits.

The SetMatches type is queryable and implements IntoIterator.

pub struct SetMatches(_);

impl SetMatches {
  /// Returns true if this set contains 1 or more matches.
  pub fn matched_any(&self) -> bool;

  /// Returns true if and only if the regex identified by the given id is in
  /// this set of matches.
  ///
  /// This panics if the id given is >= the number of regexes in the set that
  /// these matches came from.
  pub fn matched(&self, id: usize) -> bool;

  /// Returns the total number of regexes in the set that created these
  /// matches.
  pub fn len(&self) -> usize;

  /// Returns an iterator over the ids in the set that correspond to a match.
  pub fn iter(&self) -> SetMatchesIter;
}

SetMatches impls the Debug and Clone traits.

Note that a builder is not proposed for RegexSet in this RFC; however, it is likely one will be added at some point in a backwards compatible way.
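
A short usage sketch:

use regex::RegexSet;

fn main() {
    let set = RegexSet::new(&[r"\w+", r"\d+", r"^foo"]).unwrap();
    // All three patterns match somewhere in the text, and each is
    // reported independently by its id.
    let matched: Vec<usize> = set.matches("foo 123").iter().collect();
    assert_eq!(matched, vec![0, 1, 2]);
}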

The bytes submodule

All of the above APIs have thus far been explicitly for searching text where text has type &str. While this author believes that suits most use cases, it should also be possible to search a regex on arbitrary bytes, i.e., &[u8]. One particular use case is quickly searching a file via a memory map. If regexes could only search &str, then one would have to verify it was UTF-8 first, which could be costly. Moreover, if the file isn’t valid UTF-8, then you either can’t search it, or you have to allocate a new string and lossily copy the contents. Neither case is particularly ideal. It would instead be nice to just search the &[u8] directly.

This RFC proposes including a bytes submodule in the crate. The API of this submodule is a clone of the API described so far, except with &str replaced by &[u8] for the search text (patterns are still &str). The clone includes Regex itself, along with all supporting types and traits such as Captures, Replacer, FindIter, RegexSet, RegexBuilder and so on. (This RFC describes some alternative designs in a subsequent section.)

Since the API is a clone of what has been seen so far, it is not written out again. Instead, we’ll discuss the key differences.

Again, the first difference is that a bytes::Regex can search &[u8] while a Regex can search &str.

The second difference is that a bytes::Regex can completely disable Unicode support and explicitly match arbitrary bytes. The details:

  1. The u flag can be disabled even when disabling it might cause the regex to match invalid UTF-8. When the u flag is disabled, the regex is said to be in “ASCII compatible” mode.
  2. In ASCII compatible mode, neither Unicode codepoints nor Unicode character classes are allowed.
  3. In ASCII compatible mode, Perl character classes (\w, \d and \s) revert to their typical ASCII definition. \w maps to [[:word:]], \d maps to [[:digit:]] and \s maps to [[:space:]].
  4. In ASCII compatible mode, word boundaries use the ASCII compatible \w to determine whether a byte is a word byte or not.
  5. Hexadecimal notation can be used to specify arbitrary bytes instead of Unicode codepoints. For example, in ASCII compatible mode, \xFF matches the literal byte \xFF, while in Unicode mode, \xFF is a Unicode codepoint that matches its UTF-8 encoding of \xC3\xBF. Similarly for octal notation.
  6. . matches any byte except for \n instead of any Unicode codepoint. When the s flag is enabled, . matches any byte.

An interesting property of the above is that while the Unicode flag is enabled, a bytes::Regex is guaranteed to match only valid UTF-8 in a &[u8]. Like Regex, the Unicode flag is enabled by default.

N.B. The Unicode flag can also be selectively disabled in a Regex, but not in a way that permits matching invalid UTF-8.
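
For example, searching bytes that are not valid UTF-8 (a sketch):

use regex::bytes::Regex;

fn main() {
    // (?-u) puts the pattern in ASCII compatible mode, so \xFF is the
    // literal byte 0xFF rather than a Unicode codepoint.
    let re = Regex::new(r"(?-u)\xFF+").unwrap();
    assert!(re.is_match(b"\x00\xFF\xFF"));
}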

Drawbacks

Guaranteed linear time matching

A significant contract in the API of the regex crate is that all searching has worst case O(n) complexity, where n ~ length(text). (The size of the regular expression is taken as a constant.) This contract imposes significant restrictions on both the implementation and the set of features exposed in the pattern language. A full analysis is beyond the scope of this RFC, but here are the highlights:

  1. Unbounded backtracking can’t be used to implement matching. Backtracking can be quite fast in practice (indeed, the current implementation uses bounded backtracking in some cases), but has worst case exponential time.
  2. Permitting backreferences in the pattern language can cause matching to become NP-complete, which (probably) can’t be solved in linear time.
  3. Arbitrary look around is probably difficult to fit into a linear time guarantee in practice.

The benefit to the linear time guarantee is just that: no matter what, all searching completes in linear time with respect to the search text. This is a valuable guarantee to make, because it means that one can execute arbitrary regular expressions over arbitrary input and be absolutely sure that it will finish in some “reasonable” time.

Of course, in practice, constants that are omitted from complexity analysis actually matter. For this reason, the regex crate takes a number of steps to keep constants low. For example, by placing a limit on the size of the regular expression or choosing an appropriate matching engine when another might result in higher constant factors.

This particular drawback segregates Rust’s regular expression library from most other regular expression libraries that programmers may be familiar with. Languages such as Java, Python, Perl, Ruby, PHP and C++ support more flavorful regexes by default. Go is the only language this author knows of whose standard regex implementation guarantees linear time matching. Of course, RE2 is also worth mentioning, which is a C++ regex library that guarantees linear time matching. There are other implementations of regexes that guarantee linear time matching (TRE, for example), but none of them are particularly popular.

It is also worth noting that since Rust’s FFI is zero cost, one can bind to existing regex implementations that provide more features (bindings for both PCRE1 and Oniguruma exist today).

Allocation

The regex API assumes that the implementation can dynamically allocate memory. Indeed, the current implementation takes advantage of this. A regex library that has no requirement on dynamic memory allocation would look significantly different than the one that exists today. Dynamic memory allocation is utilized pervasively in the parser, compiler and even during search.

The benefit of permitting dynamic memory allocation is that it makes the implementation and API simpler. It does, however, make the regex crate unusable in environments that don’t have dynamic memory allocation.

This author isn’t aware of any regex library that can work without dynamic memory allocation.

With that said, regex may want to grow custom allocator support when the corresponding traits stabilize.

Synchronization is implicit

Every Regex value can be safely used from multiple threads simultaneously. Since a Regex has interior mutable state, this implies that it must do some kind of synchronization in order to be safe.

There are some reasons why we might want to do synchronization automatically:

  1. Regex exposes an immutable API. That is, from looking at its set of methods, none of them borrow the Regex mutably (or otherwise claim to mutate the Regex). This author claims that since there is no observable mutation of a Regex, it not being thread safe would violate the principle of least surprise.
  2. Often, a Regex should be compiled once and reused repeatedly in multiple searches. To facilitate this, lazy_static! can be used to guarantee that compilation happens exactly once (see the sketch after this list). lazy_static! requires its types to be Sync. A user of Regex could work around this by wrapping a Regex in a Mutex, but this would make misuse too easy. For example, locking a Regex in one thread would prevent simultaneous searching in another thread.
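
For reference, the lazy_static! pattern in question (a sketch using the lazy_static crate):

#[macro_use]
extern crate lazy_static;
extern crate regex;

use regex::Regex;

lazy_static! {
    // Compiled once on first use; safe to share because Regex is Sync.
    static ref WORD: Regex = Regex::new(r"\w+").unwrap();
}

fn main() {
    assert!(WORD.is_match("hello"));
}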

Synchronization has overhead, although it is extremely small (and dwarfed by general matching overhead). The author has benchmarked the regex implementation against GNU grep in an ad hoc fashion, and per-match overhead is comparable in single-threaded use. It is this author’s opinion that it is good enough. If synchronization overhead across multiple threads is too much, callers may elect to clone the Regex so that each thread gets its own copy. Cloning a Regex is no more expensive than what would be done internally automatically, but it does eliminate contention.

An alternative is to increase the API surface and have types that are synchronized by default and types that aren’t. This was discussed at length in this thread. My conclusion from this thread is that we either expand the API surface, break the current API, or keep implicit synchronization as-is. In this author’s opinion, neither expanding the API nor breaking it is worth avoiding the negligible synchronization overhead.

The implementation is complex

Regular expression engines have a lot of moving parts and it often requires quite a bit of context on how the whole library is organized in order to make significant contributions. Therefore, moving regex into rust-lang is a maintenance hazard. This author has tried to mitigate this hazard somewhat by doing the following:

  1. Offering to mentor contributions. Significant contributions have thus far fizzled, but minor contributions—even to complex code like the DFA—have been successful.
  2. Documenting not just the API, but the internals. The DFA is, for example, heavily documented.
  3. Writing a HACKING.md guide that gives a sweeping overview of the design.
  4. Maintaining significant test and benchmark suites.

With that said, there is still a lot more that could be done to mitigate the maintenance hazard. In this author’s opinion, the interaction between the three parts of the implementation (parsing, compilation, searching) is not documented clearly enough.

Alternatives

Big picture

The most important alternative is to decide not to bless a particular implementation of regular expressions. We might want to go this route for any number of reasons (see: Drawbacks). However, the regex crate is already widely used, which provides at least some evidence that some set of programmers find it good enough for general purpose regex searching.

The impact of not moving regex into rust-lang is, plainly, that Rust won’t have an “officially blessed” regex implementation. Many programmers may appreciate the complexity of a regex implementation, and therefore might insist that one be officially maintained. However, to be honest, it isn’t quite clear what would happen in practice. This author is speculating.

bytes::Regex

This RFC proposes stabilizing the bytes sub-module of the regex crate in its entirety. The bytes sub-module is a near clone of the API at the crate level with one important difference: it searches &[u8] instead of &str. This design was motivated by a similar split in std, but there are alternatives.

A regex trait

One alternative is designing a trait that looks something like this:

trait Regex {
  type Text: ?Sized;

  fn is_match(&self, text: &Self::Text) -> bool;
  fn find(&self, text: &Self::Text) -> Option<Match>;
  fn find_iter<'r, 't>(&'r self, text: &'t Self::Text) -> Matches<'r, 't, Self::Text>;
  // and so on
}

However, there are a couple problems with this approach. First and foremost, the use cases of such a trait aren’t exactly clear. It does make writing generic code that searches either a &str or a &[u8] possible, but the semantics of searching &str (always valid UTF-8) or &[u8] are quite a bit different with respect to the original Regex. Secondly, the trait isn’t obviously implementable by others. For example, some of the methods return iterator types such as Matches that are typically implemented with a lower level API that isn’t exposed. This suggests that a straight-forward traitification of the current API probably isn’t appropriate, and perhaps, a better trait needs to be more fundamental to regex searching.

Perhaps the strongest reason to not adopt this design for regex 1.0 is that we don’t have any experience with it and there hasn’t been any demand for it. In particular, it could be prototyped in another crate.

Reuse some types

In the current proposal, the bytes submodule completely duplicates the top-level API, including all iterator types, Captures and even the Replacer trait. We could parameterize many of those types over the type of the text searched. For example, the proposed Replacer trait looks like this:

trait Replacer {
    fn replace_append(&mut self, caps: &Captures, dst: &mut String);

    fn no_expansion<'r>(&'r mut self) -> Option<Cow<'r, str>> {
        None
    }
}

We might add an associated type like so:

trait Replacer {
    type Text: ToOwned + ?Sized;

    fn replace_append(
        &mut self,
        caps: &Captures<Self::Text>,
        dst: &mut <Self::Text as ToOwned>::Owned,
    );

    fn no_expansion<'r>(&'r mut self) -> Option<Cow<'r, Self::Text>> {
        None
    }
}

But parameterizing the Captures type is a little bit tricky. Namely, methods like get want to slice the text at match offsets, but this can’t be done safely in generic code without introducing another public trait.

The final death knell in this idea is that these two implementations cannot co-exist:

impl<F> Replacer for F where F: FnMut(&Captures) -> String {
    type Text = str;

    fn replace_append(&mut self, caps: &Captures, dst: &mut String) {
        dst.push_str(&(*self)(caps));
    }
}

impl<F> Replacer for F where F: FnMut(&Captures) -> Vec<u8> {
    type Text = [u8];

    fn replace_append(&mut self, caps: &Captures, dst: &mut Vec<u8>) {
        dst.extend(&(*self)(caps));
    }
}

Perhaps there is a path through this using yet more types or more traits, but without a really strong motivating reason to find it, I’m not convinced it’s worth it. Duplicating all of the types is unfortunate, but it’s simple.

Unresolved questions

The regex repository has more than just the regex crate.

regex-syntax

This crate exposes a regular expression parser and abstract syntax that is completely divorced from compilation or searching. It is not part of regex proper since it may experience more frequent breaking changes and is far less frequently used. It is not clear whether this crate will ever see 1.0, and if it does, what criteria would be used to judge it suitable for 1.0. Nevertheless, it is a useful public API, but it is not part of this RFC.

regex-capi

Recently, regex-capi was built to provide a C API to this regex library. It has been used to build cgo bindings to this library for Go. Given its young age, it is not part of this proposal but will be maintained as a pre-1.0 crate in the same repository.

regex_macros

The regex! compiler plugin is a macro that can compile regular expressions when your Rust program compiles. Stated differently, regex!("...") is transformed into Rust code that executes a search of the given pattern directly. It was written two years ago and largely hasn’t changed since. When it was first written, it had two major benefits:

  1. If there was a syntax error in your regex, your Rust program would not compile.
  2. It was faster.

Today, (1) can be simulated in practice with the use of a Clippy lint and (2) is no longer true. In fact, regex! is at least one order of magnitude slower than the standard Regex implementation.

The future of regex_macros is not clear. In one sense, since it is a compiler plugin, there hasn’t been much interest in developing it further since its audience is necessarily limited. In another sense, it’s not entirely clear what its implementation path is. It would take considerable work for it to beat the current Regex implementation (if it’s even possible). More discussion on this is out of scope.

Dependencies

As of now, regex has several dependencies:

  • aho-corasick
  • memchr
  • thread_local
  • regex-syntax
  • utf8-ranges

All of them except for thread_local were written by this author, and were primarily motivated by the needs of the regex crate. They were split out because they seem generally useful.

There may be other things in regex (today or in the future) that may also be helpful to others outside the strict context of regex. Is it beneficial to split such things out and create a longer list of dependencies? Or should we keep regex as tight as possible?

Exposing more internals

It is conceivable that others might find interest in the regex compiler or lower level access to the matching engines. We could do something similar to regex-syntax and expose some internals in a separate crate. However, there isn’t a pressing desire to do this at the moment, and it would probably require a good deal of work.

Breaking changes

This section of the RFC lists all breaking changes between regex 0.1 and the API proposed in this RFC.

  • find and find_iter now return values of type Match instead of (usize, usize). The Match type has start and end methods which can be used to recover the original offsets, as well as an as_str method to get the matched text.
  • The Captures type no longer has any iterators defined. Instead, callers should use the Regex::capture_names method.
  • bytes::Regex enables the Unicode flag by default. Previously, it disabled it by default. The flag can be disabled in the pattern with (?-u).
  • The definition of the Replacer trait was completely re-worked. Namely, its API inverts control of allocation so that the caller must provide a String to write to. Previous implementors will need to examine the new API. Moving to the new API should be straight-forward.
  • The is_empty method on Captures was removed since it always returns false (because every Captures has at least one capture group corresponding to the entire match).
  • The PartialEq and Eq impls on Regex were removed. If you need this functionality, add a newtype around Regex and write the corresponding PartialEq and Eq impls.
  • The lifetime parameters for the iter and iter_named methods on Captures were fixed. The corresponding iterator types, SubCaptures and SubCapturesNamed, grew an additional lifetime parameter.
  • The constructor, Regex::with_size_limit, was removed. It can be replaced with use of RegexBuilder.
  • The is_match free function was removed. Instead, compile a Regex explicitly and call the is_match method.
  • Many iterator types were renamed. (e.g., RegexSplits to SplitsIter.)
  • Replacements now return a Cow<str> instead of a String. Namely, the subject text doesn’t need to be copied if there are no replacements. Callers may need to add into_owned() calls to convert the Cow<str> to a proper String.
  • The Error type no longer has the InvalidSet variant, since the error is no longer possible. Its Syntax variant was also modified to wrap a String instead of a regex_syntax::Error. If you need access to specific parse error information, use the regex-syntax crate directly.
  • To allow future growth, some character classes may no longer compile to make room for possibly adding class set notation in the future.
  • The RegexBuilder type now takes an &mut self on most methods instead of self. Additionally, the final build step now uses build() instead of compile().

Summary

Let’s default lifetimes in static and const declarations to 'static.

Motivation

Currently, having references in static and const declarations is cumbersome due to having to explicitly write &'static ... Also the long lifetime name causes substantial rightwards drift, which makes it hard to format the code to be visually appealing.

For example, having a 'static default for lifetimes would turn this:

static my_awesome_tables: &'static [&'static HashMap<Cow<'static, str>, u32>] = ..

into this:

static my_awesome_tables: &[&HashMap<Cow<str>, u32>] = ..

The type declaration still causes some rightwards drift, but at least all the contained information is useful. There is one exception to the rule: lifetime elision for function signatures will work as it does now (see example below).

Detailed design

The same default that RFC #599 sets up for trait objects is to be used for static and const declarations. In those declarations, the compiler will assume 'static for every reference lifetime that is not explicitly given, including reference lifetimes obtained via generic substitution.

Note that this RFC does not forbid writing the lifetimes; it only sets a default when none is given. Thus the change will not cause any breakage and is therefore backwards-compatible. It’s also very unlikely that implementing this RFC will restrict our design space for static and const definitions down the road.

The 'static default does not override lifetime elision in function signatures, but works alongside it:

static foo: fn(&u32) -> &u32 = ...;  // for<'a> fn(&'a u32) -> &'a u32
static bar: &Fn(&u32) -> &u32 = ...; // &'static for<'a> Fn(&'a u32) -> &'a u32

With generics, it will work as anywhere else, also differentiating between function lifetimes and reference lifetimes. Notably, writing out the lifetime is still possible.

trait SomeObject<'a> { .. }
static foo: &SomeObject = ...; // &'static SomeObject<'static>
static bar: &for<'a> SomeObject<'a> = ...; // &'static for<'a> SomeObject<'a>
static baz: &'static [u8] = ...;

struct SomeStruct<'a, 'b> {
    foo: &'a Foo,
    bar: &'b Bar,
    f: for<'c> Fn(&'c Foo) -> &'c Bar
}

static blub: &SomeStruct = ...; // &'static SomeStruct<'static, 'static>

It will still be an error to omit lifetimes in function types not eligible for elision, e.g.

static blobb: FnMut(&Foo, &Bar) -> &Baz = ...; //~ ERROR: missing lifetimes for
                                               //^ &Foo, &Bar, &Baz

This ensures that the really hairy cases that need the full type documented aren’t unduly abbreviated.

It should also be noted that since statics and constants have no self type, elision will only work with distinct input lifetimes or one input+output lifetime.
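
As an illustrative sketch (the item names here are invented, not taken from this RFC), these are the kinds of function types in statics that would and would not elide:

static ok_one: fn(&u32) -> &u32 = ...;          // one input + output: for<'a> fn(&'a u32) -> &'a u32
static ok_many: fn(&u32, &u32) = ...;           // distinct input lifetimes, no output: fine
static ambiguous: fn(&u32, &u32) -> &u32 = ...; //~ ERROR: output lifetime cannot be elided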

Drawbacks

There are no known drawbacks to this change.

Alternatives

  • Leave everything as it is. Everyone using static references is annoyed by having to add 'static without any value to readability. People will resort to writing macros if they have many resources.
  • Write the aforementioned macro. This is inferior in terms of UX. Depending on the implementation it may or may not be possible to default lifetimes in generics.
  • Make all non-elided lifetimes 'static. This has the drawback of creating hard-to-spot errors (that would also probably occur in the wrong place) and confusing users.
  • Make all non-declared lifetimes 'static. This would not be backwards compatible due to interference with lifetime elision.
  • Infer types for statics. The absence of types makes it harder to reason about the code, so even if type inference for statics were to be implemented, defaulting lifetimes would have the benefit of pulling the cost-benefit relation in the direction of more explicit code. Thus it is advisable to implement this change even with the possibility of implementing type inference later.

Unresolved questions

  • Are there third party Rust-code handling programs that need to be updated to deal with this change?

Summary

(This is a result of discussion of issue #961 and related to RFCs 352 and 955.)

Let a loop { ... } expression return a value via break my_value;.

Motivation

Rust is an expression-oriented language. Currently loop constructs don’t provide any useful value as expressions; they are run only for their side-effects. But there clearly is a “natural-looking”, practical case, described in this thread and this RFC, where loop expressions could have meaningful values. I feel that not allowing that case runs against the expression-oriented conciseness of Rust. (comment by golddranks)

Some examples which can be much more concisely written with this RFC:

// without loop-break-value:
let x = {
    let temp_bar;
    loop {
        ...
        if ... {
            temp_bar = bar;
            break;
        }
    }
    foo(temp_bar)
};

// with loop-break-value:
let x = foo(loop {
        ...
        if ... { break bar; }
    });

// without loop-break-value:
let computation = {
    let result;
    loop {
        if let Some(r) = self.do_something() {
            result = r;
            break;
        }
    }
    result.do_computation()
};
self.use(computation);

// with loop-break-value:
let computation = loop {
        if let Some(r) = self.do_something() {
            break r;
        }
    }.do_computation();
self.use(computation);

Detailed design

This proposal does two things: let break take a value, and let loop have a result type other than ().

Break Syntax

Four forms of break will be supported:

  1. break;
  2. break 'label;
  3. break EXPR;
  4. break 'label EXPR;

where 'label is the name of a loop and EXPR is an expression. break and break 'label become equivalent to break () and break 'label () respectively.

Result type of loop

Currently the result type of a ‘loop’ without ‘break’ is ! (never returns), which may be coerced to any type. The result type of a ‘loop’ with a ‘break’ is (). This is important since a loop may appear as the last expression of a function:

fn f() {
    loop {
        do_something();
        // never breaks
    }
}
fn g() -> () {
    loop {
        do_something();
        if Q() { break; }
    }
}
fn h() -> ! {
    loop {
        do_something();
        // this loop must diverge for the function to typecheck
    }
}

This proposal allows a loop expression to be of any type T, following the same typing and inference rules that are applicable to other expressions in the language. The type of EXPR in every break EXPR and break 'label EXPR must be coercible to the type of the loop it appears in.

It is an error if these types do not agree or if the compiler’s type deduction rules do not yield a concrete type.

Examples of errors:

// error: loop type must be () and must be i32
let a: i32 = loop { break; };
// error: loop type must be i32 and must be &str
let b: i32 = loop { break "I am not an integer."; };
// error: loop type must be Option<_> and must be &str
let c = loop {
    if Q() {
        break "answer";
    } else {
        break None;
    }
};
fn z() -> ! {
    // function does not return
    // error: loop may break (same behaviour as before)
    loop {
        if Q() { break; }
    }
}

Example showing the equivalence of break; and break ();:

fn y() -> () {
    loop {
        if coin_flip() {
            break;
        } else {
            break ();
        }
    }
}

Coercion examples:

// ! coerces to any type
loop {}: ();
loop {}: u32;
loop {
    break (loop {}: !);
}: u32;
loop {
    // ...
    break 42;
    // ...
    break panic!();
}: u32;

// break EXPRs are not of the same type, but both coerce to `&[u8]`.
let x = [0; 32];
let y = [0; 48];
loop {
    // ...
    break &x;
    // ...
    break &y;
}: &[u8];

Result value

A loop only yields a value if broken via some form of break ...; statement, in which case it yields the value resulting from the evaluation of the statement’s expression (EXPR above), or () if there is no EXPR.

Examples:

assert_eq!(loop { break; }, ());
assert_eq!(loop { break 5; }, 5);
let x = 'a: loop {
    'b: loop {
        break 'a 1;
    }
    break 'a 2;
};
assert_eq!(x, 1);
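
As a further usage sketch (not from the RFC text), the proposed semantics collapse the common “retry until a value is produced” idiom into a single expression:

fn main() {
    let mut attempts = 0;
    let value = loop {
        attempts += 1;
        if attempts == 3 {
            break attempts * 10; // the whole loop evaluates to this value
        }
    };
    assert_eq!(value, 30);
}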

Drawbacks

The proposal changes the syntax of break statements, requiring updates to parsers and possibly syntax highlighters.

Alternatives

No alternatives to the design have been suggested. It has been suggested that the feature itself is unnecessary, and indeed much Rust code already exists without it; however, the pattern solves some cases which are difficult to handle otherwise and allows more flexibility in code layout.

Unresolved questions

Extension to for, while, while let

A frequently discussed issue is the extension of this concept to allow for, while and while let expressions to return values in a similar way. There is, however, a complication: these expressions may also terminate “naturally” (not via break), and no consensus has been reached on how the result value should be determined in this case, or even on the result type.

There are three options:

  1. Do not adjust for, while or while let at this time
  2. Adjust these control structures to return an Option<T>, returning None in the default case
  3. Specify the default return value via some extra syntax

Via Option<T>

Unfortunately, option (2) is not possible to implement cleanly without breaking a lot of existing code: many functions use one of these control structures in tail position, where the current “value” of the expression, (), is implicitly used:

// function returns `()`
fn print_my_values(v: &Vec<i32>) {
    for x in v {
        println!("Value: {}", x);
    }
    // loop exits with `()` which is implicitly "returned" from the function
}

Two variations of option (2) are possible:

  • Only adjust the control structures where they contain a break EXPR; or break 'label EXPR; statement. This may work but would necessitate that break; and break (); mean different things.
  • As a special case, make break (); return () instead of Some(()), while for other values break x; returns Some(x).

Via extra syntax for the default value

Several syntaxes have been proposed for how a control structure’s default value is set. For example:

fn first<T: Copy>(list: Iterator<T>) -> Option<T> {
    for x in list {
        break Some(x);
    } else default {
        None
    }
}

or:

let x = for thing in things default "nope" {
    if thing.valid() { break "found it!"; }
}

There are two things to bear in mind when considering new syntax:

  • It is undesirable to add a new keyword to the list of Rust’s keywords
  • It is strongly desirable that unbounded lookahead is not required while parsing Rust code

For more discussion on this topic, see issue #961.

  • Feature Name: document_all_features
  • Start Date: 2016-06-03
  • RFC PR: rust-lang/rfcs#1636
  • Rust Issue: https://github.com/rust-lang-nursery/reference/issues/9

Summary

One of the major goals of Rust’s development process is stability without stagnation. That means we add features regularly. However, it can be difficult to use those features if they are not publicly documented anywhere. Therefore, this RFC proposes requiring that all new language features and public standard library items must be documented before landing on the stable release branch (item documentation for the standard library; in the language reference for language features).

Outline

  • Summary
    • Outline
  • Motivation
    • The Current Situation
    • Precedent
  • Detailed design
    • New RFC section: “How do we teach this?”
    • New requirement to document changes before stabilizing
      • Language features
        • Reference
          • The state of the reference
        • The Rust Programming Language
      • Standard library
  • How do we teach this?
  • Drawbacks
  • Alternatives
  • Unresolved questions

Motivation

At present, new language features are often documented only in the RFCs which propose them and the associated announcement blog posts. Moreover, as features change, the existing official language documentation (the Rust Book, Rust by Example, and the language reference) can increasingly grow outdated.

Although the Rust Book and Rust by Example are kept relatively up to date, the reference is not:

While Rust does not have a specification, the reference tries to describe its working in detail. It tends to be out of date. (emphasis mine)

Importantly, though, this warning only appears on the main site, not in the reference itself. If someone searches for e.g. the deprecated attribute (discussed below) and does find the reference’s discussion of it, they will have no reason to believe that the reference is wrong.

For example, the change in Rust 1.9 to allow users to use the #[deprecated] attribute for their own libraries was, at the time of writing this RFC, nowhere reflected in official documentation. (Many other examples could be supplied; this one was chosen for its relative simplicity and recency.) The Book’s discussion of attributes linked to the reference list of attributes, but as of the time of writing the reference still specifies that deprecated was a compiler-only feature. The two places where users might have become aware of the change are the Rust 1.9 release blog post and the RFC itself. Neither (yet) ranked highly in search; users were likely to be misled.

Changing this to require all language features to be documented before stabilization would mean Rust users can use the language documentation with high confidence that it will provide exhaustive coverage of all stable Rust features.

Although the standard library is in excellent shape regarding documentation, including it in this policy will help guarantee that it remains so going forward.

The Current Situation

Today, the canonical source of information about new language features is the RFCs which define them. The Rust Reference is substantially out of date, and not all new features have made their way into The Rust Programming Language.

There are several serious problems with the status quo of using RFCs as ad hoc documentation:

  1. Many users of Rust may simply not know that these RFCs exist. The number of users who do not know (or especially care) about the RFC process or its history will only increase as Rust becomes more popular.

  2. In many cases, especially in more complicated language features, some important elements of the decision, details of implementation, and expected behavior are fleshed out either in the pull-request discussion for the RFC, or in the implementation issues which follow them.

  3. The RFCs themselves, and even more so the associated pull request discussions, are often dense with programming language theory. This is as it should be in context, but it means that the relevant information may be inaccessible to Rust users without prior PLT background, or without the patience to wade through it.

  4. Similarly, information about the final decisions on language features is often buried deep at the end of long and winding threads (especially for a complicated feature like impl specialization).

  5. Information on how the features will be used is often closely coupled to information on how the features will be implemented, both in the RFCs and in the discussion threads. Again, this is as it should be, but it makes it difficult (at best!) for ordinary Rust users to read.

In short, RFCs are a poor source of information about language features for the ordinary Rust user. Rust users should not need to be troubled with details of how the language is implemented simply to learn how pieces of it work. Nor should they need to dig through tens (much less hundreds) of comments to determine what the final form of the feature is.

However, there is currently no other documentation at all for many newer features. This is a significant barrier to adoption of the language, and equally of adoption of new features which will improve the ergonomics of the language.

Precedent

This exact idea has been adopted by the Ember community after their somewhat bumpy transitions at the end of their 1.x cycle and leading into their 2.x transition. As one commenter there put it:

The fact that 1.13 was released without updated guides is really discouraging to me as an Ember adopter. It may be much faster, the features may be much cooler, but to me, they don’t exist unless I can learn how to use them from documentation. Documentation IS feature work. (@davidgoli)

The Ember core team agreed, and embraced the principle outlined in this comment:

No version shall be released until guides and versioned API documentation is ready. This will allow newcomers the ability to understand the latest release. (@guarav0)

One of the main reasons not to adopt this approach, that it might block features from landing as soon as they otherwise might, was addressed in that discussion as well:

Now if this documentation effort holds up the releases people are going to grumble. But so be it. The challenge will be to effectively parcel out the effort and relieve the core team to do what they do best. No single person should be a gate. But lack of good documentation should gate releases. That way a lot of eyes are forced to focus on the problem. We can’t get the great new toys unless everybody can enjoy the toys. (@eccegordo)

The basic decision has led to a substantial improvement in the currency of the documentation (which is now updated the same day as a new version is released). Moreover, it has spurred ongoing development of better tooling around documentation to manage these releases. Finally, at least in the RFC author’s estimation, it has also led to a substantial increase in the overall quality of that documentation, possibly as a consequence of increasing the community involvement in the documentation process (including the formation of a documentation subteam).

Detailed design

The basic process of developing new language features will remain largely the same as today. The required changes are two additions:

  • a new section in the RFC, “How do we teach this?” modeled on Ember’s updated RFC process

  • a new requirement that the changes themselves be properly documented before being merged to stable

New RFC section: “How do we teach this?”

Following the example of Ember.js, we must add a new section to the RFC, just after Detailed design, titled How do we teach this? The section should explain what changes need to be made to documentation, and if the feature substantially changes what would be considered the “best” way to solve a problem or is a fairly mainstream issue, discuss how it might be incorporated into The Rust Programming Language and/or Rust by Example.

Here is the Ember RFC section, with appropriate substitutions and modifications:

How We Teach This

What names and terminology work best for these concepts and why? How is this idea best presented? As a continuation of existing Rust patterns, or as a wholly new one?

Would the acceptance of this proposal change how Rust is taught to new users at any level? What additions or changes to the Rust Reference, The Rust Programming Language, and/or Rust by Example does it entail?

How should this feature be introduced and taught to existing Rust users?

For a great example of this in practice, see the (currently open) Ember RFC: Module Unification, which includes several sections discussing conventions, tooling, concepts, and impacts on testing.

New requirement to document changes before stabilizing

Prior to stabilizing a feature, the feature will now be documented as follows:

  • Language features:
    • must be documented in the Rust Reference.
    • should be documented in The Rust Programming Language.
    • may be documented in Rust by Example.
  • Standard library additions must include documentation in std API docs.
  • Both language features and standard library changes must include:
    • a single line for the changelog
    • a longer summary for the long-form release announcement.

Stabilization of a feature must not proceed until the requirements outlined in the How We Teach This section of the originating RFC have been fulfilled.

Language features

We will document all language features in the Rust Reference, as well as updating The Rust Programming Language and Rust by Example as appropriate. (Not all features or changes will require updates to the books.)

Reference

This will necessarily be a manual process, involving updates to the reference.md file. (It may at some point be sensible to break up the Reference file for easier maintenance; that is left aside as orthogonal to this discussion.)

Feature documentation does not need to be written by the feature author. In fact, this is one of the areas where the community may be most able to support the language/compiler developers even if not themselves programming language theorists or compiler hackers. This may free up the compiler developers’ time. It will also help communicate the features in a way that is accessible to ordinary Rust users.

New features do not need to be documented to be merged into master/nightly.

Instead, the documentation process should immediately precede the move to stabilize. Once the feature has been deemed ready for stabilization, either the author or a community volunteer should write the reference material for the feature, to be incorporated into the Rust Reference.

The reference material need not be especially long, but it should be long enough for ordinary users to learn how to use the language feature without reading the RFCs.

Discussion of stabilizing a feature in a given release will now include the status of the reference material.

The current state of the reference

Since the reference is fairly out of date, we should create a “strike team” to update it. This can proceed in parallel with the documentation of new features.

Updating the reference should proceed stepwise:

  1. Begin by adding an appendix in the reference with links to all accepted RFCs which have been implemented but are not yet referenced in the documentation.
  2. As the reference material is written for each of those RFC features, remove it from that appendix.

The current presentation of the reference is also in need of improvement: a single web page with all of this content is difficult to navigate, or to update. Therefore, the strike team may also take this opportunity to reorganize the reference and update its presentation.

The Rust Programming Language

Most new language features should be added to The Rust Programming Language. However, since the book is planned to go to print, the main text of the book is expected to be fixed between major revisions. As such, new features should be documented in an online appendix to the book, which may be titled e.g. “Newest Features.”

The published version of the book should note that changes and language features made available after the book went to print will be documented in that online appendix.

Standard library

In the case of the standard library, this could conceivably be managed by setting the #[forbid(missing_docs)] attribute on the library roots. In lieu of that, manual code review and general discipline should continue to serve. However, if automated tools can be employed here, they should.
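
As a hedged sketch of what such automated enforcement could look like at a crate root (the missing_docs lint exists today; whether to rely on it this way is exactly the open question above):

#![forbid(missing_docs)]

//! Crate-level documentation is itself required once `missing_docs` is enforced.

/// Every public item must carry a doc comment, or the build fails.
pub fn documented_function() {}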

How do we teach this?

Since this RFC promotes including this section, it includes it itself. (RFCs, unlike Rust struct or enum types, may be freely self-referential. No boxing required.)

To be most effective, this will involve some changes both at a process and core-team level, and at a community level.

  1. The RFC template must be updated to include the new section for teaching.
  2. The RFC process in the RFCs README must be updated, specifically by including “fail to include a plan for documenting the feature” in the list of possible problems in “Submit a pull request step” in What the process is.
  3. Make documentation and teachability of new features equally high priority with the features themselves, and communicate this clearly in discussion of the features. (Much of the community is already very good about including this in considerations of language design; this simply makes this an explicit goal of discussions around RFCs.)

This is also an opportunity to allow/enable community members with less experience to contribute more actively to The Rust Programming Language, Rust by Example, and the Rust Reference.

  1. We should write issues for feature documentation, and may flag them as approachable entry points for new users.

  2. We may use the more complicated language reference issues as points for mentoring developers interested in contributing to the compiler. Helping document a complex language feature may be a useful on-ramp for working on the compiler itself.

At a “messaging” level, we should continue to emphasize that documentation is just as valuable as code. For example (and there are many other similar opportunities): in addition to highlighting new language features in the release notes for each version, we might highlight any part of the documentation which saw substantial improvement in the release.

Drawbacks

  1. The largest drawback at present is that the language reference is already quite out of date. It may take substantial work to get it up to date so that new changes can be landed appropriately. (Arguably, however, this should be done regardless, since the language reference is an important part of the language ecosystem.)

  2. Another potential issue is that some sections of the reference are particularly thorny and must be handled with considerable care (e.g. lifetimes). Although in general it would not be necessary for the author of the new language feature to write all the documentation, considerable extra care and oversight would need to be in place for these sections.

  3. This may delay landing features on stable. However, all the points raised in Precedent on this apply, especially:

    We can’t get the great new toys unless everybody can enjoy the toys. (@eccegordo)

    For Rust to attain its goal of stability without stagnation, its documentation must also be stable and not stagnant.

  4. If the forthcoming docs team is unable to provide significant support, and perhaps equally if the rest of the community does not also increase involvement, this will simply not work. No individual can manage all of these docs alone.

Alternatives

  • Just add the “How do we teach this?” section.

    Of all the alternatives, this is the easiest (and probably the best). It does not substantially change the state with regard to the documentation, and even having the section in the RFC does not mean that it will end up added to the docs, as evidenced by the #[deprecated] RFC, which included as part of its text:

    The language reference will be extended to describe this feature as outlined in this RFC. Authors shall be advised to leave their users enough time to react before removing a deprecated item.

    This is not a small downside by any stretch—but adding the section to the RFC will still have all the secondary benefits noted above, and it probably at least somewhat increases the likelihood that new features do get documented.

  • Embrace the documentation, but do not include “How do we teach this?” section in new RFCs.

    This still gives us most of the benefits (and was in fact the original form of the proposal), and does not place a new burden on RFC authors to make sure that knowing how to teach something is part of any new language or standard library feature.

    On the other hand, thinking about the impact on teaching should further improve consideration of the general ergonomics of a proposed feature. If something cannot be taught well, it’s likely the design needs further refinement.

  • No change; leave RFCs as canonical documentation.

    This approach can take (at least) two forms:

    1. We can leave things as they are, where the RFC and surrounding discussion form the primary point of documentation for newer-than-1.0 language features. As part of that, we could just link more prominently to the RFC repository and describe the process from the documentation pages.
    2. We could automatically render the text of the RFCs into part of the documentation used on the site (via submodules and the existing tooling around Markdown documents used for Rust documentation).

    However, for all the reasons highlighted above in Motivation: The Current Situation, RFCs and their associated threads are not a good canonical source of information on language features.

  • Add a rule for the standard library but not for language features.

    This would basically just turn the status quo into an official policy. It has all the same drawbacks as no change at all, but with the possible benefit of enabling automated checks on standard library documentation.

  • Add a rule for language features but not for the standard library.

    The standard library is in much better shape, in no small part because of the ease of writing inline documentation for new modules. Adding a formal rule may not be necessary if good habits are already in place.

    On the other hand, having a formal policy would not seem to hurt anything here; it would simply formalize what is already happening (and perhaps, via linting attributes, make it easy to spot when it has failed).

  • Eliminate the reference entirely.

    Since the reference is already substantially out of date, it might make sense to stop presenting it publicly at all, at least until such a time as it has been completely reworked and updated.

    The main upside to this is the reality that an outdated and inaccurate reference may be worse than no reference at all, as it may mislead especially new Rust users.

    The main downside, of course, is that this would leave very large swaths of the language basically without any documentation, and even more of it only documented in RFCs than is the case today.

Unresolved questions

  • How do we clearly distinguish between features on nightly, beta, and stable Rust—in the reference especially, but also in the book?
  • For the standard library, once it migrates to a crates structure, should it simply include the #[forbid(missing_docs)] attribute on all crates to set this as a build error?

Summary

This RFC adds the checked_* methods already known from primitives like usize to Duration.

Motivation

Generally this helps when subtracting Durations, which is quite a common case.

One abstract example would be executing a specific piece of code repeatedly after a constant amount of time.

Specific examples would be a network service or a rendering process emitting a constant amount of frames per second.

Example code would be as follows:


// This function is called repeatedly
fn render() {
    // 10ms delay results in 100 frames per second
    let wait_time = Duration::from_millis(10);

    // `Instant` for elapsed time
    let start = Instant::now();

    // execute code here
    render_and_output_frame();

    // there are no negative `Duration`s, so this sleeps only for the
    // remaining time, and not at all if rendering already took longer
    // than the defined `wait_time`
    if let Some(remaining) = wait_time.checked_sub(start.elapsed()) {
        std::thread::sleep(remaining);
    }
}

Of course it is also desirable not to introduce panic!()s when adding Durations.

Detailed design

The detailed design would be exactly like the current sub() method, just returning an Option<Duration> and propagating possible None values from the underlying primitive types:

impl Duration {
    fn checked_sub(self, rhs: Duration) -> Option<Duration> {
        if let Some(mut secs) = self.secs.checked_sub(rhs.secs) {
            let nanos = if self.nanos >= rhs.nanos {
                self.nanos - rhs.nanos
            } else {
                // borrow one second for the nanosecond subtraction
                if let Some(sub_secs) = secs.checked_sub(1) {
                    secs = sub_secs;
                    self.nanos + NANOS_PER_SEC - rhs.nanos
                }
                else {
                    return None;
                }
            };
            debug_assert!(nanos < NANOS_PER_SEC);
            Some(Duration { secs: secs, nanos: nanos })
        }
        else {
            None
        }
    }
}

The same applies to all other added methods, namely (see the usage sketch after this list):

  • checked_add()
  • checked_sub()
  • checked_mul()
  • checked_div()
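
As a usage sketch of the intended behaviour of the proposed methods (assuming they are implemented as described above):

use std::time::Duration;

fn main() {
    let a = Duration::new(1, 500_000_000); // 1.5s
    let b = Duration::new(0, 700_000_000); // 0.7s

    // Subtraction borrows a second for the nanosecond field when needed.
    assert_eq!(a.checked_sub(b), Some(Duration::new(0, 800_000_000)));

    // Underflow and overflow yield None instead of panicking.
    assert_eq!(b.checked_sub(a), None);
    assert_eq!(Duration::new(u64::MAX, 0).checked_add(Duration::new(1, 0)), None);

    // Multiplication and division take an integer operand.
    assert_eq!(Duration::new(u64::MAX, 0).checked_mul(2), None);
    assert_eq!(a.checked_div(2), Some(Duration::new(0, 750_000_000)));
}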

Drawbacks

None.

Alternatives

The alternative is simply not doing this and forcing the programmer to write the check on their own. This is not what you want.

Unresolved questions

None.

Summary

Incorporate a strike team dedicated to preparing rules and guidelines for writing unsafe code in Rust (commonly referred to as Rust’s “memory model”), in cooperation with the lang team. The discussion will generally proceed in phases, starting with establishing high-level principles and gradually getting down to the nitty gritty details (though some back and forth is expected). The strike team will produce various intermediate documents that will be submitted as normal RFCs.

Motivation

Rust’s safe type system offers very strong aliasing information that promises to be a rich source of compiler optimization. For example, in safe code, the compiler can infer that if a function takes two &mut T parameters, those two parameters must reference disjoint areas of memory (this allows optimizations similar to C99’s restrict keyword, except that it is both automatic and fully enforced). The compiler also knows that given a shared reference type &T, the referent is immutable, except for data contained in an UnsafeCell.

Unfortunately, there is a fly in the ointment. Unsafe code can easily be made to violate these sorts of rules. For example, using unsafe code, it is trivial to create two &mut references that both refer to the same memory (and which are simultaneously usable). In that case, if the unsafe code were to (say) return those two references to safe code, that would undermine Rust’s safety guarantees – hence it’s clear that this code would be “incorrect”.

But things become more subtle when we just consider what happens within the abstraction. For example, is unsafe code allowed to use two overlapping &mut references internally, without ever returning them to the wild? Is it all right to overlap with *mut? And so forth.
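
For concreteness, here is a minimal sketch (not taken from this RFC) of the kind of internal-only aliasing in question:

fn main() {
    let mut x = 0u32;
    let p = &mut x as *mut u32;
    unsafe {
        let a: &mut u32 = &mut *p;
        let b: &mut u32 = &mut *p; // aliases `a`: exactly what safe Rust forbids
        *a += 1;
        *b += 1;
    }
    // Whether an unsafe abstraction may ever do this internally is one of
    // the questions the guidelines must answer.
}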

It is the contention of this RFC that complete guidelines for unsafe code are far too big a topic to be fruitfully addressed in a single RFC. Therefore, this RFC proposes the formation of a dedicated strike team (that is, a temporary, single-purpose team) that will work on hammering out the details over time. The precise membership of this team is not part of this RFC, but will be determined by the lang team as well as the strike team itself.

The unsafe guidelines work will proceed in rough stages, described below. An initial goal is to produce a high-level summary detailing the general approach of the guidelines. Ideally, this summary should be sufficient to help guide unsafe authors in best practices that are most likely to be forwards compatible. Further work will then expand on the model to produce a more detailed set of rules, which may in turn require revisiting the high-level summary if contradictions are uncovered.

This new “unsafe code” strike team is intended to work in collaboration with the existing lang team. Ultimately, whatever rules are crafted must be adopted with the general consensus of both the strike team and the lang team. It is expected that lang team members will be more involved in the early discussions that govern the overall direction and less involved in the fine details.

History and recent discussions

The history of optimizing C can be instructive. All code in C is effectively unsafe, and so in order to perform optimizations, compilers have come to lean heavily on the notion of “undefined behavior” as well as various ad-hoc rules about what programs ought not to do (see e.g. these three posts entitled “What Every C Programmer Should Know About Undefined Behavior”, by Chris Lattner). This can cause some very surprising behavior (see e.g. “What Every Compiler Author Should Know About Programmers” or this blog post by John Regehr, which is quite humorous). Note that Rust has a big advantage over C here, in that only the authors of unsafe code should need to worry about these rules.

In terms of Rust itself, there has been a large amount of discussion over the years. Here is a (non-comprehensive) set of relevant links, with a strong bias towards recent discussion:

Other factors

Another factor that must be considered is the interaction with weak memory models. Most of the links above focus purely on sequential code; Rust has more-or-less adopted the C++ memory model for governing interactions across threads. But there may well be subtle cases that arise as we delve deeper. For more on the C++ memory model, see Hans Boehm’s excellent webpage.

Detailed design

Scope

Here are some of the issues that should be resolved as part of these unsafe code guidelines. The following list is not intended as comprehensive (suggestions for additions welcome):

  • Legal aliasing rules and patterns of memory accesses
    • e.g., which of the patterns listed in rust-lang/rust#19733 are legal?
    • can unsafe code create (but not use) overlapping &mut?
    • under what conditions is it legal to dereference a *mut T?
    • when can an &mut T legally alias a *mut T?
  • Struct layout guarantees
  • Interactions around zero-sized types
    • e.g., what pointer values can legally be considered a Box<ZST>?
  • Allocator dependencies

One specific area that we can hopefully “outsource” is detailed rules regarding the interaction of different threads. Rust exposes atomics that roughly correspond to C++11 atomics, and the intention is that we can layer our rules for sequential execution atop those rules for parallel execution.
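
For reference, a brief sketch of the existing atomics whose C++11-style orderings those sequential rules would sit on top of:

use std::sync::atomic::{AtomicUsize, Ordering};

static COUNTER: AtomicUsize = AtomicUsize::new(0);

fn bump() -> usize {
    // The orderings (Relaxed, Acquire, Release, SeqCst) correspond to C++11.
    COUNTER.fetch_add(1, Ordering::SeqCst)
}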

Termination conditions

The unsafe code guidelines team is intended as a temporary strike team with the goal of producing the documents described below. Once the RFCs for those documents have been approved, responsibility for maintaining the documents falls to the lang team.

Time frame

Working out a set of rules for unsafe code is a detailed process and is expected to take months (or longer, depending on the level of detail we ultimately aim for). However, the intention is to publish preliminary documents as RFCs as we go, so we can hopefully provide ever more specific guidance for unsafe code authors.

Note that even once an initial set of guidelines is adopted, problems or inconsistencies may be found. If that happens, the guidelines will be adjusted as needed to correct the problem, naturally with an eye towards backwards compatibility. In other words, the unsafe guidelines, like the rules for Rust language itself, should be considered a “living document”.

As a note of caution, experience from other languages such as Java or C++ suggests that work on memory models can take years. Moreover, even once a memory model is adopted, it can be unclear whether common compiler optimizations are actually permitted under the model. The hope is that by focusing on sequential and Rust-specific issues we can sidestep some of these quandaries.

Intermediate documents

Because hammering out the finer points of the memory model may take some time, it is important to produce intermediate agreements. This section describes some of the documents that may be useful. These also serve as a rough guideline to the overall “phases” of discussion that are expected, though in practice discussion will likely go back and forth:

  • Key examples and optimizations: highlighting code examples that ought to work, or optimizations we should be able to do, as well as some that will not work, or those whose outcome is in doubt.
  • High-level design: describe the rules at a high-level. This would likely be the document that unsafe code authors would read to know if their code is correct in the majority of scenarios. Think of this as the “user’s guide”.
  • Detailed rules: More comprehensive rules. Think of this as the “reference manual”.

Note that both the “high-level design” and “detailed rules”, once considered complete, will be submitted as RFCs and undergo the usual final comment period.

Key examples and optimizations

Probably a good first step is to agree on some key examples and overall principles. Examples would fall into several categories:

  • Unsafe code that we feel must be considered legal by any model
  • Unsafe code that we feel must be considered illegal by any model
  • Unsafe code that we feel may or may not be considered legal
  • Optimizations that we must be able to perform
  • Optimizations that we should not expect to be able to perform
  • Optimizations that it would be nice to have, but which may be sacrificed if needed

Having such guiding examples naturally helps to steer the effort, but it also helps to provide guidance for unsafe code authors in the meantime. These examples illustrate patterns that one can adopt with reasonable confidence.

Deciding about these examples should also help in enumerating the guiding principles we would like to adhere to. The design of a memory model ultimately requires balancing several competing factors and it may be useful to state our expectations up front on how these will be weighed:

  • Optimization. The stricter the rules, the more we can optimize.
    • on the other hand, rules that are overly strict may prevent people from writing unsafe code that they would like to write, ultimately leading to slower execution.
  • Comprehensibility. It is important to strive for rules that end users can readily understand. If learning the rules requires diving into academic papers or using Coq, it’s a non-starter.
  • Effect on existing code. No matter what model we adopt, existing unsafe code may or may not comply. If we then proceed to optimize, this could cause running code to stop working. While RFC 1122 explicitly specified that the rules for unsafe code may change, we will have to decide where to draw the line in terms of how much weight to give backwards compatibility.

It is expected that the lang team will be highly involved in this discussion.

It is also expected that we will gather examples in the following ways:

  • survey existing unsafe code;
  • solicit suggestions of patterns from the Rust-using public:
    • scenarios where they would like an official judgement;
    • interesting questions involving the standard library.

High-level design

The next step is to settle on a high-level design. There have already been several approaches floated. This phase should build on the examples from before, in that proposals can be weighed against their effect on the examples and optimizations.

There will likely also be some feedback between this phase and the previous: as new proposals are considered, that may generate new examples that were not relevant previously.

Note that even once a high-level design is adopted, it will be considered “tentative” and “unstable” until the detailed rules have been worked out to a reasonable level of confidence.

Once a high-level design is adopted, it may also be used by the compiler team to inform which optimizations are legal or illegal. However, if changes are later made, the compiler will naturally have to be adjusted to match.

It is expected that the lang team will be highly involved in this discussion.

Detailed rules

Once we’ve settled on a high-level path – and, no doubt, while in the process of doing so as well – we can begin to enumerate more detailed rules. It is also expected that working out the rules may uncover contradictions or other problems that require revisiting the high-level design.

Lints and other checkers

Ideally, the team will also consider whether automated checking for conformance is possible. It is not a responsibility of this strike team to produce such automated checking, but automated checking is naturally a big plus!

Repository

In general, the memory model discussion will be centered on a specific repository (perhaps https://github.com/nikomatsakis/rust-memory-model, but perhaps moved to the rust-lang organization). This allows for multi-faceted discussion: for example, we can open issues on particular questions, as well as store the various proposals and litmus tests in their own directories. We’ll work out and document the procedures and conventions here as we go.

Drawbacks

The main drawback is that this discussion will require time and energy which could be spent elsewhere. The justification for spending time on developing the memory model instead is that it is crucial to enable the compiler to perform aggressive optimizations. Until now, we’ve limited ourselves by and large to conservative optimizations (though we do supply some LLVM aliasing hints that can be affected by unsafe code). As the transition to MIR comes to fruition, it is clear that we will be in a place to perform more aggressive optimization, and hence the need for rules and guidelines is becoming more acute. We can continue to adopt a conservative course, but this risks growing an ever larger body of code dependent on the compiler not performing aggressive optimization, which may close those doors forever.

Alternatives

  • Adopt a memory model in one fell swoop:
    • considered too complicated
  • Defer adopting a memory model for longer:
    • considered too risky

Unresolved questions

None.

Summary

This RFC proposes an update to error reporting in rustc. Its focus is to change the format of Rust error messages and improve --explain capabilities to focus on the user’s code. The end goal is for errors and explain text to be more readable and more friendly to new users, while still helping Rust coders fix bugs as quickly as possible. We expect to follow this RFC with a supplemental RFC that provides a writing style guide for error messages and explain text with a focus on readability and education.

Motivation

Default error format

Rust offers a unique value proposition in the landscape of languages in part by codifying concepts like ownership and borrowing. Because these concepts are unique to Rust, it’s critical that the learning curve be as smooth as possible. And one of the most important tools for lowering the learning curve is providing excellent errors that serve to make the concepts less intimidating, and to help ‘tell the story’ about what those concepts mean in the context of the programmer’s code.

[as text]

src/test/compile-fail/borrowck/borrowck-borrow-from-owned-ptr.rs:29:22: 29:30 error: cannot borrow `foo.bar1` as mutable more than once at a time [E0499]
src/test/compile-fail/borrowck/borrowck-borrow-from-owned-ptr.rs:29     let _bar2 = &mut foo.bar1;
                                                                                         ^~~~~~~~
src/test/compile-fail/borrowck/borrowck-borrow-from-owned-ptr.rs:29:22: 29:30 help: run `rustc --explain E0499` to see a detailed explanation
src/test/compile-fail/borrowck/borrowck-borrow-from-owned-ptr.rs:28:21: 28:29 note: previous borrow of `foo.bar1` occurs here; the mutable borrow prevents subsequent moves, borrows, or modification of `foo.bar1` until the borrow ends
src/test/compile-fail/borrowck/borrowck-borrow-from-owned-ptr.rs:28     let bar1 = &mut foo.bar1;
                                                                                        ^~~~~~~~
src/test/compile-fail/borrowck/borrowck-borrow-from-owned-ptr.rs:31:2: 31:2 note: previous borrow ends here
src/test/compile-fail/borrowck/borrowck-borrow-from-owned-ptr.rs:26 fn borrow_same_field_twice_mut_mut() {
src/test/compile-fail/borrowck/borrowck-borrow-from-owned-ptr.rs:27     let mut foo = make_foo();
src/test/compile-fail/borrowck/borrowck-borrow-from-owned-ptr.rs:28     let bar1 = &mut foo.bar1;
src/test/compile-fail/borrowck/borrowck-borrow-from-owned-ptr.rs:29     let _bar2 = &mut foo.bar1;
src/test/compile-fail/borrowck/borrowck-borrow-from-owned-ptr.rs:30     *bar1;
src/test/compile-fail/borrowck/borrowck-borrow-from-owned-ptr.rs:31 }
                                                                    ^

[as image]

Example of a borrow check error in the current compiler

Though a lot of time has been spent on the current error messages, they have a couple of flaws which make them difficult to use. Specifically, the current error format:

  • Repeats the file position on the left-hand side. This offers no additional information, but instead makes the error harder to read.
  • Prints messages about lines often out of order. This makes it difficult for the developer to glance at the error and recognize why the error is occurring.
  • Lacks a clear visual break between errors. As more errors occur it becomes more difficult to tell them apart.
  • Uses technical terminology that is difficult for new users who may be unfamiliar with compiler terminology or terminology specific to Rust.

This RFC details a redesign of errors to focus more on the source the programmer wrote. This format addresses the above concerns by eliminating clutter, following a more natural order for help messages, and pointing the user to both “what” the error is and “why” the error is occurring by using color-coded labels. Below you can see the same error again, this time using the proposed format:

[as text]

error[E0499]: cannot borrow `foo.bar1` as mutable more than once at a time
  --> src/test/compile-fail/borrowck/borrowck-borrow-from-owned-ptr.rs:29:22
   |
28 |      let bar1 = &mut foo.bar1;
   |                      -------- first mutable borrow occurs here
29 |      let _bar2 = &mut foo.bar1;
   |                       ^^^^^^^^ second mutable borrow occurs here
30 |      *bar1;
31 |  }
   |  - first borrow ends here

[as image]

Example of the same borrow check error in the proposed format

Expanded error format (revised --explain)

Languages like Elm have shown how effective an educational tool error messages can be when explanations like our --explain text are mixed with the user’s code. As mentioned earlier, it’s crucial for Rust to be easy to use, especially since it introduces a fair number of concepts that may be unfamiliar to the user. Even experienced users may need to use --explain text from time to time when they encounter unfamiliar messages.

While we have --explain text today, it uses generic examples that require the user to mentally translate the given example into what works for their specific situation.

You tried to move out of a value which was borrowed. Erroneous code example:

use std::cell::RefCell;

struct TheDarkKnight;

impl TheDarkKnight {
    fn nothing_is_true(self) {}
}
...

Example of the current --explain (showing E0507)

To help users, this RFC proposes a new --explain errors mode. This is a more textual error reporting mode that gives additional explanation to help users better understand compiler messages. The end result is a richer, on-demand error reporting style.

error: cannot move out of borrowed content
   --> /Users/jturner/Source/errors/borrowck-move-out-of-vec-tail.rs:30:17

I’m trying to track the ownership of the contents of `tail`, which is borrowed, through this match
statement:

29  |              match tail {

In this match, you use an expression of the form [...]. When you do this, it’s like you are opening
up the `tail` value and taking out its contents. Because `tail` is borrowed, you can’t safely move
the contents.

30  |                  [Foo { string: aa },
    |                                 ^^ cannot move out of borrowed content

You can avoid moving the contents out by working with each part using a reference rather than a
move. A naive fix might look like this:

30  |                  [Foo { string: ref aa },

Detailed design

The RFC is separated into two parts: the format of error messages and the format of expanded error messages (using --explain errors).

Format of error messages

The proposal is a lighter error format focused on the code the user wrote. Messages that help understand why an error occurred appear as labels on the source. The goals of this new format are to:

  • Create something that’s visually easy to parse
  • Remove noise/unnecessary information
  • Present information in a way that works well for new developers, post-onboarding, and experienced developers without special configuration
  • Draw inspiration from Elm as well as Dybuk and other systems that have already improved on the kind of errors that Rust has.

In order to accomplish this, the proposed design needs to satisfy a number of constraints to make the result maximally flexible across various terminals:

  • Multiple errors beside each other should be clearly separate and not muddled together.
  • Each error message should draw the eye to where the error occurs with sufficient context to understand why the error occurs.
  • Each error should have a “header” section that is visually distinct from the code section.
  • Code should visually stand out from text and other error messages. This allows the developer to immediately recognize their code.
  • Error messages should be just as readable when not using colors (e.g. for users of black-and-white terminals, color-impaired readers, weird color schemes that we can’t predict, or just people that turn colors off)
  • Be careful using “ascii art” and avoid unicode. Instead look for ways to show the information concisely that will work across the broadest number of terminals. We expect IDEs to possibly allow for a more graphical error in the future.
  • Where possible, use labels on the source itself rather than sentence “notes” at the end.
  • Keep filename:line easy to spot for people who use editors that let them click on errors.

Header

error[E0499]: cannot borrow `foo.bar1` as mutable more than once at a time
  --> src/test/compile-fail/borrowck/borrowck-borrow-from-owned-ptr.rs:29:22

The header still serves the original purpose of knowing: a) if it’s a warning or error, b) the text of the warning/error, and c) the location of this warning/error. We keep the error code, now a part of the error indicator, as a way to help improve search results.

Line number column

   |
28 |
   |
29 |
   |
30 |
31 |
   |

The line number column lets you know where the error is occurring in the file. Because we only show lines that are of interest for the given error/warning, we elide lines if they are not annotated as part of the message (we currently use the heuristic to elide after one un-annotated line).

Inspired by Dybuk and Elm, the line numbers are separated with a ‘wall’, a separator formed from pipe (‘|’) characters, to clearly distinguish what is a line number from what is source at a glance.

As the wall also forms a way to visually separate distinct errors, we propose extending this concept to also support span-less notes and hints. For example:

92 |         config.target_dir(&pkg)
   |                           ^^^^ expected `core::workspace::Workspace`, found `core::package::Package`
   = note: expected type `&core::workspace::Workspace<'_>`
   = note:    found type `&core::package::Package`

Source area

      let bar1 = &mut foo.bar1;
                      -------- first mutable borrow occurs here
      let _bar2 = &mut foo.bar1;
                       ^^^^^^^^ second mutable borrow occurs here
      *bar1;
  }
  - first borrow ends here

The source area shows the related source code for the error/warning. The source is laid out in the order it appears in the source file, giving the user a way to map the message against the source they wrote.

Key parts of the code are labeled with messages to help the user understand the message.

The primary label is the label associated with the main warning/error. It explains the what of the compiler message. By reading it, the user can begin to understand what the root cause of the error or warning is. This label is colored to match the level of the message (yellow for warning, red for error) and uses the ^^^ underline.

Secondary labels help to understand the error and use blue text and --- underline. These labels explain the why of the compiler message. You can see one such example in the above message, where the secondary labels explain that there is already another borrow going on. In another example, we see how primary and secondary labels work together to tell the whole story of why the error occurred.

Taken together, primary and secondary labels create a ‘flow’ to the message. Flow in the message lets the user glance at the colored labels and quickly form an educated guess as to how to correctly update their code.

Note: We’ll talk more about additional style guidance for wording to help create flow in the subsequent style RFC.

Expanded error messages

Currently, --explain text focuses on the error code. You invoke the compiler with --explain <error code> and receive a verbose description of what causes errors of that number. The resulting message can be helpful, but it uses generic sample code, which makes it feel less connected to the user’s code.

We propose adding a new --explain errors mode. By passing this to the compiler (or to cargo), the compiler will switch to an expanded error form which incorporates the same source and label information the user saw in the default message, with more explanation text.

error: cannot move out of borrowed content
   --> /Users/jturner/Source/errors/borrowck-move-out-of-vec-tail.rs:30:17

I’m trying to track the ownership of the contents of `tail`, which is borrowed, through this match
statement:

29  |              match tail {

In this match, you use an expression of the form [...]. When you do this, it’s like you are opening
up the `tail` value and taking out its contents. Because `tail` is borrowed, you can’t safely move
the contents.

30  |                  [Foo { string: aa },
    |                                 ^^ cannot move out of borrowed content

You can avoid moving the contents out by working with each part using a reference rather than a
move. A naive fix might look like this:

30  |                  [Foo { string: ref aa },

Example of an expanded error message

The expanded error message effectively becomes a template. The text of the template is the educational text that explains the message in more detail. The template is then populated using the source lines, labels, and spans from the same compiler message that’s printed in the default mode. This lets the message writer call out each label or span as appropriate in the expanded text.

It’s possible to also add additional labels that aren’t necessarily shown in the default error mode but would be available in the expanded error format. This gives the explain text writer maximal flexibility without impacting the readability of the default message. I’m currently prototyping an implementation of how this templating could work in practice.

Tying it together

Lastly, we propose that the final error message:

error: aborting due to 2 previous errors

Be changed to notify users of this ability:

note: compile failed due to 2 errors. You can compile again with `--explain errors` for more information

Drawbacks

Changes in the error format can impact integration with other tools. For example, IDEs that use a simple regex to detect the error would need to be updated to support the new format. This takes time and community coordination.

While the new error format has a lot of benefits, it’s possible that some errors will feel “shoehorned” into it and, even after careful selection of secondary labels, may still not read as well as the original format.

There is a fair amount of work involved to update the errors and explain text to the proposed format.

Alternatives

Rather than using the proposed error format, we could provide only the verbose --explain style that is proposed in this RFC. Respected programmers like John Carmack have praised the Elm error format.

Detected errors in 1 module.

-- TYPE MISMATCH ---------------------------------------------------------------
The right argument of (+) is causing a type mismatch.

25|       model + "1"
                  ^^^
(+) is expecting the right argument to be a:

    number

But the right argument is:

    String

Hint: To append strings in Elm, you need to use the (++) operator, not (+).
<http://package.elm-lang.org/packages/elm-lang/core/latest/Basics#++>

Hint: I always figure out the type of the left argument first and if it is acceptable on its own, I
assume it is "correct" in subsequent checks. So the problem may actually be in how the left and
right arguments interact.

Example of an Elm error

In developing this RFC, we experimented with both styles. The Elm error format is great as an educational tool, and we wanted to leverage its style in Rust. For day-to-day work, though, we favor an error format that puts heavy emphasis on quickly guiding the user to what the error is and why it occurred, with an easy way to get the richer explanations (using --explain) when the user wants them.

Stabilization

Currently, this new Rust error format is available on nightly by setting the RUST_NEW_ERROR_FORMAT=true environment variable. Ultimately, this should become the default. In order to get there, we need to ensure that the new error format is indeed an improvement over the existing format in practice.

We also have not yet implemented the extended error format. This format will also be gated by its own flag while we explore and stabilize it. Because of the relative difference in maturity here, the default error message will be behind a flag for a cycle before it becomes default. The extended error format will be implemented and a follow-up RFC will be posted describing its design. This will start its stabilization period, after which time it too will be enabled.

How do we measure the readability of error messages? This RFC details an educated guess as to what would improve the current state but shows no ways to measure success.

Likewise, while some of us have been dogfooding these errors, we don’t know what long-term use feels like. For example, after a time does the use of color feel excessive? We can always update the errors as we go, but it’d be helpful to catch it early if possible.

Unresolved questions

There are a few unresolved questions:

  • Editors that rely on pattern-matching the compiler output will need to be updated. It’s an open question how best to transition to using the new errors. There is on-going discussion of standardizing the JSON output, which could also be used.
  • Can additional error notes be shown without the “rainbow problem” where too many colors and too much boldness cause errors to become less readable?
  • Feature Name: allow_self_in_where_clauses
  • Start Date: 2016-06-13
  • RFC PR: #1647
  • Rust Issue: #38864

Summary

This RFC proposes allowing the Self type to be used in every position in trait implementations, including where clauses and other parameters to the trait being implemented.

Motivation

Self is a useful tool to have to reduce churn when the type changes for various reasons. One would expect to be able to write

impl SomeTrait for MySuperLongType<T, U, V, W, X> where
  Self: SomeOtherTrait,

but this will fail to compile today, forcing you to repeat the type, and adding one more place that has to change if the type ever changes.

By this same logic, we would also like to be able to reference associated types from the traits being implemented. When dealing with generic code, patterns like this often emerge:

trait MyTrait {
    type MyType: SomeBound;
}

impl<T, U, V> MyTrait for SomeStruct<T, U, V> where
    SomeOtherStruct<T, U, V>: SomeBound,
{
    type MyType = SomeOtherStruct<T, U, V>;
}

The only reason the associated type is repeated at all is to restate the bound on the associated type. It would be nice to reduce some of that duplication.
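Under this proposal, the bound could presumably be restated through Self instead, avoiding the repetition. A sketch of the same impl, reusing the hypothetical types above:

impl<T, U, V> MyTrait for SomeStruct<T, U, V> where
    Self::MyType: SomeBound,
{
    type MyType = SomeOtherStruct<T, U, V>;
}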

Detailed design

Instead of blocking Self from being used in the “header” of a trait impl, it will be understood to be a reference to the implementation type. For example, all of these would be valid:

impl SomeTrait for SomeType where Self: SomeOtherTrait { }

impl SomeTrait<Self> for SomeType { }

impl SomeTrait for SomeType where SomeOtherType<Self>: SomeTrait { }

impl SomeTrait for SomeType where Self::AssocType: SomeOtherTrait {
    type AssocType = SomeOtherType;
}

If the Self type is parameterized by Self, an error is thrown indicating that the type definition is recursive, rather than Self simply failing to resolve.

// The error here is because this would be Vec<Vec<Self>>, Vec<Vec<Vec<Self>>>, ...
impl SomeTrait for Vec<Self> { }

Drawbacks

Self is always less explicit than the alternative.

Alternatives

Not implementing this is an alternative, as is accepting Self only in where clauses and not other positions in the impl header.

Unresolved questions

None

  • Feature Name: atomic_access

Summary

This RFC adds the following methods to atomic types:

impl AtomicT {
    fn get_mut(&mut self) -> &mut T;
    fn into_inner(self) -> T;
}

It also specifies that the layout of an AtomicT type is always the same as the underlying T type. So, for example, AtomicI32 is guaranteed to be transmutable to and from i32.

Motivation

get_mut and into_inner

These methods are useful for accessing the value inside an atomic object directly when there are no other threads accessing it. This is guaranteed by the mutable reference and the move, since it means there can be no other live references to the atomic.

A normal load/store is different from a load(Relaxed) or store(Relaxed) because it has much weaker synchronization guarantees, which means that the compiler can produce more efficient code. In particular, LLVM currently treats all atomic operations (even relaxed ones) as volatile operations, which means that it does not perform any optimizations on them. For example, it will not eliminate a load(Relaxed) even if the result of the load is not used anywhere.

get_mut in particular is expected to be useful in Drop implementations where you have a &mut self and need to read the value of an atomic. into_inner somewhat overlaps in functionality with get_mut, but it is included to allow extracting the value without requiring the atomic object to be mutable. These methods mirror Mutex::get_mut and Mutex::into_inner.
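For example, a Drop implementation can read a final value without any atomic instructions. A minimal sketch (the Counter type here is hypothetical):

use std::sync::atomic::AtomicUsize;

struct Counter {
    count: AtomicUsize,
}

impl Drop for Counter {
    fn drop(&mut self) {
        // `&mut self` guarantees no other thread holds a reference to
        // the atomic, so the value can be read directly.
        let final_count = *self.count.get_mut();
        println!("final count: {}", final_count);
    }
}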

Atomic type layout

The layout guarantee is mainly intended to be used for FFI, where a variable of a non-atomic type needs to be modified atomically. The most common example of this is the Linux futex system call which takes an int* parameter pointing to an integer that is atomically modified by both userspace and the kernel.

Rust code invoking the futex system call so far has simply passed the address of the atomic object directly to the system call. However this makes the assumption that the atomic type has the same layout as the underlying integer type, which is not currently guaranteed by the documentation.

This also allows the reverse operation by casting a pointer: it allows Rust code to atomically modify a value that was not declared as an atomic type. This is useful when dealing with FFI structs that are shared with a thread managed by a C library. Another example would be atomically modifying a value in a memory-mapped file that is shared with another process.
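As a sketch of this reverse direction (the helper below is hypothetical, and relies on the layout guarantee proposed here):

use std::sync::atomic::{AtomicI32, Ordering};

// Atomically increment a plain `i32` owned by foreign code. This is only
// sound because `AtomicI32` has the same layout as `i32`.
unsafe fn atomic_increment(ptr: *mut i32) {
    let atomic = &*(ptr as *const AtomicI32);
    atomic.fetch_add(1, Ordering::SeqCst);
}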

Detailed design

The actual implementations of these functions are mostly trivial since they are based on UnsafeCell::get.

The existing implementations of atomic types already have the same layout as the underlying types (even AtomicBool and bool), so no change is needed here apart from the documentation.

Drawbacks

The functionality of into_inner somewhat overlaps with get_mut.

We lose the ability to change the layout of atomic types, but this shouldn’t be necessary since these types map directly to hardware primitives.

Alternatives

The functionality of get_mut and into_inner can be implemented using load(Relaxed), however the latter can result in worse code because it is poorly handled by the optimizer.

Unresolved questions

None

Summary

Extend Cell to work with non-Copy types.

Motivation

It allows safe inner mutability of non-Copy types without the overhead of RefCell’s reference counting.

The key idea of Cell is to provide a primitive building block to safely support inner mutability. This must be done while maintaining Rust’s aliasing requirements for mutable references. Unlike RefCell which enforces this at runtime through reference counting, Cell does this statically by disallowing any reference (mutable or immutable) to the data contained in the cell.

While the current implementation only supports Copy types, this restriction isn’t actually necessary to maintain Rust’s aliasing invariants. The only affected API is the get function which, by design, is only usable with Copy types.

Detailed design

impl<T> Cell<T> {
    fn set(&self, val: T);
    fn replace(&self, val: T) -> T;
    fn into_inner(self) -> T;
}

impl<T: Copy> Cell<T> {
    fn get(&self) -> T;
}

impl<T: Default> Cell<T> {
    fn take(&self) -> T;
}

The get method is kept but is only available for T: Copy.

The set method is available for all T. It will need to be implemented by calling replace and dropping the returned value. Dropping the old value in-place is unsound since the Drop impl will hold a mutable reference to the cell contents.
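To make that ordering concrete, here is a sketch on top of UnsafeCell (MyCell is a stand-in for illustration, not std’s Cell):

use std::cell::UnsafeCell;
use std::mem;

struct MyCell<T> {
    value: UnsafeCell<T>,
}

impl<T> MyCell<T> {
    fn replace(&self, val: T) -> T {
        // The mutable reference lives only for the duration of the swap;
        // no user code (such as a Drop impl) runs while it is alive.
        unsafe { mem::replace(&mut *self.value.get(), val) }
    }

    fn set(&self, val: T) {
        // The old value is dropped here, after `replace` has returned and
        // the reference into the cell is gone.
        drop(self.replace(val));
    }
}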

The into_inner and replace methods are added, which allow the value in a cell to be read even if T is not Copy. The get method can’t be used since the cell must always contain a valid value.

Finally, a take method is added which is equivalent to self.replace(Default::default()).
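Assuming the methods land as proposed, usage with a non-Copy type could look like:

use std::cell::Cell;

fn main() {
    let cell = Cell::new(vec![1, 2, 3]); // Vec<i32> is not Copy
    let old = cell.replace(vec![4, 5]);  // swap in a new value, get the old one
    assert_eq!(old, vec![1, 2, 3]);
    let taken = cell.take();             // leaves Vec::default() (empty) behind
    assert_eq!(taken, vec![4, 5]);
    assert_eq!(cell.into_inner(), Vec::<i32>::new());
}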

Drawbacks

It makes the Cell type more complicated.

Cell will only be able to derive traits like Eq and Ord for types that are Copy, since there is no way to non-destructively read the contents of a non-Copy Cell.

Alternatives

The alternative is to use the MoveCell type from crates.io which provides the same functionality.

Unresolved questions

None

Summary

assert_ne is a macro that takes 2 arguments and panics if they are equal. It works and is implemented identically to assert_eq and serves as its complement. This proposal also includes a debug_assert_ne, matching debug_assert_eq.

Motivation

This feature, among other reasons, makes testing more readable and consistent as it complements assert_eq. It gives the same style of panic message as assert_eq, which eliminates the need to write it yourself.

Detailed design

This feature has exactly the same design and implementation as assert_eq.

Here is the definition:

macro_rules! assert_ne {
    ($left:expr , $right:expr) => ({
        match (&$left, &$right) {
            (left_val, right_val) => {
                if *left_val == *right_val {
                    panic!("assertion failed: `(left != right)` \
                           (left: `{:?}`, right: `{:?}`)", left_val, right_val)
                }
            }
        }
    })
}

This is complemented by a debug_assert_ne (similar to debug_assert_eq):

macro_rules! debug_assert_ne {
    ($($arg:tt)*) => (if cfg!(debug_assertions) { assert_ne!($($arg)*); })
}
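Usage mirrors assert_eq. For example, with an arbitrary pair of values:

fn main() {
    let a = 3;
    let b = 4;
    assert_ne!(a, b);       // passes: the values differ
    debug_assert_ne!(a, b); // checked only when debug_assertions is on
    // assert_ne!(a, a);    // would panic: assertion failed: `(left != right)`
}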

Drawbacks

Any addition to the standard library will need to be maintained forever, so it is worth weighing the maintenance cost of this over the value add. Given that it is so similar to assert_eq, I believe the weight of this drawback is low.

Alternatives

Alternatively, users implement this feature themselves, or use the crate assert_ne that I published.

Unresolved questions

None at this moment.

Summary

Introduce non-panicking borrow methods on RefCell<T>.

Motivation

Whenever something is built from user input, for example a graph in which nodes are RefCell<T> values, it is essential to avoid panicking on bad input. The only way to avoid panics on cyclic input in this case is a way to conditionally borrow the cell contents.

Detailed design

/// Returned when `RefCell::try_borrow` fails.
pub struct BorrowError { _inner: () }

/// Returned when `RefCell::try_borrow_mut` fails.
pub struct BorrowMutError { _inner: () }

impl<T> RefCell<T> {
    /// Tries to immutably borrow the value. This returns `Err(_)` if the cell
    /// was already borrowed mutably.
    pub fn try_borrow(&self) -> Result<Ref<T>, BorrowError> { ... }

    /// Tries to mutably borrow the value. This returns `Err(_)` if the cell
    /// was already borrowed.
    pub fn try_borrow_mut(&self) -> Result<RefMut<T>, BorrowMutError> { ... }
}
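Assuming the methods land as proposed, a borrow conflict becomes an observable value rather than a panic:

use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(5);
    let guard = cell.borrow_mut();
    // `cell.borrow_mut()` would panic here; `try_borrow_mut` reports
    // the conflict instead.
    assert!(cell.try_borrow_mut().is_err());
    drop(guard);
    assert!(cell.try_borrow_mut().is_ok());
}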

Drawbacks

This departs from the fallible/infallible convention where we avoid providing both panicking and non-panicking methods for the same operation.

Alternatives

The alternative is to provide a borrow_state method returning the state of the borrow flag of the cell, i.e.:

pub enum BorrowState {
    Reading,
    Writing,
    Unused,
}

impl<T> RefCell<T> {
    pub fn borrow_state(&self) -> BorrowState { ... }
}

See the Rust tracking issue for this feature.

Unresolved questions

There are no unresolved questions.

Summary

Rust programs compiled for Windows will always allocate a console window on startup. This behavior is controlled via the SUBSYSTEM parameter passed to the linker, and so can be overridden with specific compiler flags. However, doing so will bypass the Rust-specific initialization code in libstd, as when using the MSVC toolchain, the entry point must be named WinMain.

This RFC proposes supporting this case explicitly, allowing libstd to continue to be initialized correctly.

Motivation

The WINDOWS subsystem is commonly used on Windows: desktop applications typically do not want to flash up a console window on startup.

Currently, using the WINDOWS subsystem from Rust is undocumented, and the process is non-trivial when targeting the MSVC toolchain. There are a couple of approaches, each with their own downsides:

Define a WinMain symbol

A new symbol pub extern "system" fn WinMain(...) with specific argument and return types must be declared, which will become the new entry point for the program.

This is unsafe, and will skip the initialization code in libstd.

The GNU toolchain will accept either entry point.

Override the entry point via linker options

This uses the same method as will be described in this RFC. However, it will result in build scripts also being compiled for the WINDOWS subsystem, which can cause additional console windows to pop up during compilation, making the system unusable while a build is in progress.

Detailed design

When an executable is linked while compiling for a Windows target, it will be linked for a specific subsystem. The subsystem determines how the operating system will run the executable, and will affect the execution environment of the program.

In practice, only two subsystems are very commonly used: CONSOLE and WINDOWS, and from a user’s perspective, they determine whether a console will be automatically created when the program is started.

New crate attribute

This RFC proposes two changes to solve this problem. The first is adding a top-level crate attribute to allow specifying which subsystem to use:

#![windows_subsystem = "windows"]

Initially, the set of possible values will be {windows, console}, but may be extended in future if desired.

The use of this attribute in a non-executable crate will result in a compiler warning. If compiling for a non-Windows target, the attribute will be silently ignored.
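A complete GUI-subsystem program under this proposal would presumably be as small as:

// No console window is created when this program is started from
// Explorer; on non-Windows targets the attribute is silently ignored.
#![windows_subsystem = "windows"]

fn main() {
    // GUI setup would go here. With no console attached, any println!
    // output is simply not visible.
}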

Additional linker argument

For the GNU toolchain, this will be sufficient. However, for the MSVC toolchain, the linker will be expecting a WinMain symbol, which will not exist.

There is some complexity to the way in which a different entry point is expected when using the WINDOWS subsystem. Firstly, the C-runtime library exports two symbols designed to be used as an entry point:

mainCRTStartup
WinMainCRTStartup

LINK.exe will use the subsystem to determine which of these symbols to use as the default entry point if not overridden.

Each one performs some unspecified initialization of the CRT, before calling out to a symbol defined within the program (main or WinMain respectively).

The second part of the solution is to pass an additional linker option when targeting the MSVC toolchain: /ENTRY:mainCRTStartup

This will override the entry point to always be mainCRTStartup. For console-subsystem programs this will have no effect, since it was already the default, but for WINDOWS subsystem programs, it will eliminate the need for a WinMain symbol to be defined.

This command line option will always be passed to the linker, regardless of the presence or absence of the windows_subsystem crate attribute, except when the user specifies their own entry point in the linker arguments. This will require rustc to perform some basic parsing of the linker options.

Drawbacks

  • A new platform-specific crate attribute.
  • The difficulty of manually calling the Rust initialization code is potentially a more general problem, and this only solves a specific (if common) case.
  • The subsystem must be specified earlier than is strictly required: when compiling C/C++ code only the linker, not the compiler, needs to actually be aware of the subsystem.
  • It is assumed that the initialization performed by the two CRT entry points is identical. This seems to currently be the case, and is unlikely to change as this technique appears to be used fairly widely.

Alternatives

  • Only emit one of either WinMain or main from rustc based on a new command line option.

    This command line option would only be applicable when compiling an executable, and only for Windows platforms. No other supported platforms require a different entry point or additional linker arguments for programs designed to run with a graphical user interface.

    rustc will react to this command line option by changing the exported name of the entry point to WinMain, and passing additional arguments to the linker to configure the correct subsystem. A mismatch here would result in linker errors.

    A similar option would need to be added to Cargo.toml to make usage as simple as possible.

    There’s some bike-shedding which can be done on the exact command line interface, but one possible option is shown below.

    Rustc usage: rustc foo.rs --crate-subsystem windows

    Cargo.toml

    [package]
    # ...
    
    [[bin]]
    name = "foo"
    path = "src/foo.rs"
    subsystem = "windows"
    

    The crate-subsystem command line option would exist on all platforms, but would be ignored when compiling for a non-Windows target, so as to support cross-compiling. If not compiling a binary crate, specifying the option is an error regardless of the target.

Unresolved questions

None

Summary

Add “panic-safe” or “total” alternatives to the existing panicking indexing syntax.

Motivation

SliceExt::get and SliceExt::get_mut can be thought of as non-panicking versions of the simple indexing syntax, a[idx], and SliceExt::get_unchecked and SliceExt::get_unchecked_mut can be thought of as unsafe versions with bounds checks elided. However, there is no such equivalent for a[start..end], a[start..], or a[..end]. This RFC proposes such methods to fill the gap.

Detailed design

The get, get_mut, get_unchecked, and get_unchecked_mut methods will be made generic over usize as well as ranges of usize, like slice’s Index implementation currently is. This will allow e.g. a.get(start..end), which will behave analogously to a[start..end].
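For instance, under this proposal the following would hold (a sketch of the intended behavior):

fn main() {
    let v = [1, 2, 3];
    assert_eq!(v.get(1), Some(&2));             // usize, as today
    assert_eq!(v.get(0..2), Some(&[1, 2][..])); // in-bounds range
    assert_eq!(v.get(0..5), None);              // out of bounds: None, no panic
}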

Because methods cannot be overloaded in an ad-hoc manner in the same way that traits may be implemented, we introduce a SliceIndex trait which is implemented by types which can index into a slice:

pub trait SliceIndex<T> {
    type Output: ?Sized;

    fn get(self, slice: &[T]) -> Option<&Self::Output>;
    fn get_mut(self, slice: &mut [T]) -> Option<&mut Self::Output>;
    unsafe fn get_unchecked(self, slice: &[T]) -> &Self::Output;
    unsafe fn get_mut_unchecked(self, slice: &mut [T]) -> &mut Self::Output;
    fn index(self, slice: &[T]) -> &Self::Output;
    fn index_mut(self, slice: &mut [T]) -> &mut Self::Output;
}

impl<T> SliceIndex<T> for usize {
    type Output = T;
    // ...
}

impl<T, R> SliceIndex<T> for R
    where R: RangeArgument<usize>
{
    type Output = [T];
    // ...
}

And then alter the Index, IndexMut, get, get_mut, get_unchecked, and get_mut_unchecked implementations to be generic over SliceIndex:

impl<T> [T] {
    pub fn get<I>(&self, idx: I) -> Option<&I::Output>
        where I: SliceIndex<T>
    {
        idx.get(self)
    }

    pub fn get_mut<I>(&mut self, idx: I) -> Option<&mut I::Output>
        where I: SliceIndex<T>
    {
        idx.get_mut(self)
    }

    pub unsafe fn get_unchecked<I>(&self, idx: I) -> &I::Output
        where I: SliceIndex<T>
    {
        idx.get_unchecked(self)
    }

    pub unsafe fn get_mut_unchecked<I>(&mut self, idx: I) -> &mut I::Output
        where I: SliceIndex<T>
    {
        idx.get_mut_unchecked(self)
    }
}

impl<T, I> Index<I> for [T]
    where I: SliceIndex<T>
{
    type Output = I::Output;

    fn index(&self, idx: I) -> &I::Output {
        idx.index(self)
    }
}

impl<T, I> IndexMut<I> for [T]
    where I: SliceIndex<T>
{
    fn index_mut(&self, idx: I) -> &mut I::Output {
        idx.index_mut(self)
    }
}

Drawbacks

  • The SliceIndex trait is unfortunate - it’s tuned for exactly the set of methods it’s used by. It only exists because inherent methods cannot be overloaded the same way that trait implementations can be. It would most likely remain unstable indefinitely.
  • Documentation may suffer. Rustdoc output currently explicitly shows each of the ways you can index a slice, while there will simply be a single generic implementation with this change. This may not be that bad, though. The doc block currently seems to provide the most valuable information to newcomers rather than the trait bound, and that will still be present with this change.

Alternatives

  • Stay as is.
  • A previous version of this RFC introduced new get_slice etc methods rather than overloading get etc. This avoids the utility trait but is somewhat less ergonomic.
  • Instead of one trait amalgamating all of the required methods, we could have one trait per method. This would open a more reasonable door to stabilizing those traits, but adds quite a lot more surface area. Replacing an unstable SliceIndex trait with a collection of per-method traits would be backwards compatible.

Unresolved questions

None

Summary

Extract a very small sliver of today’s procedural macro system in the compiler, just enough to get basic features like custom derive working, to have an eventually stable API. Ensure that these features will not pose a maintenance burden on the compiler but also don’t try to provide enough features for the “perfect macro system” at the same time. Overall, this should be considered an incremental step towards an official “macros 2.0”.

Motivation

Some large projects in the ecosystem today, such as serde and diesel, effectively require the nightly channel of the Rust compiler. Although most projects have an alternative way to work on stable Rust, this tends to be far less ergonomic and comes with its own set of downsides, and empirically it has not been enough to push the nightly users to stable as well.

These large projects, however, are often the face of Rust to external users. Common knowledge is that fast serialization is done using serde, but to others this just sounds like “fast Rust needs nightly”. Over time this persistent thought process creates a culture of “well to be serious you require nightly” and a general feeling that Rust is not “production ready”.

The good news, however, is that this class of projects which require nightly Rust almost all require nightly for the reason of procedural macros. Even better, the full functionality of procedural macros is rarely needed, only custom derive! And custom derive typically doesn’t require the features one would expect from a full-on macro system, such as hygiene and modularity, that normal procedural macros do. The purpose of this RFC, as a result, is to provide these crates a method of working on stable Rust with the desired ergonomics one would have on nightly otherwise.

Unfortunately, today’s procedural macros are not without their architectural shortcomings as well. For example, they’re defined and imported with arcane syntax and don’t participate in hygiene very well. To address these issues, there are a number of RFCs to develop a “macros 2.0” story.

Many of these designs, however, will require a significant amount of work to not only implement but also a significant amount of work to stabilize. The current understanding is that these improvements are on the time scale of years, whereas the problem of nightly Rust is today!

As a result, it is an explicit non-goal of this RFC to architecturally improve on the current procedural macro system. The drawbacks of today’s procedural macros will be the same as those proposed in this RFC. The major goal here is to simply minimize the exposed surface area between procedural macros and the compiler to ensure that the interface is well defined and can be stably implemented in future versions of the compiler as well.

Put another way, we currently have macros 1.0 unstable today, we’re shooting for macros 2.0 stable in the far future, but this RFC is striking a middle ground at macros 1.1 today!

Detailed design

First, before looking how we’re going to expose procedural macros, let’s take a detailed look at how they work today.

Today’s procedural macros

A procedural macro today is loaded into a crate with the #![plugin(foo)] annotation at the crate root. This in turn looks for a crate named foo via the same crate loading mechanisms as extern crate, except with the restriction that the target triple of the crate must be the same as the target the compiler was compiled for. In other words, if you’re on x86 compiling to ARM, macros must also be compiled for x86.

Once a crate is found, it’s required to be a dynamic library as well, and once that’s all verified the compiler opens it up with dlopen (or the equivalent therein). After loading, the compiler will look for a special symbol in the dynamic library, and then call it with a macro context.

So as we’ve seen macros are compiled as normal crates into dynamic libraries. One function in the crate is tagged with #[plugin_registrar] which gets wired up to this “special symbol” the compiler wants. When the function is called with a macro context, it uses the passed in plugin registry to register custom macros, attributes, etc.

After a macro is registered, the compiler will then continue the normal process of expanding a crate. Whenever the compiler encounters this macro it will call the registered function with essentially an AST, and morally gets back a different AST to splice in or replace.

Today’s drawbacks

This expansion process suffers from many of the downsides mentioned in the motivation section, such as a lack of hygiene, a lack of modularity, and the inability to import macros as you would normally other functionality in the module system.

Additionally, though, it’s essentially impossible to ever stabilize because the interface to the compiler is… the compiler! We clearly want to make changes to the compiler over time, so this isn’t acceptable. To have a stable interface we’ll need to cut down this surface area dramatically to a curated set of known-stable APIs.

Somewhat more subtly, the technical ABI of procedural macros is also exposed quite thinly today as well. The implementation detail of dynamic libraries, and especially that both the compiler and the macro dynamically link to libraries like libsyntax, cannot be changed. This precludes, for example, a completely statically linked compiler (e.g. compiled for x86_64-unknown-linux-musl). Another goal of this RFC will also be to hide as many of these technical details as possible, allowing the compiler to flexibly change how it interfaces to macros.

Macros 1.1

Ok, with the background knowledge of what procedural macros are today, let’s take a look at how we can solve the major problems blocking its stabilization:

  • Sharing an API of the entire compiler
  • Frozen interface between the compiler and macros

librustc_macro

Proposed in RFC 1566 and described in this blog post, the distribution will now ship with a new librustc_macro crate available for macro authors. The intention here is that the gory details of how macros actually talk to the compiler are entirely contained within this one crate. The stable interface to the compiler is then entirely defined in this crate, and we can make it as small or large as we want. Additionally, like the standard library, it can contain unstable APIs to test out new pieces of functionality over time.

The initial implementation of librustc_macro is proposed to be incredibly bare bones:

#![crate_name = "rustc_macro"]

pub struct TokenStream {
    // ...
}

#[derive(Debug)]
pub struct LexError {
    // ...
}

impl FromStr for TokenStream {
    type Err = LexError;

    fn from_str(s: &str) -> Result<TokenStream, LexError> {
        // ...
    }
}

impl fmt::Display for TokenStream {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        // ...
    }
}

That is, there will only be a handful of exposed types, and a TokenStream can only be converted to and from a String. Eventually the TokenStream type will more closely resemble token streams in the compiler itself, and more fine-grained manipulations will be available as well.
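In practice, this means every transformation in macros 1.1 round-trips through String. A sketch (the transform function and its rewrite step are hypothetical):

extern crate rustc_macro;

use rustc_macro::TokenStream;

fn transform(input: TokenStream) -> TokenStream {
    let source = input.to_string(); // Display: tokens -> String
    // ... rewrite `source` as plain text here ...
    source.parse().unwrap()         // FromStr: String -> tokens
}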

Defining a macro

A new crate type will be added to the compiler, rustc-macro (described below), indicating a crate that’s compiled as a procedural macro. There will not be a “registrar” function in this crate type (like there is today), but rather a number of functions which act as token stream transformers to implement macro functionality.

A macro crate might look like:

#![crate_type = "rustc-macro"]
#![crate_name = "double"]

extern crate rustc_macro;

use rustc_macro::TokenStream;

#[rustc_macro_derive(Double)]
pub fn double(input: TokenStream) -> TokenStream {
    let source = input.to_string();

    // Parse `source` for struct/enum declaration, and then build up some new
    // source code representing a number of items in the implementation of
    // the `Double` trait for the struct/enum in question.
    let source = derive_double(&source);

    // Parse this back to a token stream and return it
    source.parse().unwrap()
}

This new rustc_macro_derive attribute will be allowed inside of a rustc-macro crate but disallowed in other crate types. It defines a new #[derive] mode which can be used in a crate. The input here is the entire struct that #[derive] was attached to, attributes and all. The output is expected to include the struct/enum itself as well as any number of items to be contextually “placed next to” the initial declaration.

Again, though, there is no hygiene. More specifically, the TokenStream::from_str method will use the same expansion context as the derive attribute itself, not the point of definition of the derive function. All span information for the TokenStream structures returned by from_str will point to the original #[derive] annotation. This means that error messages related to struct definitions will get worse if they have a custom derive attribute placed on them, because the entire struct’s span will get folded into the #[derive] annotation. Eventually, though, more span information will be stable on the TokenStream type, so this is just a temporary limitation.

The rustc_macro_derive attribute requires the signature (similar to macros 2.0):

fn(TokenStream) -> TokenStream

If a macro cannot process the input token stream, it is expected to panic for now; the compiler will wrap up the panic message and display it to the user appropriately. Eventually, however, librustc_macro will provide more interesting methods of signaling errors to users.

Customization of user-defined #[derive] modes can still be done through custom attributes, although it will be required for rustc_macro_derive implementations to remove these attributes when handing them back to the compiler. The compiler will still gate unknown attributes by default.

rustc-macro crates

Like the rlib and dylib crate types, the rustc-macro crate type is intended to be an intermediate product. What it actually produces is not specified, but if a -L path is provided to it then the compiler will recognize the output artifacts as a macro and it can be loaded for a program.

Initially if a crate is compiled with the rustc-macro crate type (and possibly others) it will forbid exporting any items in the crate other than those functions tagged #[rustc_macro_derive] and those functions must also be placed at the crate root. Finally, the compiler will automatically set the cfg(rustc_macro) annotation whenever any crate type of a compilation is the rustc-macro crate type.

While these properties may seem a bit odd, they’re intended to allow a number of forwards-compatible extensions to be implemented in macros 2.0:

  • Macros eventually want to be imported from crates (e.g. use foo::bar!) and limiting where #[derive] can be defined reduces the surface area for possible conflict.
  • Macro crates eventually want to be compiled to be available both at runtime and at compile time. That is, an extern crate foo annotation may load both a rustc-macro crate and a crate to link against, if they are available. Limiting the public exports for now to only custom-derive annotations should allow for maximal flexibility here.

Using a procedural macro

Using a procedural macro will be very similar to today’s extern crate system, such as:

#[macro_use]
extern crate double;

#[derive(Double)]
pub struct Foo;

fn main() {
    // ...
}

That is, the extern crate directive will now also be enhanced to look for crates compiled as rustc-macro in addition to those compiled as dylib and rlib. Today this will be temporarily limited to finding either a rustc-macro crate or an rlib/dylib pair compiled for the target, but this restriction may be lifted in the future.

The custom derive annotations loaded from rustc-macro crates today will all be placed into the same global namespace. Any conflicts (shadowing) will cause the compiler to generate an error, and it must be resolved by loading only one or the other of the rustc-macro crates (eventually this will be solved with a more principled use system in macros 2.0).

Initial implementation details

This section lays out what the initial implementation details of macros 1.1 will look like, but none of this will be specified as a stable interface to the compiler. These exact details are subject to change over time as the requirements of the compiler change, and even amongst platforms these details may be subtly different.

The compiler will essentially consider rustc-macro crates as --crate-type dylib -C prefer-dynamic. That is, compiled the same way they are today. This namely means that these macros will dynamically link to the same standard library as the compiler itself, therefore sharing resources like a global allocator, etc.

The librustc_macro crate will be compiled as an rlib, and a static copy of it will be included in each macro. This crate will provide a symbol known by the compiler that can be dynamically loaded. The compiler will dlopen a macro crate in the same way it does today, find this symbol in librustc_macro, and call it.

The rustc_macro_derive attribute will be encoded into the crate’s metadata, and the compiler will discover all these functions, load their function pointers, and pass them to the librustc_macro entry point as well. This provides the opportunity to register all the various expansion mechanisms with the compiler.

The actual underlying representation of TokenStream will be basically the same as it is in the compiler today. (the details on this are a little light intentionally, shouldn’t be much need to go into too much detail).

Initial Cargo integration

Like plugins today, Cargo needs to understand which crates are rustc-macro crates and which aren’t. Cargo additionally needs to understand this to sequence compilations correctly and ensure that rustc-macro crates are compiled for the host platform. To this end, Cargo will understand a new attribute in the [lib] section:

[lib]
rustc-macro = true

This annotation indicates that the crate being compiled should be compiled as a rustc-macro crate type for the host platform in the current compilation.

Eventually Cargo may also grow support to understand that a rustc-macro crate should be compiled twice, once for the host and once for the target, but this is intended to be a backwards-compatible extension to Cargo.

Pieces to stabilize

Eventually this RFC is intended to be considered for stabilization (after it’s implemented and proven out on nightly, of course). The summary of pieces that would become stable are:

  • The rustc_macro crate, and a small set of APIs within (skeleton above)
  • The rustc-macro crate type, in addition to its current limitations
  • The #[rustc_macro_derive] attribute
  • The signature of the #[rustc_macro_derive] functions
  • Semantically being able to load macro crates compiled as rustc-macro into the compiler, requiring that the crate was compiled by the exact compiler.
  • The semantic behavior of loading custom derive annotations, in that they’re just all added to the same global namespace with errors on conflicts. Additionally, definitions end up having no hygiene for now.
  • The rustc-macro = true attribute in Cargo

Macros 1.1 in practice

Alright, that’s a lot to take in! Let’s take a look at what this is all going to look like in practice, focusing on a case study of #[derive(Serialize)] for serde.

First off, serde will provide a crate, let’s call it serde_macros. The Cargo.toml will look like:

[package]
name = "serde-macros"
# ...

[lib]
rustc-macro = true

[dependencies]
syntex_syntax = "0.38.0"

The contents will look similar to:

extern crate rustc_macro;
extern crate syntex_syntax;

use rustc_macro::TokenStream;

#[rustc_macro_derive(Serialize)]
pub fn derive_serialize(input: TokenStream) -> TokenStream {
    let input = input.to_string();

    // use syntex_syntax from crates.io to parse `input` into an AST

    // use this AST to generate an impl of the `Serialize` trait for the type in
    // question

    // convert that impl to a string

    // parse back into a token stream
    return impl_source.parse().unwrap()
}

Next, crates will depend on this such as:

[dependencies]
serde = "0.9"
serde-macros = "0.9"

And finally use it as such:

extern crate serde;
#[macro_use]
extern crate serde_macros;

#[derive(Serialize)]
pub struct Foo {
    a: usize,
    #[serde(rename = "foo")]
    b: String,
}

Drawbacks

  • This is not an interface that would be considered for stabilization in a void, there are a number of known drawbacks to the current macro system in terms of how it architecturally fits into the compiler. Additionally, there’s work underway to solve all these problems with macros 2.0.

    As mentioned before, however, the stable version of macros 2.0 is currently quite far off, and the desire for features like custom derive are very real today. The rationale behind this RFC is that the downsides are an acceptable tradeoff from moving a significant portion of the nightly ecosystem onto stable Rust.

  • This implementation is likely to be less performant than procedural macros are today. Round tripping through strings isn’t always a speedy operation, especially for larger expansions. Strings, however, are a very small implementation detail that’s easy to see stabilized until the end of time. Additionally, it’s planned to extend the TokenStream API in the future to allow more fine-grained transformations without having to round trip through strings.

  • Users will still have an inferior experience to today’s nightly macros specifically with respect to compile times. The syntex_syntax crate takes quite a few seconds to compile, and this would be required by any crate which uses serde. To offset this, though, the syntex_syntax could be massively stripped down as all it needs to do is parse struct declarations mostly. There are likely many other various optimizations to compile time that can be applied to ensure that it compiles quickly.

  • Plugin authors will need to be quite careful about the code which they generate as working with strings loses much of the expressiveness of macros in Rust today. For example:

    macro_rules! foo {
        ($x:expr) => {
            #[derive(Serialize)]
            enum Foo { Bar = $x, Baz = $x * 2 }
        }
    }
    foo!(1 + 1);

    Plugin authors would have to ensure that this is not naively interpreted as Baz = 1 + 1 * 2 as this will cause incorrect results. The compiler will also need to be careful to parenthesize token streams like this when it generates a stringified source.

  • By having separate library and macro crate support today (e.g. serde and serde_macros) it’s possible for there to be version skew between the two, making it tough to ensure that the two versions you’re using are compatible with one another. This would be solved if serde itself could define or reexport the macros, but unfortunately that would require a likely much larger step towards “macros 2.0” to solve and would greatly increase the size of this RFC.

  • Converting to a string and back loses span information, which can lead to degraded error messages. For example, currently we can make an effort to use the span of a given field when deriving code that is caused by that field, but that kind of precision will not be possible until a richer interface is available.

Alternatives

  • Wait for macros 2.0, but this likely comes with the high cost of postponing a stable custom-derive experience on the time scale of years.

  • Don’t add rustc_macro as a new crate, but rather specify that #[rustc_macro_derive] has a stable-ABI friendly signature. This does not account, however, for the eventual planned introduction of the rustc_macro crate and is significantly harder to write. The marginal benefit of being slightly more flexible about how it’s run likely isn’t worth it.

  • The syntax for defining a macro may be different in the macros 2.0 world (e.g. pub macro foo vs an attribute), that is it probably won’t involve a function attribute like #[rustc_macro_derive]. This interim system could possibly use this syntax as well, but it’s unclear whether we have a concrete enough idea in mind to implement today.

  • The TokenStream state likely has some sort of backing store behind it like a string interner, and in the APIs above it’s likely that this state is passed around in thread-local-storage to avoid threading through a parameter like &mut Context everywhere. An alternative would be to explicitly pass this parameter, but it might hinder trait implementations like fmt::Display and FromStr. Additionally, threading an extra parameter could perhaps become unwieldy over time.

  • In addition to allowing definition of custom-derive forms, definition of custom procedural macros could also be allowed. They are similarly transformers from token streams to token streams, so the interface in this RFC would perhaps be appropriate. This addition, however, adds more surface area to this RFC and the macro 1.1 system which may not be necessary in the long run. It’s currently understood that only custom derive is needed to move crates like serde and diesel onto stable Rust.

  • Instead of having a global namespace of #[derive] modes which rustc-macro crates append to, we could at least require something along the lines of #[derive(serde_macros::Deserialize)]. This is unfortunately, however, still disconnected from what name resolution will actually be eventually and also deviates from what you actually may want, #[derive(serde::Deserialize)], for example.

Unresolved questions

  • Is the interface between macros and the compiler actually general enough to be implemented differently one day?

  • The intention of macros 1.1 is to be as close as possible to macros 2.0 in spirit and implementation, just without stabilizing vast quantities of features. In that sense, it is the intention that given a stable macros 1.1, we can layer on features backwards-compatibly to get to macros 2.0. Right now, though, the delta between what this RFC proposes and where we’d like to go is very small; can we get it down to actually zero?

  • Eventually macro crates will want to be loaded both at compile time and runtime, and this means that Cargo will need to understand to compile these crates twice, once as rustc-macro and once as an rlib. Does Cargo have enough information to do this? Are the extensions needed here backwards-compatible?

  • What sort of guarantees will be provided about the runtime environment for plugins? Are they sandboxed? Are they run in the same process?

  • Should the name of this library be rustc_macros? The rustc_ prefix normally means “private”. Other alternatives are macro (make it a contextual keyword), macros, proc_macro.

  • Should a Context or similar style argument be threaded through the APIs? Right now they sort of implicitly require one to be threaded through thread-local-storage.

  • Should the APIs here be namespaced, perhaps with a _1_1 suffix?

  • To what extent can we preserve span information through heuristics? Should we adopt a slightly different API, for example one based on concatenation, to allow preserving spans?

Summary

When initializing a data structure (struct, enum, union) with named fields, allow writing fieldname as a shorthand for fieldname: fieldname. This allows a compact syntax for initialization, with less duplication.

Example usage:

struct SomeStruct { field1: ComplexType, field2: AnotherType }

impl SomeStruct {
    fn new() -> Self {
        let field1 = {
            // Various initialization code
        };
        let field2 = {
            // More initialization code
        };
        SomeStruct { field1, field2 }
    }
}

Motivation

When writing initialization code for a data structure, the names of the structure fields often become the most straightforward names to use for their initial values as well. At the end of such an initialization function, then, the initializer will contain many patterns of repeated field names as field values: field1: field1, field2: field2, field3: field3.

Such repetition of the field names makes it less ergonomic to separately declare and initialize individual fields, and makes it tempting to instead embed complex code directly in the initializer to avoid repetition.

Rust already allows similar syntax for destructuring in pattern matches: a pattern match can use SomeStruct { field1, field2 } => ... to match field1 and field2 into values with the same names. This RFC introduces symmetrical syntax for initializers.

A family of related structures will often use the same field name for a semantically-similar value. Combining this new syntax with the existing pattern-matching syntax allows simple movement of data between fields with a pattern match: Struct1 { field1, .. } => Struct2 { field1 }.
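For instance, with two hypothetical point types sharing field names:

struct Point3 { x: i32, y: i32, z: i32 }
struct Point2 { x: i32, y: i32 }

fn flatten(p: Point3) -> Point2 {
    match p {
        // `x` and `y` are bound by the destructuring pattern, then reused
        // directly in the shorthand initializer.
        Point3 { x, y, .. } => Point2 { x, y },
    }
}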

The proposed syntax also improves structure initializers in closures, such as might appear in a chain of iterator adapters: |field1, field2| SomeStruct { field1, field2 }.

This RFC takes inspiration from the Haskell NamedFieldPuns extension, and from ES6 shorthand property names.

Detailed design

Grammar

In the initializer for a struct with named fields, a union with named fields, or an enum variant with named fields, accept an identifier field as a shorthand for field: field.

With reference to the grammar in parser-lalr.y, this proposal would expand the field_init rule to the following:

field_init
: ident
| ident ':' expr
;

Interpretation

The shorthand initializer field always behaves in every possible way like the longhand initializer field: field. This RFC introduces no new behavior or semantics, only a purely syntactic shorthand. The rest of this section only provides further examples to explicitly clarify that this new syntax remains entirely orthogonal to other initializer behavior and semantics.

Examples

If the struct SomeStruct has fields field1 and field2, the initializer SomeStruct { field1, field2 } behaves in every way like the initializer SomeStruct { field1: field1, field2: field2 }.

An initializer may contain any combination of shorthand and full field initializers:

let a = SomeStruct { field1, field2: expression, field3 };
let b = SomeStruct { field1: field1, field2: expression, field3: field3 };
assert_eq!(a, b);

An initializer may use shorthand field initializers together with update syntax:

let a = SomeStruct { field1, .. someStructInstance };
let b = SomeStruct { field1: field1, .. someStructInstance };
assert_eq!(a, b);

Compilation errors

This shorthand initializer syntax does not introduce any new compiler errors that cannot also occur with the longhand initializer syntax field: field. Existing compiler errors that can occur with the longhand initializer syntax field: field also apply to the shorthand initializer syntax field:

  • As with the longhand initializer field: field, if the structure has no field with the specified name field, the shorthand initializer field results in a compiler error for attempting to initialize a non-existent field.

  • As with the longhand initializer field: field, repeating a field name within the same initializer results in a compiler error (E0062); this occurs with any combination of shorthand initializers or full field: expression initializers.

  • As with the longhand initializer field: field, if the name field does not resolve, the shorthand initializer field results in a compiler error for an unresolved name (E0425).

  • As with the longhand initializer field: field, if the name field resolves to a value with type incompatible with the field field in the structure, the shorthand initializer field results in a compiler error for mismatched types (E0308).

Drawbacks

This new syntax could significantly improve readability given clear and local field-punning variables, but could also be abused to decrease readability if used with more distant variables.

As with many syntactic changes, a macro could implement this instead. See the Alternatives section for discussion of this.

The shorthand initializer syntax looks similar to positional initialization of a structure without field names; reinforcing this, the initializer will commonly list the fields in the same order that the struct declares them. However, the shorthand initializer syntax differs from the positional initializer syntax (such as for a tuple struct) in that the positional syntax uses parentheses instead of braces: SomeStruct(x, y) is unambiguously a positional initializer, while SomeStruct { x, y } is unambiguously a shorthand initializer for the named fields x and y.

Alternatives

Wildcards

In addition to this syntax, initializers could support omitting the field names entirely, with syntax like SomeStruct { .. }, which would implicitly initialize omitted fields from identically named variables. However, that would introduce far too much magic into initializers, and the context-dependence seems likely to result in less readable, less obvious code.

Macros

A macro wrapped around the initializer could implement this syntax, without changing the language; for instance, pun! { SomeStruct { field1, field2 } } could expand to SomeStruct { field1: field1, field2: field2 }. However, this change exists to make structure construction shorter and more expressive; having to use a macro would negate some of the benefit of doing so, particularly in places where brevity improves readability, such as in a closure in the middle of a larger expression. There is also precedent for language-level support. Pattern matching already allows using field names as the destination for the field values via destructuring. This change adds a symmetrical mechanism for construction which uses existing names as sources.

Sigils

To minimize confusing shorthand expressions with the construction of tuple-like structs, we might elect to prefix expanded field names with sigils.

For example, if the sigil were :, the existing syntax S { x: x } would be expressed as S { :x }. This is used in MoonScript.

This particular choice of sigil may be confusing, due to the already-overloaded use of : for fields and type ascription. Additionally, in languages such as Ruby and Elixir, :x denotes a symbol or atom, which may be confusing for newcomers.

Other sigils could be used instead, but even then we are then increasing the amount of new syntax being introduced. This both increases language complexity and reduces the gained compactness, worsening the cost/benefit ratio of adding a shorthand. Any use of a sigil also breaks the symmetry between binding pattern matching and the proposed shorthand.

Keyword-prefixed

Similarly to sigils, we could use a keyword, much as Nix uses inherit. Using use as the keyword of choice here (though it could be something else), the forms we could decide upon might look like the following.

  • S { use x, y, z: 10 }
  • S { use (x, y), z: 10 }
  • S { use {x, y}, z: 10 }
  • S { use x, use y, z: 10 }

This has the same drawbacks as sigils except that it won’t be confused for symbols in other languages or adding more sigils. It also has the benefit of being something that can be searched for in documentation.

Summary

Create a team responsible for documentation for the Rust project.

Motivation

RFC 1068 introduced a federated governance model for the Rust project. Several initial subteams were set up. There was a note after the original subteam list saying this:

In the long run, we will likely also want teams for documentation and for community events, but these can be spun up once there is a more clear need (and available resources).

Now is the time for a documentation subteam.

Why documentation was left out

Documentation was left out of the original list because it wasn’t clear that there would be anyone but me on it. Furthermore, one of the original reasons for the subteams was to decide who gets counted amongst consensus for RFCs, but it was unclear how many documentation-related RFCs there would even be.

Chicken, meet egg

However, RFCs are not only what subteams do. To quote the RFC:

  • Shepherding RFCs for the subteam area. As always, that means (1) ensuring that stakeholders are aware of the RFC, (2) working to tease out various design tradeoffs and alternatives, and (3) helping build consensus.
  • Accepting or rejecting RFCs in the subteam area.
  • Setting policy on what changes in the subteam area require RFCs, and reviewing direct PRs for changes that do not require an RFC.
  • Delegating reviewer rights for the subteam area. The ability to r+ is not limited to team members, and in fact earning r+ rights is a good stepping stone toward team membership. Each team should set reviewing policy, manage reviewing rights, and ensure that reviews take place in a timely manner. (Thanks to Nick Cameron for this suggestion.)

The first two are about RFCs themselves, but the latter two are more pertinent to documentation. In particular, deciding who gets r+ rights is important. A lack of clarity in this area has been unfortunate, and has led to a chicken and egg situation: without a documentation team, it’s unclear how to be more involved in working on Rust’s documentation, but without people to be on the team, there’s no reason to form a team. For this reason, I think a small initial team will break this logjam, and provide room for new contributors to grow.

Detailed design

The Rust documentation team will be responsible for all of the things listed above. Specifically, they will pertain to these areas of the Rust project:

  • The standard library documentation
  • The book and other long-form docs
  • Cargo’s documentation
  • The Error Index

Furthermore, the documentation team will be available to help with ecosystem documentation, in a few ways. Firstly, in an advisory capacity: helping people who want better documentation for their crates to understand how to accomplish that goal. Secondly, by monitoring the overall ecosystem documentation, and identifying places where we could contribute and make a large impact for all Rustaceans. If the Rust project itself has wonderful docs, but the ecosystem has terrible docs, then people will still be frustrated with Rust’s documentation situation, especially given our anti-batteries-included attitude. To be clear, this does not mean owning the ecosystem docs, but rather working to contribute in more ways than just the Rust project itself.

We will coordinate in the #rust-docs IRC room and hold regular meetings, as the team sees fit; currently, meetings are weekly. Regular meetings will be important for coordinating broader goals, and participation will be important for team members.

Membership

  • @steveklabnik, team lead
  • @GuillaumeGomez
  • @jonathandturner
  • @peschkaj

It’s important to have a path towards attaining team membership; there are some other people who have already been doing docs work that aren’t on this list. These guidelines are not hard and fast; however, anyone wanting to eventually be a member of the team should pursue these goals:

  • Contributing documentation patches to Rust itself
  • Attending doc team meetings, which are open to all
  • Generally being available on IRC[1] to collaborate with others

I am not quantifying this exactly because it’s not about reaching some specific number; adding someone to the team should make sense if they are doing all of these things.

Drawbacks

This is Yet Another Team. Do we have too many teams? I don’t think so, but someone might.

Alternatives

The main alternative is not having a team. This is the status quo, so the situation is well-understood.

It’s possible that docs come under the purview of “tools”, and so maybe the docs team would be an expansion of the tools team, rather than its own new team. Or some other subteam.

Unresolved questions

None.


  1. The #rust-docs channel on irc.mozilla.org

Summary

Currently Rust allows anonymous parameters in trait methods:

trait T {
    fn foo(i32);

    fn bar_with_default_impl(String, String) {

    }
}

This RFC proposes to deprecate this syntax. This RFC intentionally does not propose to remove this syntax.

Motivation

Anonymous parameters are a historic accident. They cause a number of technical annoyances.

  1. Surprising pattern syntax in traits

    trait T {
        fn foo(x: i32);        // Ok
        fn bar(&x: &i32);      // Ok
        fn baz(&&x: &&i32);    // Ok
        fn quux(&&&x: &&&i32); // Syntax error
    }
    

    That is, patterns more complex than _, foo, &foo, &&foo, mut foo are forbidden.

  2. Inconsistency between default implementations in traits and implementations in impl blocks

    trait T {
        fn foo((x, y): (usize, usize)) { // Syntax error
        }
    }
    
    impl T for S {
        fn foo((x, y): (usize, usize)) { // Ok
        }
    }
    
  3. Inconsistency between method declarations in traits and in extern blocks

    trait T {
        fn foo(i32);  // Ok
    }
    
    extern "C" {
        fn foo(i32); // Syntax error
    }
    
  4. Slightly more complicated syntax analysis for LL-style parsers. The parser must guess whether it is currently parsing a pattern or a type.

  5. Small complications for source code analyzers (e.g. IntelliJ Rust) and potential alternative implementations.

  6. Potential future parsing ambiguities with named and default parameters syntax.

None of these issues is significant, but they exist.

Even if we exclude these technical drawbacks, it can be argued that allowing parameter names to be omitted unnecessarily complicates the language. The allowance is unnecessary because it does not make Rust more expressive and does not provide noticeable ergonomic improvements. It is trivial to add a parameter name, and only a small fraction of method declarations actually omit one.

Another drawback of this syntax is its impact on the learning curve. One needs to have a C background to understand that fn foo(T); means a function with a single parameter of type T. If one comes from a dynamically typed language like Python or JavaScript, this T looks more like a parameter name.

Anonymous parameters also cause inconsistencies between trait definitions and implementations. One way to write an implementation is to copy the method prototypes from the trait into the impl block. With anonymous parameters this leads to syntax errors.
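For example (the trait and type below are invented for illustration), copying a prototype that uses an anonymous parameter produces an error in the impl:

trait T {
    fn foo(i32); // Ok in the trait
}

struct S;

impl T for S {
    fn foo(i32) {} // Syntax error: a pattern is expected here
}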

Detailed design

Backward compatibility

Removing anonymous parameters from the language is formally a breaking change. The breakage can be trivially and automatically fixed by adding _: (suggested by @nagisa):

trait T {
    fn foo(_: i32);

    fn bar_with_default_impl(_: String, _: String) {

    }
}

However, this is also a major breaking change from a practical point of view. Parameter names are rarely omitted, but it happens. For example, std::fmt::Display is currently defined as follows:

trait Display {
    fn fmt(&self, &mut Formatter) -> Result;
}

Of the 5560 packages from crates.io, 416 include at least one usage of an anonymous parameter (full report).

Benefits of deprecation

So the proposal is just to deprecate this syntax. Phasing the syntax out of usage will mostly solve the learning curve problems. The technical problems would not be solved until the actual removal becomes feasible and practical. This hypothetical future may include:

  • Rust 2.0 release.
  • A widely deployed tool to automatically fix deprecation warnings.
  • Storing crates on crates.io in an “elaborated”, syntax-independent format.

Enabling deprecation early makes potential future removal easier in practice.

Deprecation strategy

There are two possible ways to deprecate this syntax:

Hard deprecation

One option is to produce a warning for anonymous parameters. This is backwards compatible, but in practice will force crate authors to actively change their code to avoid the warnings, causing code churn.

Soft deprecation

Another option is to clearly document this syntax as deprecated and add an allow-by-default lint, a clippy lint, and an IntelliJ Rust inspection, but do not produce compiler warnings by default. This will make the update process more gradual, but will delay the benefits of deprecation.

Automatic transition

Rustfmt and IntelliJ Rust can automatically change anonymous parameters to _. However, it is better to manually add real names to make it obvious what name is expected on the impl side.

Drawbacks

  • Hard deprecation will cause code churn.

  • Soft deprecation might not be as efficient at removing the syntax from usage.

  • The technical issues cannot be solved nicely until the deprecation is turned into a hard error.

  • It is not clear if it will ever be possible to remove this syntax entirely.

Alternatives

  • Status quo.

  • Decide on the precise removal plan prior to deprecation.

  • Try to solve the underlying annoyances in some other way. For example, unbounded look ahead can be used in the parser to allow both anonymous parameters and the full pattern syntax.

Unresolved questions

  • What deprecation strategy should be chosen?

Summary

This RFC proposes adding a new macro to libcore, compile_error!, which will unconditionally cause compilation to fail with the given error message when encountered.

Motivation

Crates which work with macros or annotations such as cfg have no tools to communicate error cases in a meaningful way on stable. For example, given the following macro:

macro_rules! give_me_foo_or_bar {
    (foo) => {};
    (bar) => {};
}

when invoked with baz, the error message will be error: no rules expected the token baz. In a real world scenario, this error may actually occur deep in a stack of macro calls, with an even more confusing error message. With this RFC, the macro author could provide the following:

macro_rules! give_me_foo_or_bar {
    (foo) => {};
    (bar) => {};
    ($x:ident) => {
        compile_error!("This macro only accepts `foo` or `bar`");
    }
}

When combined with attributes, this also provides a way for authors to validate combinations of features.

#[cfg(not(any(feature = "postgresql", feature = "sqlite")))]
compile_error!("At least one backend must be used with this crate. \
    Please specify `features = ["postgresql"]` or `features = ["sqlite"]`")

Detailed design

The span given for the failure should be the invocation of the compile_error! macro. The macro must take exactly one argument, which is a string literal. The macro will then call span_err with the provided message on the expansion context, and will not expand to any further code.

Drawbacks

None

Alternatives

Wait for the stabilization of procedural macros, at which point a crate could provide this functionality.

Unresolved questions

None

Summary

Add a function that extracts the discriminant from an enum variant as a comparable, hashable, printable, but (for now) opaque and unorderable type.

Motivation

When using an ADT enum that contains data in some of the variants, it is sometimes desirable to know the variant but ignore the data, in order to compare two values by variant or store variants in a hash map when the data is either unhashable or unimportant.

The motivation for this is mostly identical to RFC 639.
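A sketch of the hash-map use case, written against the function as it was eventually stabilized in std::mem (the Message enum is invented for illustration; note that Message itself cannot derive Hash because of the f64 payload):

use std::collections::HashMap;
use std::mem::{discriminant, Discriminant};

#[allow(dead_code)]
enum Message {
    Quit,
    Move { x: f64, y: f64 }, // f64 is not Hash, so Message can't derive Hash
    Write(String),
}

// Count how many times each variant occurs, ignoring the payloads.
fn count_variants(msgs: &[Message]) -> HashMap<Discriminant<Message>, usize> {
    let mut counts = HashMap::new();
    for m in msgs {
        *counts.entry(discriminant(m)).or_insert(0) += 1;
    }
    counts
}

fn main() {
    let msgs = vec![Message::Quit, Message::Write("hi".into()), Message::Quit];
    let counts = count_variants(&msgs);
    assert_eq!(counts[&discriminant(&Message::Quit)], 2);
}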

Detailed design

The proposed design has been implemented at #34785 (after some back-and-forth). That implementation is copied at the end of this section for reference.

A struct Discriminant<T> and a free function fn discriminant<T>(v: &T) -> Discriminant<T> are added to std::mem (for lack of a better home, and noting that std::mem already contains similar parametricity escape hatches such as size_of). For now, the Discriminant struct is simply a newtype over u64, because that’s what the discriminant_value intrinsic returns, and a PhantomData to allow it to be generic over T.

Making Discriminant generic provides several benefits:

  • discriminant(&EnumA::Variant) == discriminant(&EnumB::Variant) is statically prevented.
  • In the future, we can implement different behavior for different kinds of enums. For example, if we add a way to distinguish C-like enums at the type level, then we can add a method like Discriminant::into_inner for only those enums. Or enums with certain kinds of discriminants could become orderable.

The function no longer requires a Reflect bound on its argument even though discriminant extraction is a partial violation of parametricity, in that a generic function with no bounds on its type parameters can nonetheless find out some information about the input types, or perform a “partial equality” comparison. This is debatable (see this comment, this comment and open question #2), especially in light of specialization. The situation is comparable to TypeId::of (which requires the bound) and mem::size_of_val (which does not). Note that including a bound is the conservative decision, because it can be backwards-compatibly removed.

/// Returns a value uniquely identifying the enum variant in `v`.
///
/// If `T` is not an enum, calling this function will not result in undefined behavior, but the
/// return value is unspecified.
///
/// # Stability
///
/// Discriminants can change if enum variants are reordered, if a new variant is added
/// in the middle, or (in the case of a C-like enum) if explicitly set discriminants are changed.
/// Therefore, relying on the discriminants of enums outside of your crate may be a poor decision.
/// However, discriminants of an identical enum should not change between minor versions of the
/// same compiler.
///
/// # Examples
///
/// This can be used to compare enums that carry data, while disregarding
/// the actual data:
///
/// ```
/// #![feature(discriminant_value)]
/// use std::mem;
///
/// enum Foo { A(&'static str), B(i32), C(i32) }
///
/// assert!(mem::discriminant(&Foo::A("bar")) == mem::discriminant(&Foo::A("baz")));
/// assert!(mem::discriminant(&Foo::B(1))     == mem::discriminant(&Foo::B(2)));
/// assert!(mem::discriminant(&Foo::B(3))     != mem::discriminant(&Foo::C(3)));
/// ```
pub fn discriminant<T>(v: &T) -> Discriminant<T> {
    unsafe {
        Discriminant(intrinsics::discriminant_value(v), PhantomData)
    }
}

/// Opaque type representing the discriminant of an enum.
///
/// See the `discriminant` function in this module for more information.
pub struct Discriminant<T>(u64, PhantomData<*const T>);

impl<T> Copy for Discriminant<T> {}

impl<T> clone::Clone for Discriminant<T> {
    fn clone(&self) -> Self {
        *self
    }
}

impl<T> cmp::PartialEq for Discriminant<T> {
    fn eq(&self, rhs: &Self) -> bool {
        self.0 == rhs.0
    }
}

impl<T> cmp::Eq for Discriminant<T> {}

impl<T> hash::Hash for Discriminant<T> {
    fn hash<H: hash::Hasher>(&self, state: &mut H) {
        self.0.hash(state);
    }
}

impl<T> fmt::Debug for Discriminant<T> {
    fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
        self.0.fmt(fmt)
    }
}

Drawbacks

  1. Anytime we reveal more details about the memory representation of a repr(rust) type, we add back-compat guarantees. The author is of the opinion that the proposed Discriminant newtype still hides enough to mitigate this drawback. (But see open question #1.)
  2. Adding another function and type to core implies an additional maintenance burden, especially when more enum layout optimizations come around (however, there is hardly any burden on top of that associated with the extant discriminant_value intrinsic).

Alternatives

  1. Do nothing: there is no stable way to extract the discriminant from an enum variant. Users who need such a feature will need to write (or generate) big match statements and hope they optimize well (this has been Servo’s approach).
  2. Directly stabilize the discriminant_value intrinsic, or a wrapper that doesn’t use an opaque newtype. This more drastically precludes future enum representation optimizations, and won’t be able to take advantage of future type system improvements that would let discriminant return a type dependent on the enum.

Unresolved questions

  1. Can the return value of discriminant(&x) be considered stable between subsequent compilations of the same code? How about if the enum in question is changed by modifying a variant’s name? by adding a variant?
  2. Is the T: Reflect bound necessary?
  3. Can Discriminant implement PartialOrd?

Summary

Make the compiler aware of the association between the library names adorning extern blocks and the symbols defined within those blocks. Add attributes and command line switches that leverage this association.

Motivation

Most of the time a linkage directive is only needed to inform the linker about what native libraries need to be linked into a program. On some platforms, however, the compiler needs more detailed knowledge about what’s being linked from where in order to ensure that symbols are wired up correctly.

On Windows, when a symbol is imported from a dynamic library, the code that accesses this symbol must be generated differently than for symbols imported from a static library.

Currently the compiler is not aware of associations between the libraries and symbols imported from them, so it cannot alter code generation based on library kind.

Detailed design

Library <-> symbol association

The compiler shall assume that symbols defined within an extern block are imported from the library mentioned in the #[link] attribute adorning the block.
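For example (a sketch assuming a system libsqlite3 is available to the linker), the declaration below is taken to import sqlite3_libversion from the sqlite3 library named in the attribute:

use std::os::raw::c_char;

#[link(name = "sqlite3")]
extern "C" {
    // Presumed by the compiler to be imported from libsqlite3.
    fn sqlite3_libversion() -> *const c_char;
}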

Changes to code generation

On platforms other than Windows the above association will have no effect. On Windows, however, #[link(..., kind="dylib")] shall be presumed to mean linking to a dll, whereas #[link(..., kind="static")] shall mean static linking. In the former case, all symbols associated with that library will be marked with the LLVM dllimport storage class.

Library name and kind variance

Many native libraries are linked via the -l command line flag, which is passed in through Cargo build scripts instead of being written in the source code itself. As a recap, a native library may change names across platforms or distributions, or it may be linked dynamically in some situations and statically in others, which is why build scripts are leveraged to make these dynamic decisions. In order to support this kind of dynamism, the following modifications are proposed:

  • Extend syntax of the -l flag to -l [KIND=]lib[:NEWNAME]. The NEWNAME part may be used to override name of a library specified in the source.
  • Add new meaning to the KIND part: if “lib” is already specified in the source, this will override its kind with KIND. Note that this override is possible only for libraries defined in the current crate.

Example:

// mylib.rs
#[link(name="foo", kind="dylib")]
extern {
    // dllimport applied
}

#[link(name="bar", kind="static")]
extern {
    // dllimport not applied
}

#[link(name="baz")]
extern {
    // kind defaults to "dylib", dllimport applied
}
rustc mylib.rs -l static=foo # change foo's kind to "static", dllimport will not be applied
rustc mylib.rs -l foo:newfoo # link newfoo instead of foo, keeping foo's kind as "dylib"
rustc mylib.rs -l dylib=bar # change bar's kind to "dylib", dllimport will be applied

Unbundled static libs (optional)

It had been pointed out that sometimes one may wish to link to a static system library (i.e. one that is always available to the linker) without bundling it into .lib’s and .rlib’s. For this use case we’ll introduce another library “kind”, “static-nobundle”. Such libraries would be treated in the same way as “static”, except they will not be bundled into the target .lib/.rlib.
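As a hypothetical illustration, mirroring the attribute syntax above:

#[link(name = "m", kind = "static-nobundle")]
extern "C" {
    // Linked from the system's static libm, but not bundled into the
    // produced .lib/.rlib.
    fn cos(x: f64) -> f64;
}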

Drawbacks

For libraries to work robustly on MSVC, the correct #[link] annotation will be required. Most cases will “just work” on MSVC due to the compiler strongly favoring static linkage, but any symbols imported from a dynamic library or exported as a Rust dynamic library will need to be tagged appropriately to ensure that they work in all situations. Worse still, the #[link] annotations on an extern block are not required on any other platform to work correctly, meaning that it will be common that these attributes are left off by accident.

Alternatives

  • Instead of enhancing #[link], a #[linked_from = "foo"] annotation could be added. This has the drawback of not being able to handle native libraries whose name is unpredictable across platforms in an easy fashion, however. Additionally, it adds an extra attribute to the compiler that wasn’t known previously.

  • Support a #[dllimport] on extern blocks (or individual symbols, or both). This has the following drawbacks, however:

    • This attribute would duplicate the information already provided by #[link(kind="...")].
    • It is not always known whether #[dllimport] is needed: whether a native library will be linked dynamically or statically is often not known in advance (e.g. that’s what a build script decides), so dllimport would need to be guarded by cfg_attr.
  • When linking native libraries, the compiler could attempt to locate each library on the filesystem and probe the contents for what symbol names are exported from the native library. This list could then be cross-referenced with all symbols declared in the program locally to understand which symbols are coming from a dylib and which are being linked statically. Some downsides of this approach may include:

    • It’s unclear whether this will be a performant operation and not cause undue runtime overhead during compiles.

    • On Windows linking to a DLL involves linking to its “import library”, so it may be difficult to know whether a symbol truly comes from a DLL or not.

    • Locating libraries on the system may be difficult as the system linker often has search paths baked in that the compiler does not know about.

  • As was already mentioned, the “kind” override can affect codegen of the current crate only. Overloading the -l flag for this purpose may be confusing to developers. A new codegen flag might be a better fit, for example -C libkind=KIND=LIB.

Unresolved questions

  • Should we allow dropping a library specified in the source from linking via -l lib: (i.e. “rename to empty”)?

Summary

Enable the compiler to select whether a target dynamically or statically links to a platform’s standard C runtime (“CRT”) through the introduction of three orthogonal and otherwise general purpose features, one of which will likely never become stable and can be considered an implementation detail of std. These features do not require the compiler or language to have intrinsic knowledge of the existence of C runtimes.

The end result is that rustc will be able to reuse its existing standard library binaries for the MSVC and musl targets to build code that links either statically or dynamically to libc.

The design herein additionally paves the way for improved support for dllimport/dllexport, and cpu-specific features, particularly when combined with a std-aware cargo.

Motivation

Today all targets of rustc hard-code how they link to the native C runtime. For example the x86_64-unknown-linux-gnu target links to glibc dynamically, x86_64-unknown-linux-musl links statically to musl, and x86_64-pc-windows-msvc links dynamically to MSVCRT. There are many use cases, however, where these decisions are not suitable. For example, binaries on Alpine Linux want to link dynamically to musl, and creating portable binaries on Windows is most easily done by linking statically to MSVCRT.

Today rustc has no mechanism for accomplishing this besides defining an entirely new target specification and distributing a build of the standard library for it. Because target specifications must be described by a target triple, and target triples have preexisting conventions into which such a scheme does not fit, we have resisted doing so.

Detailed design

This RFC introduces three separate features to the compiler and Cargo. When combined they will enable the compiler to change whether the C standard library is linked dynamically or statically. In isolation each feature is a natural extension of existing features, and each should be useful on its own.

A key insight is that, for practical purposes, the object code for the standard library does not need to change based on how the C runtime is being linked; though it is true that on Windows, it is generally important to properly manage the use of dllimport/dllexport attributes based on the linkage type, and C code does need to be compiled with specific options based on the linkage type. So it is technically possible to produce Rust executables and dynamic libraries that either link to libc statically or dynamically from a single std binary by correctly manipulating the arguments to the linker.

A second insight is that there are multiple existing, unserved use cases for configuring features of the hardware architecture, underlying platform, or runtime, which require the entire ‘world’, possibly including std, to be compiled a certain way. C runtime linkage is another example of this requirement.

From these observations we can design a cross-platform solution spanning both Cargo and the compiler by which Rust programs may link to either a dynamic or static C library, using only a single std binary. As future work this RFC discusses how the proposed scheme can be extended to rebuild std specifically for a particular C-linkage scenario, which may have minor advantages on Windows due to issues around dllimport and dllexport; and how this scheme naturally extends to recompiling std in the presence of modified CPU features.

This RFC does not propose unifying how the C runtime is linked across platforms (e.g. always dynamically or always statically) but instead leaves that decision to each target, and to future work.

In summary the new mechanics are:

  • Specifying C runtime linkage via -C target-feature=+crt-static or -C target-feature=-crt-static. This extends -C target-feature to mean not just “CPU feature” à la LLVM, but “feature of the Rust target”. Several existing properties of this flag (the ability to add a feature with + or remove it with -, as well as the automatic lowering to cfg values) are crucial to later aspects of the design. This target feature will be added to targets via a small extension to the compiler’s target specification.
  • Lowering cfg values to Cargo build script environment variables. This will enable build scripts to understand all enabled features of a target (like crt-static above) to, for example, compile C code correctly on MSVC.
  • Lazy link attributes. This feature is only required by std’s own copy of the libc crate, and only because std is distributed in binary form and it may yet be a long time before Cargo itself can rebuild std.

Specifying dynamic/static C runtime linkage

A new target-feature flag will now be supported by the compiler for relevant targets: crt-static. This can be enabled and disabled in the compiler via:

rustc -C target-feature=+crt-static ...
rustc -C target-feature=-crt-static ...

Currently all target-feature flags are passed through straight to LLVM, but this proposes extending the meaning of target-feature to Rust-target-specific features as well. Target specifications will be able to indicate what custom target-features can be defined, and most existing targets will define a new crt-static feature which is turned off by default (except for musl).

The default of crt-static will be different depending on the target. For example x86_64-unknown-linux-musl will have it on by default, whereas arm-unknown-linux-musleabi will have it turned off by default.

Lowering cfg values to Cargo build script environment variables

Cargo will begin to forward cfg values from the compiler into build scripts. Currently the compiler supports --print cfg as a flag to print out internal cfg directives, which Cargo uses to implement platform-specific dependencies.

When Cargo runs a build script it already sets a number of environment variables, and it will now set a family of CARGO_CFG_* environment variables as well. For each key printed out from rustc --print cfg, Cargo will set an environment variable for the build script to learn about.

For example, locally rustc --print cfg prints:

target_os="linux"
target_family="unix"
target_arch="x86_64"
target_endian="little"
target_pointer_width="64"
target_env="gnu"
unix
debug_assertions

And with this Cargo would set the following environment variables for build script invocations for this target.

export CARGO_CFG_TARGET_OS=linux
export CARGO_CFG_TARGET_FAMILY=unix
export CARGO_CFG_TARGET_ARCH=x86_64
export CARGO_CFG_TARGET_ENDIAN=little
export CARGO_CFG_TARGET_POINTER_WIDTH=64
export CARGO_CFG_TARGET_ENV=gnu
export CARGO_CFG_UNIX
export CARGO_CFG_DEBUG_ASSERTIONS

As mentioned in the previous section, the linkage of the C standard library will be specified as a target feature, which is lowered to a cfg value, thus giving build scripts the ability to modify compilation options based on C standard library linkage. One important complication here is that cfg values in Rust may be defined multiple times, and this is the case with target features. When a cfg value is defined multiple times, Cargo will create a single environment variable with a comma-separated list of values.

So for a target with the following features enabled

target_feature="sse"
target_feature="crt-static"

Cargo would convert it to the following environment variable:

export CARGO_CFG_TARGET_FEATURE=sse,crt-static

Through this method build scripts will be able to learn how the C standard library is being linked. This is crucially important for the MSVC target where code needs to be compiled differently depending on how the C library is linked.

This feature ends up having the added benefit of informing build scripts about selected CPU features as well. For example once the target_feature #[cfg] is stabilized build scripts will know whether SSE/AVX/etc are enabled features for the C code they might be compiling.

After this change, the gcc-rs crate will be modified to check for the CARGO_CFG_TARGET_FEATURE environment variable and parse it into a list of enabled features. If the crt-static feature is not enabled, it will compile C code on the MSVC target with /MD, indicating dynamic linkage. Otherwise, if crt-static is enabled, it will compile code with /MT, indicating static linkage. Because today the MSVC targets use dynamic linkage and gcc-rs compiles C code with /MD, gcc-rs will remain forward and backwards compatible with existing and future Rust MSVC toolchains, until such time as the decision is made to change the MSVC toolchain to +crt-static by default.
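A sketch of a build script under this scheme (this is not gcc-rs’s actual code; only the CARGO_CFG_TARGET_FEATURE variable is part of the proposal, and the flag selection is illustrative):

// build.rs
use std::env;

fn main() {
    let features = env::var("CARGO_CFG_TARGET_FEATURE").unwrap_or_default();
    let crt_static = features.split(',').any(|f| f == "crt-static");

    // /MT statically links the CRT on MSVC; /MD links it dynamically.
    let crt_flag = if crt_static { "/MT" } else { "/MD" };
    println!("cargo:warning=selected CRT flag: {}", crt_flag);
}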

Lazy link attributes

The final feature that will be added to the compiler is the ability to “lazily” interpret the linkage requirements of a native library depending on values of cfg at compile time of downstream crates, not of the crate with the #[link] directives. This feature is never intended to be stabilized, and is instead targeted at being an unstable implementation detail of the libc crate linked to std (but not the stable libc crate deployed to crates.io).

Specifically, the #[link] attribute will be extended with a new argument that it accepts, cfg(..), such as:

#[link(name = "foo", cfg(bar))]

This cfg indicates to the compiler that the #[link] annotation only applies if the bar directive is matched. This interpretation is done not during compilation of the crate in which the #[link] directive appears, but during compilation of the crate in which linking is finally performed. The compiler will then use this knowledge in two ways:

  • When dllimport or dllexport needs to be applied, it will evaluate the final compilation unit’s #[cfg] directives and see if upstream #[link] directives apply or not.

  • When deciding what native libraries should be linked, the compiler will evaluate whether they should be linked or not depending on the final compilation’s #[cfg] directives and the upstream #[link] directives.

Customizing linkage to the C runtime

With the above features, the following changes will be made to select the linkage of the C runtime at compile time for downstream crates.

First, the libc crate will be modified to contain blocks along the lines of:

cfg_if! {
    if #[cfg(target_env = "musl")] {
        #[link(name = "c", cfg(target_feature = "crt-static"), kind = "static")]
        #[link(name = "c", cfg(not(target_feature = "crt-static")))]
        extern {}
    } else if #[cfg(target_env = "msvc")] {
        #[link(name = "msvcrt", cfg(not(target_feature = "crt-static")))]
        #[link(name = "libcmt", cfg(target_feature = "crt-static"))]
        extern {}
    } else {
        // ...
    }
}

This informs the compiler that, for the musl target, if the CRT is statically linked then the library named c is included statically in libc.rlib. If the CRT is linked dynamically, however, then the library named c will be linked dynamically. Similarly for MSVC, a static CRT implies linking to libcmt and a dynamic CRT implies linking to msvcrt (as we do today).

Finally, an example of compiling for MSVC and linking statically to the C runtime would look like:

set RUSTFLAGS=-C target-feature=+crt-static
cargo build --target x86_64-pc-windows-msvc

and similarly, compiling for musl but linking dynamically to the C runtime would look like:

RUSTFLAGS='-C target-feature=-crt-static' cargo build --target x86_64-unknown-linux-musl

Future work

The features proposed here are intended to be the absolute bare bones of support needed to configure how the C runtime is linked. A primary drawback, however, is that it’s somewhat cumbersome to select the non-default linkage of the CRT. Similarly, it’s cumbersome to select target CPU features which are not the default; the two situations are very similar. Eventually it’s intended that there will be a more ergonomic method than today’s RUSTFLAGS for informing the compiler and Cargo of all “compilation codegen options”.

Furthermore, it would arguably have been a “more correct” choice for Rust to link statically to the CRT on MSVC by default, rather than dynamically. While this would be a breaking change today due to how C components are compiled, if this RFC is implemented it should not be a breaking change to switch the defaults in the future, after a reasonable transition period.

The support in this RFC implies that the exact artifacts that we’re shipping will be usable for both dynamically and statically linking the CRT. Unfortunately, however, on MSVC code is compiled differently if it’s linking to a dynamic library or not. The standard library uses very little of the MSVCRT, so this won’t be a problem in practice for now, but runs the risk of binding our hands in the future. It’s intended, though, that Cargo will eventually support custom-compiling the standard library. The crt-static feature would simply be another input to this logic, so Cargo would custom-compile the standard library if it differed from the upstream artifacts, solving this problem.

References

  • [Issue about MSVCRT static linking](https://github.com/rust-lang/libc/issues/290)
  • [Issue about musl dynamic linking](https://github.com/rust-lang/rust/issues/34987)
  • [Discussion on issues around global codegen configuration](https://internals.rust-lang.org/t/pre-rfc-a-vision-for-platform-architecture-configuration-specific-apis/3502)
  • [std-aware Cargo RFC](https://github.com/rust-lang/libc/issues/290). A proposal to teach Cargo to build the standard library. Rebuilding of std will likely in the future be influenced by -C target-feature.
  • [Cargo’s documentation on build-script environment variables](https://github.com/rust-lang/libc/issues/290)

Drawbacks

  • Working with RUSTFLAGS can be cumbersome, but as explained above it’s planned that eventually there’s a much more ergonomic configuration method for other codegen options like target-cpu which would also encompass the linkage of the CRT.

  • Adding a feature which is intended never to be stable (#[link(.., cfg(..))]) is somewhat unfortunate but allows sidestepping some of the more thorny questions with how this works. The stable semantics will be that for some targets the -C target-feature=+crt-static flag affects the linkage of the CRT, which seems like a worthy goal regardless.

  • The lazy semantics of #[link(cfg(..))] are not so obvious from the name (no other cfg attribute is treated this way). But this seems a minor issue since the feature serves one implementation-specific purpose and isn’t intended for stabilization.

Alternatives

  • One alternative is to add entirely new targets, for example x86_64-pc-windows-msvc-static. Unfortunately though we don’t have a great naming convention for this, and it also isn’t extensible to other codegen options like target-cpu. Additionally, adding a new target is a pretty heavyweight solution as we’d have to start distributing new artifacts and such.

  • Another possibility would be to start storing metadata in the “target name” along the lines of x86_64-pc-windows-msvc+static. This is a pretty big design space, though, which may not play well with Cargo and build scripts, so for now it’s preferred to avoid this rabbit hole of design if possible.

  • Finally, the compiler could simply have an environment variable which indicates the CRT linkage. This would then be read by the compiler and by build scripts, and the compiler would have its own back channel for changing the linkage of the C library along the lines of #[link(.., cfg(..))] above.

  • Another approach has been proposed recently that has rustc define an environment variable to specify the C runtime kind.

  • Instead of extending the semantics of -C target-feature beyond “CPU features”, we could instead add a new flag for the purpose, e.g. -C custom-feature.

Unresolved questions

  • What happens during the cfg to environment variable conversion for values that contain commas? It’s an unusual corner case, and build scripts should not depend on such values, but it needs to be handled sanely.

  • Is it really true that lazy linking is only needed by std’s libc? What about in a world where we distribute more precompiled binaries than just std?

Summary

Add two functions, ptr::read_unaligned and ptr::write_unaligned, which allow reading from and writing to an unaligned pointer. All other functions that access memory (ptr::{read,write}, ptr::copy{_nonoverlapping}, etc) require that a pointer be suitably aligned for its type.

Motivation

One major use case is to make working with packed structs easier:

use std::ptr;

#[repr(packed)]
struct Packed(u8, u16, u8);

let mut a = Packed(0, 1, 0);
unsafe {
    // Take a raw pointer rather than a reference: references to fields of
    // packed structs must be aligned, so addr_of_mut! is used instead.
    let p = ptr::addr_of_mut!(a.1);
    let b = ptr::read_unaligned(p);
    ptr::write_unaligned(p, b + 1);
}

Other use cases generally involve parsing some file formats or network protocols that use unaligned values.
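A sketch of that use case: reading a little-endian u32 from an arbitrary (possibly unaligned) byte offset of a buffer (the helper name is invented for the example):

use std::ptr;

fn read_u32_le(buf: &[u8], offset: usize) -> u32 {
    assert!(offset + 4 <= buf.len());
    // Cast the byte pointer to *const u32; read_unaligned performs the
    // load without assuming 4-byte alignment.
    let raw = unsafe { ptr::read_unaligned(buf.as_ptr().add(offset) as *const u32) };
    u32::from_le(raw)
}

fn main() {
    let buf = [0xFFu8, 0x01, 0x00, 0x00, 0x00, 0xFF];
    assert_eq!(read_u32_le(&buf, 1), 1);
}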

Detailed design

The implementations of these functions are simple wrappers around ptr::copy_nonoverlapping. The pointers are cast to u8 to ensure that LLVM does not make any assumptions about the alignment.

pub unsafe fn read_unaligned<T>(p: *const T) -> T {
    // Copy size_of::<T>() bytes from `p` into an uninitialized local,
    // byte-wise as far as LLVM is concerned, then return it.
    let mut r = mem::uninitialized();
    ptr::copy_nonoverlapping(p as *const u8,
                             &mut r as *mut _ as *mut u8,
                             mem::size_of::<T>());
    r
}

pub unsafe fn write_unaligned<T>(p: *mut T, v: T) {
    // Copy the bytes of `v` to `p` without requiring `p` to be aligned.
    ptr::copy_nonoverlapping(&v as *const _ as *const u8,
                             p as *mut u8,
                             mem::size_of::<T>());
}

Drawbacks

These functions aren’t strictly necessary, since they are just convenience wrappers around ptr::copy_nonoverlapping.

Alternatives

We could simply not add these, however figuring out how to do unaligned access properly is extremely unintuitive: you need to cast the pointer to *mut u8 and then call ptr::copy_nonoverlapping.

Unresolved questions

None

Summary

A refinement of the Rust planning and reporting process, to establish a shared vision of the project among contributors, to make clear the roadmap toward that vision, and to celebrate our achievements.

Rust’s roadmap will be established in year-long cycles, where we identify up front - together, as a project - the most critical problems facing the language and its ecosystem, along with the story we want to be able to tell the world about Rust. Work toward solving those problems, our short-term goals, will be decided by the individual teams, as they see fit, and regularly re-triaged. For the purposes of reporting the project roadmap, goals will be assigned to release cycle milestones.

At the end of the year we will deliver a public facing retrospective, describing the goals we achieved and how to use the new features in detail. It will celebrate the year’s progress toward our goals, as well as the achievements of the wider community. It will evaluate our performance and anticipate its impact on the coming year.

The primary outcome for these changes to the process are that we will have a consistent way to:

  • Decide our project-wide goals through consensus.
  • Advertise our goals as a published roadmap.
  • Celebrate our achievements with an informative publicity-bomb.

Motivation

Rust is a massive project and ecosystem, developed by a massive team of mostly-independent contributors. What we’ve achieved together already is mind-blowing: we’ve created a uniquely powerful platform that solves problems that the computing world had nearly given up on, and jumpstarted a new era in systems programming. Now that Rust is out in the world, proving itself to be a stable foundation for building the next generation of computing systems, the possibilities open to us are nearly endless.

And that’s a big problem.

In the run-up to the release of Rust 1.0 we had a clear, singular goal: get Rust done and deliver it to the world. We established the discrete steps necessary to get there, and although it was a tense period where the entire future of the project was on the line, we were united in a single mission. As The Rust Project Developers we were pumped up, and our user base - along with the wider programming world - were excited to see what we would deliver.

But 1.0 is a unique event, and since then our efforts have become more diffuse even as the scope of our ambitions widen. This shift is inevitable: our success post-1.0 depends on making improvements in increasingly broad and complex ways. The downside, of course, is that a less singular focus can make it much harder to rally our efforts, to communicate a clear story - and ultimately, to ship.

Since 1.0, we’ve attempted to lay out some major goals, both through the internals forum and the blog. We’ve done pretty well in actually achieving these goals, and in some cases - particularly MIR - the community has really come together to produce amazing, focused results. But in general, there are several problems with the status quo:

  • We have not systematically tracked or communicated our progression through the completion of these goals, making it difficult for even the most immersed community members to know where things stand, and making it difficult for anyone to know how or where to get involved. A symptom is that questions like “When is MIR landing?” or “What are the blockers for `?` stabilizing?” become extremely frequently asked. We should provide an at-a-glance view of what Rust’s current strategic priorities are and how they are progressing.

  • We are overwhelmed by an avalanche of promising ideas, with major RFCs demanding attention (and languishing in the queue for months) while subteams focus on their strategic goals. This state of affairs produces needless friction and loss of momentum. We should agree on and disseminate our priorities, so we can all be pulling in roughly the same direction.

  • We do not have any single point of release, like 1.0, that gathers together a large body of community work into a single, polished product. Instead, we have a rapid release process, which results in a remarkably stable and reliable product but can paradoxically reduce pressure to ship new features in a timely fashion. We should find a balance, retaining rapid release but establishing some focal point around which to rally the community, polish a product, and establish a clear public narrative.

All told, there’s a lot of room to do better in establishing, communicating, and driving the vision for Rust.

This RFC proposes changes to the way The Rust Project plans its work, communicates and monitors its progress, directs contributors to focus on the strategic priorities of the project, and finally, delivers the results of its effort to the world.

The changes proposed here are intended to work with the particular strengths of our project - community development, collaboration, distributed teams, loose management structure, constant change and uncertainty. It should introduce minimal additional burden on Rust team members, who are already heavily overtasked. The proposal does not attempt to solve all problems of project management in Rust, nor to fit the Rust process into any particular project management structure. Let’s make a few incremental improvements that will have the greatest impact, and that we can accomplish without disruptive changes to the way we work today.

Detailed design

Rust’s roadmap will be established in year-long cycles, where we identify up front the most critical problems facing the project, formulated as problem statements. Work toward solving those problems, goals, will be planned as part of the release cycles by individual teams. For the purposes of reporting the project roadmap, goals will be assigned to release cycle milestones, which represent the primary work performed each release cycle. Along the way, teams will be expected to maintain tracking issues that communicate progress toward the project’s goals.

At the end of the year we will deliver a public facing retrospective, which is intended as a ‘rallying point’. Its primary purposes are to create anticipation of a major event in the Rust world, to motivate (rally) contributors behind the goals we’ve established to get there, and generate a big PR-bomb where we can brag to the world about what we’ve done. It can be thought of as a ‘state of the union’. This is where we tell Rust’s story, describe the new best practices enabled by the new features we’ve delivered, celebrate those contributors who helped achieve our goals, honestly evaluate our performance, and look forward to the year to come.

Summary of terminology

Key terminology used in this RFC:

  • problem statement - A description of a major issue facing Rust, possibly spanning multiple teams and disciplines. We decide these together, every year, so that everybody understands the direction the project is taking. These are used as the broad basis for decision making throughout the year, and are captured in the yearly “north star RFC”, and tagged R-problem-statement on the issue tracker.

  • goal - These are set by individual teams quarterly, in service of solving the problems identified by the project. They have estimated deadlines, and those that result in stable features have estimated release numbers. Goals may be subdivided into further discrete tasks on the issue tracker. They are tagged R-goal.

  • retrospective - At the end of the year we deliver a retrospective report. It presents the result of work toward each of our goals in a way that serves to reinforce the year’s narrative. These are written for public consumption, showing off new features, surfacing interesting technical details, and celebrating those who contribute to achieving the project’s goals and resolving its problems.

  • release cycle milestone - All goals have estimates for completion, placed on milestones that correspond to the 6 week release cycle. These milestones are timed to correspond to a release cycle, but don’t represent a specific release. That is, work toward the current nightly, the current beta, or even that doesn’t directly impact a specific release, all goes into the release cycle milestone corresponding to the time period in which the work is completed.

Problem statements and the north star RFC

The full planning cycle spans one year. At the beginning of the cycle we identify areas of Rust that need the most improvement, and at the end of the cycle is a ‘rallying point’ where we deliver to the world the results of our efforts. We choose year-long cycles because a year is enough time to accomplish relatively large goals; and because having the rallying point occur at the same time every year makes it easy to know when to anticipate big news from the project. Being calendar-based avoids the temptation to slip or produce feature-based releases, instead providing a fixed point of accountability for shipping.

This planning effort is problem-oriented. Focusing on “why” may seem like an obvious thing to do, but in practice it’s very easy to become enamored of particular technical ideas and lose sight of the larger context. By codifying a top-level focus on motivation, we ensure we are focusing on the right problems and keeping an open mind on how to solve them. Consensus on the problem space then frames the debate on solutions, helping to avoid surprises and hurt feelings, and establishing a strong causal record for explaining decisions in the future.

At the beginning of the cycle we spend no more than one month deciding on a small set of problem statements for the project, for the year. The number needs to be small enough to present to the community manageably, while also sufficiently motivating the primary work of all the teams for the year. 8-10 is a reasonable guideline. This planning takes place via the RFC process and is open to the entire community. The result of the process is the yearly ‘north star RFC’.

The problem statements established here determine the strategic direction of the project. They identify critical areas where the project is lacking and represent a public commitment to fixing them. They should be informed in part by inputs like the survey and production user outreach, as well as an open discussion process. And while the end-product is problem-focused, the discussion is likely to touch on possible solutions as well. We shouldn’t blindly commit to solving a problem without some sense for the plausibility of a solution in terms of both design and resources.

Problem statements consist of a single sentence summarizing the problem, and one or more paragraphs describing it (and its importance!) in detail. Examples of good problem statements might be:

  • The Rust compiler is too slow for a tight edit-compile-test cycle
  • Rust lacks world-class IDE support
  • The Rust story for asynchronous I/O is very primitive
  • Rust compiler errors are difficult to understand
  • Rust plugins have no clear path to stabilization
  • Rust doesn’t integrate well with garbage collectors
  • Rust’s trait system doesn’t fully support zero-cost abstractions
  • The Rust community is insufficiently diverse
  • Rust needs more training materials
  • Rust’s CI infrastructure is unstable
  • It’s too hard to obtain Rust for the platforms people want to target

During the actual process each of these would be accompanied by a paragraph or more of justification.

We strictly limit the planning phase to one month in order to keep the discussion focused and to avoid unrestrained bikeshedding. The activities specified here are not the focus of the project and we need to get through them efficiently and get on with the actual work.

The core team is responsible for initiating the process, either on the internals forum or directly on the RFC repository, and the core team is responsible for merging the final RFC, thus it will be their responsibility to ensure that the discussion drives to a reasonable conclusion in time for the deadline.

Once the year’s problem statements are decided, a metabug is created for each on the rust-lang/rust issue tracker and tagged R-problem-statement. In the OP of each metabug the teams are responsible for maintaining a list of their goals, linking to tracking issues.

Like other RFCs, the north star RFC is not immutable, and if new motivations arise during the year, it may be amended, even to the extent of adding additional problem statements; though it is not appropriate for the project to continually rehash the RFC.

Goal setting and tracking progress

During the regular 6-week release cycles is where the solutions take shape and are carried out. Each cycle teams are expected to set concrete goals that work toward solving the project’s stated problems; and to review and revise their previous goals. The exact forum and mechanism for doing this evaluation and goal-setting is left to the individual teams, and to future experimentation, but the end result is that each release cycle each team will document their goals and progress in a standard format.

A goal describes a task that contributes to solving the year’s problems. It may or may not involve a concrete deliverable, and it may be in turn subdivided into further goals. Not all the work items done by teams in a quarter should be considered a goal. Goals only need to be granular enough to demonstrate consistent progress toward solving the project’s problems. Work that contributes toward quarterly goals should still be tracked as sub-tasks of those goals, but only needs to be filed on the issue tracker and not reported directly as goals on the roadmap.

For each goal the teams will create an issue on the issue tracker tagged with R-goal. Each goal must be described in a single sentence summary with an end-result or deliverable that is as crisply stated as possible. Goals with sub-goals and sub-tasks must list them in the OP in a standard format.

During each cycle all R-goal and R-unstable issues assigned to each team must be triaged and updated for the following information:

  • The set of sub-goals and sub-tasks and their status
  • The release cycle milestone

Goals that will be likely completed in this cycle or the next should be assigned to the appropriate milestone. Some goals may be expected to be completed in the distant future, and these do not need to be assigned a milestone.

The release cycle milestone corresponds to a six week period of time and contains the work done during that time. It does not correspond to a specific release, nor do the goals assigned to it need to result in a stable feature landing in any specific release.

Release cycle milestones serve multiple purposes, not just tracking of the goals defined in this RFC: R-goal tracking, tracking of stabilization of R-unstable and R-RFC-approved features, tracking of critical bug fixes.

Though the release cycle milestones are time-oriented and are not strictly tied to a single upcoming release, from the set of assigned R-unstable issues one can derive the new features landing in upcoming releases.

During the last week of every release cycle each team will write a brief report summarizing their goal progress for the cycle. Some project member will compile all the team reports and post them to internals.rust-lang.org. In addition to providing visibility into progress, these will be sources to draw from for the subsequent release announcements.

The retrospective (rallying point)

The retrospective is an opportunity to showcase the best of Rust and its community to the world.

It is a report covering all the Rust activity of the past year. It is written for a broad audience: contributors, users and non-users alike. It reviews each of the problems we tackled this year and the goals we achieved toward solving them, and it highlights important work in the broader community and ecosystem. For both these things the retrospective provides technical detail, as though it were primary documentation; this is where we show our best side to the world. It explains new features in depth, with clear prose and plentiful examples, and it connects them all thematically, as a demonstration of how to write cutting-edge Rust code.

While we are always lavish with our praise of contributors, the retrospective is the best opportunity to celebrate specific individuals and their contributions toward the strategic interests of the project, as defined way back at the beginning of the year.

Finally, the retrospective is an opportunity to evaluate our performance. Did we make progress toward solving the problems we set out to solve? Did we outright solve any of them? Where did we fail to meet our goals and how might we do better next year?

Since the retrospective must be a high-quality document, and cover a lot of material, it is expected to require significant planning, editing and revision. The details of how this will work are to be determined.

Presenting the roadmap

As a result of this process the Rust roadmap for the year is encoded in three main ways, that evolve over the year:

  • The north-star RFC, which contains the problem statements collected in one place
  • The R-problem-statement issues, which contain the individual problem statements, each linking to supporting goals
  • The R-goal issues, which contain a hierarchy of work items, tagged with metadata indicating their statuses.

Alone, these provide the raw data for a roadmap. A user could run a GitHub query for all R-problem-statement issues, and by digging through them get a reasonably accurate picture of the roadmap.

However, for the process to be a success, we need to present the roadmap in a way that is prominent, succinct, and layered with progressive detail. There is a lot of opportunity for design here; an early prototype of one possible view is available here.

Again, the details are to be determined.

Calendar

The timing of the events specified by this RFC is precise in order to set clear expectations and accountability, and to avoid process slippage. The activities specified here are not the focus of the project, and we need to get through them efficiently and get on with the actual work.

The north star RFC development happens during the month of September, starting September 1 and ending by October 1. This means that an RFC must be ready for FCP by the last week of September. We choose September for two reasons: it is the final month of a calendar quarter, allowing the year's work to commence at the beginning of calendar Q4; and Q4 is the traditional conference season, giving us opportunities to talk publicly about both the previous year's progress and the next year's ambitions. By contrast, starting with Q1 of the calendar year is problematic due to the holiday season.

Following the September planning month, the quarterly planning cycles take place during exactly one week at the beginning of each calendar quarter, and the development of the yearly retrospective occupies approximately the month of August.

The survey and other forms of outreach and data gathering should be timed to fit well into the overall calendar.

References

  • [Refining RFCs part 1: Roadmap](https://internals.rust-lang.org/t/refining-rfcs-part-1-roadmap/3656), the internals.rust-lang.org thread that spawned this RFC.
  • [Post-1.0 priorities thread on internals.rust-lang.org](https://internals.rust-lang.org/t/priorities-after-1-0/1901).
  • [Post-1.0 blog post on project direction](https://blog.rust-lang.org/2015/08/14/Next-year.html).
  • [Blog post on MIR](https://blog.rust-lang.org/2016/04/19/MIR.html), a large success in strategic community collaboration.
  • [“Stability without stagnation”](http://blog.rust-lang.org/2014/10/30/Stability.html), outlining Rust’s philosophy on rapid iteration while maintaining strong stability guarantees.
  • [The 2016 state of Rust survey](https://blog.rust-lang.org/2016/06/30/State-of-Rust-Survey-2016.html), which indicates promising directions for future work.
  • [Production user outreach thread on internals.rust-lang.org](https://internals.rust-lang.org/t/production-user-research-summary/2530), another strong indicator of Rust’s needs.
  • [rust-z](https://brson.github.io/rust-z), a prototype tool to organize the roadmap.

Drawbacks

The yearly north star RFC could be an unpleasant bikeshed, because it simultaneously raises the stakes of discussion while moving away from concrete proposals. That said, the problem orientation should help facilitate discussion, and in any case it’s vital to be explicit about our values and prioritization.

While part of the aim of this proposal is to increase the effectiveness of our team, it also imposes some amount of additional work on everyone. Hopefully the benefits will outweigh the costs.

The end-of-year retrospective will require significant effort. It’s not clear who will be motivated to produce it at the level of quality it demands. This is the piece of the proposal that will probably need the most follow-up work.

Alternatives

Instead of imposing further process structure on teams we might attempt to derive a roadmap solely from the data they are currently producing.

To serve the purposes of a ‘rallying point’, a high-profile deliverable, we might release a software product instead of the retrospective. A larger-scope product than the existing rustc+cargo pair could accomplish this, i.e. The Rust Platform idea.

Another rallying point could be a long-term support release.

Unresolved questions

Are 1 year cycles long enough?

Are 1 year cycles too long? What happens if important problems come up mid-cycle?

Does the yearly report serve the purpose of building anticipation, motivation, and creating a compelling PR-bomb?

Is a consistent time-frame for the big cycle really the right thing? One of the problems we have right now is that our release cycles are so predictable they are almost boring. It could be more exciting to not know exactly when the cycle is going to end, to experience the tension of struggling to cross the finish line.

How can we account for work that is not part of the planning process described here?

How do we address problems that are outside the scope of the standard library and compiler itself? (See The Rust Platform for an alternative aimed at this goal.)

How do we motivate the improvement of rust-lang crates and other libraries? Are they part of the planning process? The retrospective?

‘Problem statement’ is not inspiring terminology. We don’t want our roadmap to be front-loaded with ‘problems’. Likewise, ‘goal’ and ‘retrospective’ could be more colorful.

Can we call the yearly RFC the ‘north star RFC’? Too many concepts?

What about tracking work that is not part of R-problem-statement and R-goal? I originally wanted to track all features in a roadmap, but this does not account for anything that has not been explicitly identified as supporting the roadmap. As formulated this proposal does not provide an easy way to find the status of arbitrary features in the RFC pipeline.

How do we present the roadmap? Communicating what the project is working on and toward is one of the primary goals of this RFC and the solution it proposes is minimal - read the R-problem-statement issues.

Summary

Traits can be aliased with the trait TraitAlias = …; construct. Currently, the right-hand side is a bound: a single trait, or a combination of traits and lifetimes joined with +. Type parameters and lifetimes can be added to the trait alias if needed.

Motivation

First motivation: impl

Sometimes, some traits are defined with parameters. For instance:

pub trait Foo<T> {
  // ...
}

It’s not uncommon to do that in generic crates and implement them in backend crates, where the T type parameter gets substituted with a backend type.

// in the backend crate
pub struct Backend;

impl Foo<Backend> for i32 {
  // ...
}

Users who want to use that crate will have to import both the trait Foo from the generic crate and the backend singleton type from the backend crate. Instead, we would like to be able to leave the backend singleton type hidden in the crate. The first shot would be to create a new trait for our backend:

pub trait FooBackend: Foo<Backend> {
  // ...
}

fn use_foo<A>(_: A) where A: FooBackend {}

If you try to pass an object that implements Foo<Backend>, that won’t work, because it doesn’t implement FooBackend. However, we can make it work with the following universal impl:

impl<T> FooBackend for T where T: Foo<Backend> {}

With that, it’s now possible to pass an object that implements Foo<Backend> to a function expecting a FooBackend. However, what about impl blocks? What happens if we implement only FooBackend? Well, we cannot, because the trait explicitly states that we need to implement Foo<Backend>. Here we hit a problem: even though Foo<Backend> and FooBackend are compatible at the trait-bound level, there is no such compatibility at the impl level. All we can do is implement Foo<Backend>, which also provides an implementation of FooBackend thanks to the universal implementation just above.
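
To make this concrete, here is a minimal, self-contained sketch of the pattern described so far (the describe method is hypothetical, added only so the example does something observable):

pub trait Foo<T> {
    fn describe(&self) -> String;
}

pub struct Backend;

pub trait FooBackend: Foo<Backend> {}

// The universal impl: anything implementing Foo<Backend> automatically
// implements FooBackend.
impl<T> FooBackend for T where T: Foo<Backend> {}

impl Foo<Backend> for i32 {
    fn describe(&self) -> String {
        format!("i32 backend value: {}", self)
    }
}

fn use_foo<A: FooBackend>(a: A) -> String {
    a.describe()
}

fn main() {
    // Works because i32 implements Foo<Backend>, hence FooBackend.
    assert_eq!(use_foo(3), "i32 backend value: 3");
}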

Second example: ergonomic collections and scrapping boilerplate

Another example is associated types. Take the following trait from tokio:

pub trait Service {
  type Request;
  type Response;
  type Error;
  type Future: Future<Item=Self::Response, Error=Self::Error>;
  fn call(&self, req: Self::Request) -> Self::Future;
}

It would be nice to be able to create a few aliases to remove boilerplate for very common combinations of associated types with Service.

trait HttpService = Service<Request = http::Request, Response = http::Response, Error = http::Error>;

The alias above describes an HTTP service in which only the associated type Future is left to be specified. Such an alias would be very appealing because it would remove the need to copy the whole Service bound into use sites – trait bounds, or even trait impls. Scrapping such annoying boilerplate is a definite plus for the language and might be one of the most interesting use cases.

Detailed design

Syntax

Declaration

The syntax chosen to declare a trait alias is:

trait TraitAlias = Trait;

Trait aliasing to combinations of traits is also provided with the standard + construct:

trait DebugDefault = Debug + Default;

Optionally, if needed, one can provide a where clause to express bounds:

trait DebugDefault = Debug where Self: Default; // same as the example above

Furthermore, it’s possible to use only the where clause by leaving the list of traits empty:

trait DebugDefault = where Self: Debug + Default;

It’s also possible to partially bind associated types of the right hand side:

trait IntoIntIterator = IntoIterator<Item=i32>;

This would leave IntoIntIterator with a free parameter, IntoIter, and it can be bound the same way associated types are bound with regular traits:

fn foo<I>(int_iter: I) where I: IntoIntIterator<IntoIter = std::vec::IntoIter<i32>> {}

A trait alias can be parameterized over types and lifetimes, just like traits themselves:

trait LifetimeParametric<'a> = Iterator<Item=Cow<'a, [i32]>>;

trait TypeParametric<T> = Iterator<Item=Cow<'static, [T]>>;

Specifically, the grammar being added is, in informal notation:

ATTRIBUTE* VISIBILITY? trait IDENTIFIER(<GENERIC_PARAMS>)? = GENERIC_BOUNDS (where PREDICATES)?;

GENERIC_BOUNDS is a list of zero or more traits and lifetimes separated by +, the same as the current syntax for bounds on a type parameter, and PREDICATES is a comma-separated list of zero or more predicates, just like any other where clause. GENERIC_PARAMS is a comma-separated list of zero or more lifetime and type parameters, with optional bounds, just like other generic definitions.
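
For illustration, here is one purely hypothetical alias exercising every production of that grammar, written in the proposed syntax:

#[allow(unused)]                       // ATTRIBUTE*
pub                                    // VISIBILITY?
trait ShortIter<'a, T: 'a>             // trait IDENTIFIER<GENERIC_PARAMS>
    = Iterator<Item = &'a T> + Clone   // = GENERIC_BOUNDS
    where Self: Sized;                 // (where PREDICATES)?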

Use semantics

You cannot directly impl a trait alias, but you can have them as bounds, trait objects and impl Trait.


It is an error to attempt to override a previously specified equivalence constraint with a non-equivalent type. For example:

trait SharableIterator = Iterator + Sync;
trait IntIterator = Iterator<Item=i32>;

fn quux1<T: SharableIterator<Item=f64>>(...) { ... } // ok
fn quux2<T: IntIterator<Item=i32>>(...) { ... } // ok (perhaps subject to lint warning)
fn quux3<T: IntIterator<Item=f64>>(...) { ... } // ERROR: `Item` already constrained

trait FloIterator = IntIterator<Item=f64>; // ERROR: `Item` already constrained

When using a trait alias as a trait object, it is subject to object safety restrictions after substituting the aliased traits. This means:

  1. it contains an object safe trait, optionally a lifetime, and zero or more of these other bounds: Send, Sync (that is, trait Show = Display + Debug; would not be object safe);
  2. all the associated types of the trait need to be specified;
  3. the where clause, if present, only contains bounds on Self.

Some examples:

trait Sink = Sync;
trait ShareableIterator = Iterator + Sync;
trait PrintableIterator = Iterator<Item=i32> + Display;
trait IntIterator = Iterator<Item=i32>;

fn foo1<T: ShareableIterator>(...) { ... } // ok
fn foo2<T: ShareableIterator<Item=i32>>(...) { ... } // ok
fn bar1(x: Box<ShareableIterator>) { ... } // ERROR: associated type not specified
fn bar2(x: Box<ShareableIterator<Item=i32>>) { ... } // ok
fn bar3(x: Box<PrintableIterator>) { ... } // ERROR: too many traits (*)
fn bar4(x: Box<IntIterator + Sink + 'static>) { ... } // ok (*)

The lines marked with (*) assume that #24010 is fixed.

Ambiguous constraints

If there are multiple associated types with the same name in a trait alias, then it is a static error (“ambiguous associated type”) to attempt to constrain that associated type via the trait alias. For example:

trait Foo { type Assoc; }
trait Bar { type Assoc; } // same name!

// This works:
trait FooBar1 = Foo<Assoc = String> + Bar<Assoc = i32>;

// This does not work:
trait FooBar2 = Foo + Bar;
fn badness<T: FooBar2<Assoc = String>>() { } // ERROR: ambiguous associated type

// Here are ways to workaround the above error:
fn better1<T: FooBar2 + Foo<Assoc = String>>() { } // (leaves Bar::Assoc unconstrained)
fn better2<T: FooBar2 + Foo<Assoc = String> + Bar<Assoc = i32>>() { } // constrains both

Teaching

Traits are obviously a huge prerequisite. Trait aliases could be introduced at the end of that chapter.

Conceptually, a trait alias is a syntactic shortcut for referring to one or more traits. Inherently, the trait alias is usable in a limited set of places:

  • as a bound: exactly like a trait, a trait alias can be used to constrain a type (in a type parameter list or a where clause)
  • as a trait object: just as with a trait, a trait alias can be used as a trait object if it meets the object safety restrictions (see the semantics section above)
  • in an impl Trait

Examples should be shown for all three cases above:

As a bound

trait StringIterator = Iterator<Item=String>;

fn iterate<SI>(si: SI) where SI: StringIterator {} // used as bound

As a trait object

fn iterate_object(si: &StringIterator) {} // used as trait object

In an impl Trait

fn string_iterator_debug() -> impl Debug + StringIterator {} // used in an impl Trait

As shown above, a trait alias can bind associated types. It doesn’t have to bind them all; in that case, the trait alias is left incomplete, and the remaining associated types must be supplied at the use site. Example with the tokio case:

pub trait Service {
  type Request;
  type Response;
  type Error;
  type Future: Future<Item=Self::Response, Error=Self::Error>;
  fn call(&self, req: Self::Request) -> Self::Future;
}

trait HttpService = Service<Request = http::Request, Response = http::Response, Error = http::Error>;

trait MyHttpService = HttpService<Future = MyFuture>; // assume MyFuture exists and fulfills the rules to be used here

Drawbacks

  • Adds another construct to the language.

  • The syntax trait TraitAlias = Trait requires lookahead in the parser to disambiguate a trait from a trait alias.

Alternatives

Should we use type as the keyword instead of trait?

type Foo = Bar; already creates an alias Foo that can be used as a trait object.

If we used type for the keyword, this would imply that Foo could also be used as a bound. If we use trait as proposed in the body of the RFC, then type Foo = Bar; and trait Foo = Bar; both create an alias for the object type, but only the latter creates an alias that can be used as a bound, which is a confusing bit of redundancy.

However, using type this way mixes the concepts of types and traits, which are different, and allows nonsense like type Foo = Rc<i32> + f32; to parse.
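
For reference, a minimal example of the existing behaviour (written with the dyn syntax for trait objects): a type alias can already name a trait object type, but it cannot appear as a bound:

use std::fmt::Debug;

// A type alias can name a trait object type...
type DebugObj = dyn Debug;

fn print_it(x: &DebugObj) {
    println!("{:?}", x);
}

// ...but it cannot be used as a bound: `fn f<T: DebugObj>() {}` is rejected.

fn main() {
    print_it(&42);
}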

Supertraits & universal impl

It’s possible to create a new trait that derives the trait to alias, and provide a universal impl:

trait Foo {}

trait FooFakeAlias: Foo {}

impl<T> Foo for T where T: FooFakeAlias {}

This works for trait objects and trait bounds only. You cannot implement FooFakeAlias directly because you need to implement Foo first – hence, you don’t really need FooFakeAlias if you can implement Foo.

There’s currently no alternative to the impl problem described here.

ConstraintKinds

Similar to GHC’s ConstraintKinds, we could declare an entire predicate as a reified list of constraints, instead of creating an alias for a set of supertraits and predicates. Syntax would be something like constraint Foo<T> = T: Bar, Vec<T>: Baz;, used as fn quux<T>(...) where Foo<T> { ... } (i.e. direct substitution). Trait object usage is unclear.
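
Laid out as a block, the sketch from the paragraph above would read (hypothetical syntax, not valid Rust):

// Hypothetical ConstraintKinds-style declaration.
constraint Foo<T> = T: Bar, Vec<T>: Baz;

// Used by direct substitution into a where clause; both T: Bar and
// Vec<T>: Baz would hold inside the body.
fn quux<T>(t: T) where Foo<T> { /* ... */ }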

Syntax for sole where clause

The current RFC specifies that it is possible to use only the where clause by leaving the list of traits empty:

trait DebugDefault = where Self: Debug + Default;

This is one of many possible syntaxes for this construct.

Unresolved questions

Trait alias containing only lifetimes

This is annoying. Consider:

trait Static = 'static;

fn foo<T>(t: T) where T: Static {}

Such an alias is legitimate. However, I’m concerned about the actual meaning of the declaration – using the trait keyword to define an alias for a lifetime seems like a wrong design choice and not a very consistent one.

If we chose another keyword, like constraint, I feel less concerned and it would open further opportunities – see the ConstraintKinds alternative discussion above.

Which bounds need to be repeated when using a trait alias?

RFC 1927 intends to change the rules here for traits, and we likely want to have the rules for trait aliases be the same to avoid confusion.

The constraint alternative sidesteps this issue.

What about bounds on type variable declaration in the trait alias?

trait Foo<T: Bar> = PartialEq<T>;

PartialEq has no super-trait Bar, but we’re adding one via our trait alias. What is the behavior of such a feature? One possible desugaring is:

trait Foo<T> = where Self: PartialEq<T>, T: Bar;

Issue 21903 explains the same problem for type aliasing.

Note: consider also the following proposal.

When using a trait alias as a bound, you cannot add an extra bound on the input parameters, as in the following:

trait Foo<T: Bar> = PartialEq<T>;

Here, T adds a Bar bound. Now consider:

trait Bar<T> = PartialEq<T: Bar>;

Currently, we don’t have a proper understanding of that situation, because in both cases we’re adding a bound, and we don’t know how to disambiguate between pre-condition and implication. That is, is the added Bar bound a constraint that T must fulfil in order for the trait alias to be met, or is it a constraint the trait alias itself adds? To disambiguate, consider:

trait BarPrecond<T> where T: Bar = PartialEq<T>;
trait BarImplic<T> = PartialEq<T> where T: Bar;
trait BarImpossible<T> where T: Bar = PartialEq<T> where T: Bar;

BarPrecond would require the use-site code to fulfil the constraint, like the following:

fn foo<A, T>() where A: BarPrecond<T>, T: Bar {}

BarImplic would give us T: Bar:

fn foo<A, T>() where A: BarImplic<T> {
  // T: Bar because given by BarImplic<T>
}

BarImpossible wouldn’t compile because we try to express a pre-condition and an implication for the same bound at the same time. However, it’d be possible to have both a pre-condition and an implication on a parameter:

trait BarBoth<T> where T: Bar = PartialEq<T> where T: Debug;

fn foo<A, T>() where A: BarBoth<T>, T: Bar {
  // T: Debug because given by BarBoth
}
  • Feature Name: repr_transparent
  • Start Date: 2016-09-26
  • RFC PR: rust-lang/rfcs#1758
  • Rust Issue: https://github.com/rust-lang/rust/issues/43036

Summary

Extend the existing #[repr] attribute on newtypes with a transparent option specifying that the type representation is the representation of its only field. This matters in FFI context where struct Foo(T) might not behave the same as T.

Motivation

On some ABIs, structures with one field aren’t handled the same way as values of the same type as the single field. For example on ARM64, functions returning a structure with a single f64 field return nothing and take a pointer to be filled with the return value, whereas functions returning an f64 return the floating-point number directly.

This means that if someone wants to wrap an f64 value in a tuple struct and use that wrapper as the return type of an FFI function that actually returns a bare f64, calls to this function will be compiled incorrectly by Rust and the program will segfault.

This also means that UnsafeCell<T> cannot be soundly used in place of a bare T in FFI context, which might be necessary to signal to the Rust side of things that this T value may unexpectedly be mutated.

// The value is returned directly in a floating-point register on ARM64.
double do_something_and_return_a_double(void);
mod bogus {
    #[repr(C)]
    struct FancyWrapper(f64);

    extern {
        // Incorrect: the wrapped value on ARM64 is indirectly returned and the
        // function takes a pointer to where the return value must be stored.
        fn do_something_and_return_a_double() -> FancyWrapper;
    }
}

mod correct {
    #[repr(transparent)]
    struct FancyWrapper(f64);

    extern {
        // Correct: FancyWrapper is handled exactly the same as f64 on all
        // platforms.
        fn do_something_and_return_a_double() -> FancyWrapper;
    }
}

Given this attribute delegates all representation concerns, no other repr attribute should be present on the type. This means the following definitions are illegal:

#[repr(transparent, align = "128")]
struct BogusAlign(f64);

#[repr(transparent, packed)]
struct BogusPacked(f64);

Detailed design

The #[repr] attribute on newtypes will be extended to include a form such as:

#[repr(transparent)]
struct TransparentNewtype(f64);

This structure will still have the same representation as a raw f64 value.

Syntactically, the repr meta list will be extended to accept a meta item with the name “transparent”. This attribute can be placed on newtypes, i.e. structures (including tuple structs) with a single field, and on structures that are logically equivalent to a newtype, i.e. structures with multiple fields where only a single one of them has a non-zero size.

Some examples of #[repr(transparent)] are:

// Transparent struct tuple.
#[repr(transparent)]
struct TransparentStructTuple(i32);

// Transparent structure.
#[repr(transparent)]
struct TransparentStructure { only_field: f64 }

// Transparent struct wrapper with a marker.
#[repr(transparent)]
struct TransparentWrapper<T> {
    only_non_zero_sized_field: f64,
    marker: PhantomData<T>,
}

This new representation is mostly useful when the structure it is put on must be used in an FFI context as a wrapper around the underlying type without actually being affected by any ABI semantics.

It is also useful for AtomicUsize-like types, which RFC 1649 states should have the same representation as their underlying types.
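
As a hedged illustration of that use case, a wrapper along the following lines (Counter is a hypothetical type) could rely on the attribute to share the representation of AtomicUsize:

use std::sync::atomic::{AtomicUsize, Ordering};

// #[repr(transparent)] guarantees Counter is represented exactly like its
// single field, AtomicUsize.
#[repr(transparent)]
pub struct Counter(AtomicUsize);

impl Counter {
    pub fn new() -> Counter {
        Counter(AtomicUsize::new(0))
    }

    pub fn bump(&self) -> usize {
        self.0.fetch_add(1, Ordering::Relaxed)
    }
}

fn main() {
    let c = Counter::new();
    c.bump();
    // fetch_add returns the previous value, so the second bump returns 1.
    assert_eq!(c.bump(), 1);
}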

This new representation cannot be used with any other representation attribute:

#[repr(transparent, align = "128")]
struct BogusAlign(f64); // Error, must be aligned like the underlying type.

#[repr(C, transparent)]
struct BogusRepr(f64); // Error, repr cannot be C and transparent.

As a matter of optimisation, eligible #[repr(Rust)] structs behave as if they were #[repr(transparent)], but only as an implementation detail that can’t be relied upon by users.

struct ImplicitlyTransparentWrapper(f64);

#[repr(C)]
struct BogusRepr {
    // While ImplicitlyTransparentWrapper implicitly has the same representation
    // as f64, this will fail to compile because ImplicitlyTransparentWrapper
    // has no explicit transparent or C representation.
    wrapper: ImplicitlyTransparentWrapper,
}

The representation of a transparent wrapper is the representation of its only non-zero-sized field, transitively:

#[repr(transparent)]
struct Transparent<T>(T);

#[repr(transparent)]
struct F64(f64);

#[repr(C)]
struct C(usize);

type TransparentF64 = Transparent<F64>; // Behaves as f64.

type TransparentString = Transparent<String>; // Representation is Rust.

type TransparentC = Transparent<C>; // Representation is C.

type TransparentTransparentC = Transparent<Transparent<C>>; // Transitively C.

Coercions and casting between the transparent wrapper and the type of its non-zero-sized field are forbidden.

Drawbacks

None.

Alternatives

The only alternative to such a construct for FFI purposes is to use the exact same types as specified in the C header (or wherever the FFI types come from) and to make additional wrappers for them in Rust. This does not help if a field using interior mutability (i.e. UnsafeCell<T>) has to be passed to the FFI side, so this alternative does not actually cover all the use cases allowed by #[repr(transparent)].

Unresolved questions

  • None

Summary

This RFC proposes the 2017 Rust Roadmap, in accordance with RFC 1728. The goal of the roadmap is to lay out a vision for where the Rust project should be in a year’s time. This year’s focus is improving Rust’s productivity, while retaining its emphasis on fast, reliable code. At a high level, by the end of 2017:

  • Rust should have a lower learning curve
  • Rust should have a pleasant edit-compile-debug cycle
  • Rust should provide a solid, but basic IDE experience
  • Rust should provide easy access to high quality crates
  • Rust should be well-equipped for writing robust, high-scale servers
  • Rust should have 1.0-level crates for essential tasks
  • Rust should integrate easily into large build systems
  • Rust’s community should provide mentoring at all levels

In addition, we should make significant strides in exploring two areas where we’re not quite ready to set out specific goals:

  • Integration with other languages, running the gamut from C to JavaScript
  • Usage in resource-constrained environments

The proposal is based on the 2016 survey, systematic outreach, direct conversations with individual Rust users, and an extensive internals thread. Thanks to everyone who helped with this effort!

Motivation

There’s no end of possible improvements to Rust—so what do we use to guide our thinking?

The core team has tended to view our strategy not in terms of particular features or aesthetic goals, but instead in terms of making Rust successful while staying true to its core values. This basic sentiment underlies much of the proposed roadmap, so let’s unpack it a bit.

Making Rust successful

The measure of success

What does it mean for Rust to be successful? There are a lot of good answers to this question, a lot of different things that draw people to use or contribute to Rust. But regardless of our personal values, there’s at least one clear measure for Rust’s broad success: people should be using Rust in production and reaping clear benefits from doing so.

  • Production use matters for the obvious reason: it grows the set of stakeholders with potential to invest in the language and ecosystem. To deliver on that potential, Rust needs to be part of the backbone of some major products.

  • Production use measures our design success; it’s the ultimate reality check. Rust takes a unique stance on a number of tradeoffs, which we believe to position it well for writing fast and reliable software. The real test of those beliefs is people using Rust to build large, production systems, on which they’re betting time and money.

  • The kind of production use matters. For Rust to truly be a success, there should be clear-cut reasons people are employing it rather than another language. Rust needs to provide crisp, standout benefits to the organizations using it.

The idea here is not about “taking over the world” with Rust; it’s not about market share for the sake of market share. But if Rust is truly delivering a valuable new way of programming, we should be seeing that benefit in “the real world”, in production uses that are significant enough to help sustain Rust’s development.

That’s not to say we should expect to see this usage immediately; there’s a long pipeline for technology adoption, so the effects of our work can take a while to appear. The framing here is about our long-term aims. We should be making investments in Rust today that will position it well for this kind of success in the future.

The obstacles to success

At this point, we have a fair amount of data about how Rust is reaching its audience, through the 2016 survey, informal conversations, and explicit outreach to (pre-)production shops (writeup coming soon). The data from the survey is generally corroborated by these other venues, so let’s focus on that.

We asked both current and potential users what most stands in the way of their using Rust, and got some pretty clear answers:

  • 1 in 4: learning curve
  • 1 in 7: lack of libraries
  • 1 in 9: general “maturity” concerns
  • 1 in 19: lack of IDEs (1 in 4 non-users)
  • 1 in 20: compiler performance

None of these obstacles is directly about the core language or std; people are generally happy with what the language offers today. Instead, the connecting theme is productivity—how quickly can I start writing real code? bring up a team? prototype and iterate? debug my code? And so on.

In other words, our primary challenge isn’t making Rust “better” in the abstract; it’s making people productive with Rust. The need is most pronounced in the early stages of Rust learning, where we risk losing a large pool of interested people if we can’t get them over the hump. Evidence from the survey and elsewhere suggests that once people do get over the initial learning curve, they tend to stick around.

So how do we pull it off?

Core values

Part of what makes Rust so exciting is that it attempts to eliminate some seemingly fundamental tradeoffs. The central such tradeoff is between safety and speed. Rust strives for

  • uncompromising reliability
  • uncompromising performance

and delivers on this goal largely thanks to its fundamental concept of ownership.

But there’s a problem: at first glance, “productivity” and “learnability” may seem at odds with Rust’s core goals. It’s common to hear the refrain that “fighting with the borrow checker” is a rite of passage for Rustaceans. Or that removing papercuts would mean glossing over safety holes or performance cliffs.

To be sure, there are tradeoffs here. But as above, if there’s one thing the Rust community knows how to do, it’s bending the curve around tradeoffs—memory safety without garbage collection, concurrency without data races, and all the rest. We have many examples in the language where we’ve managed to make a feature pleasant to use, while also providing maximum performance and safety—closures are a particularly good example, but there are others.

And of course, beyond the core language, “productivity” also depends a lot on tooling and the ecosystem. Cargo is one example where Rust’s tooling provides a huge productivity boost, and we’ve been working hard on other aspects of tooling, like the compiler’s error messages, that likewise have a big impact on productivity. There’s so much more we can be doing in this space.

In short, productivity should be a core value of Rust. By the end of 2017, let’s try to earn the slogan:

  • Rust: fast, reliable, productive—pick three.

Detailed design

Overall strategy

In the abstract, reaching the kind of adoption we need means bringing people along a series of distinct steps:

  • Public perception of Rust
  • First contact
  • Early play, toy projects
  • Public projects
  • Personal investment
  • Professional investment

We need to (1) provide “drivers”, i.e. strong motivation to continue through the stages and (2) avoid “blockers” that prevent people from progressing.

At the moment, our most immediate adoption obstacles are mostly about blockers, rather than a lack of drivers: there are people who see potential value in Rust, but worry about issues like productivity, tooling, and maturity standing in the way of use at scale. The roadmap proposes a set of goals largely angled at reducing these blockers.

However, for Rust to make sense to use in a significant way in production, it also needs to have a “complete story” for one or more domains of use. The goals call out a specific domain where we are already seeing promising production use, and where we have a relatively clear path toward a more complete story.

Almost all of the goals focus squarely on “productivity” of one kind or another.

Goals

Now to the meat of the roadmap: the goals. Each is phrased in terms of a qualitative vision, trying to carve out what the experience of Rust should be in one year’s time. The details mention some possible avenues toward a solution, but this shouldn’t be taken as prescriptive.

These goals are partly informed from the internals thread about the roadmap. That thread also posed a number of possible additional goals. Of course, part of the work of the roadmap is to allocate our limited resources, which fundamentally means not including some possible goals. Some of the most promising suggestions that didn’t make it into the roadmap proposal itself are included in the Alternatives section.

Rust should have a lower learning curve

Rust offers a unique value proposition in part because it offers a unique feature: its ownership model. Because the concept is not (yet!) a widespread one in other languages, it is something most people have to learn from scratch before hitting their stride with Rust. And that often comes on top of other aspects of Rust that may be less familiar. A common refrain is “the first couple of weeks are tough, but it’s oh so worth it.” How many people are bouncing off of Rust in those first couple of weeks? How many team leads are reluctant to introduce Rust because of the training needed? (1 in 4 survey respondents mentioned the learning curve.)

Here are some strategies we might take to lower the learning curve:

  • Improved docs. While the existing Rust book has been successful, we’ve learned a lot about teaching Rust, and there’s a rewrite in the works. The effort is laser-focused on the key areas that trip people up today (ownership, modules, strings, errors).

  • Gathering cookbooks, examples, and patterns. One way to quickly get productive in a language is to work from a large set of examples and known-good patterns that can guide your early work. As a community, we could push crates to include more substantial example code snippets, and organize efforts around design patterns and cookbooks. (See the commentary on the RFC thread for much more detail.)

  • Improved errors. We’ve already made some big strides here, particularly for ownership-related errors, but there’s surely more room for improvement.

  • Improved language features. There are a couple of ways that the language design itself can be oriented toward learnability. First, we can introduce new features with an explicit eye toward how they will be taught. Second, we can improve existing features to make them easier to understand and use – non-lexical lifetimes being a major example. There’s already been some discussion on internals.

  • IDEs and other tooling. IDEs provide a good opportunity for deeper teaching. An IDE can visualize errors, for example showing you the lifetime of a borrow. They can also provide deeper inspection of what’s going on with things like method dispatch, type inference, and so on.

Rust should have a pleasant edit-compile-debug cycle

The edit-compile-debug cycle in Rust takes too long, and it’s one of the complaints we hear most often from production users. We’ve laid down a good foundation with MIR (now turned on by default) and incremental compilation (which recently hit alpha). But we need to continue pushing hard to actually deliver the improvements. And to fully address the problem, the improvement needs to apply to large Rust projects, not just small or mid-sized benchmarks.

To get this done, we’re also going to need further improvements to the performance monitoring infrastructure, including more benchmarks. Note, though, that the goal is stated qualitatively, and we need to be careful with what we measure to ensure we don’t lose sight of that goal.

While the most obvious routes are direct improvements like incremental compilation, since the focus here is primarily on development (including debugging), another promising avenue is more usable debug builds. Production users often say “debug binaries are too slow to run, but release binaries are too slow to build”. There may be a lot of room in the middle.

Depending on how far we want to take IDE support (see below), pushing incremental compilation up through the earliest stages of the compiler may also be important.

Rust should provide a solid, but basic IDE experience

For many people—even whole organizations—IDEs are an essential part of the programming workflow. In the survey, 1 in 4 respondents mentioned requiring IDE support before using Rust seriously. Tools like Racer and the IntelliJ Rust plugin have made great progress this year, but compiler integration is in its infancy, which limits the kinds of tools that general IDE plugins can provide.

The problem statement here says “solid, but basic” rather than “world-class” IDE support to set realistic expectations for what we can get done this year. Of course, the precise contours will need to be driven by implementation work, but we can enumerate some basic constraints for such an IDE here:

  • It should be reliable: it shouldn’t crash, destroy work, or give inaccurate results in situations that demand precision (like refactorings).
  • It should be responsive: the interface should never hang waiting on the compiler or other computation. In places where waiting is required, the interface should update as smoothly as possible, while providing responsiveness throughout.
  • It should provide basic functionality. At a minimum, that’s: syntax highlighting, basic code navigation (e.g. go-to-definition), code completion, build support (with Cargo integration), error integration, and code formatting.

Note that while some of this functionality is available in existing IDE/plugin efforts, a key part of this initiative is to (1) lay the foundation for plugins based on compiler integration (2) pull together existing tools into a single service that can integrate with multiple IDEs.

Rust should provide easy access to high quality crates

Another major message from the survey and elsewhere is that Rust’s ecosystem, while growing, is still immature (1 in 9 survey respondents mentioned this). Maturity is not something we can rush. But there are steps we can take across the ecosystem to help improve the quality and discoverability of crates, both of which will help increase the overall sense of maturity.

Some avenues for quality improvement:

  • Provide stable, extensible test/bench frameworks.
  • Provide more push-button CI setup, e.g. have cargo new set up Travis/Appveyor.
  • Restart the API guidelines project.
  • Use badges on crates.io to signal various quality metrics.
  • Perform API reviews on important crates.

Some avenues for discoverability improvement:

  • Adding categories to crates.io, making it possible to browse lists like “crates for parsing”.
  • More sophisticated ranking and/or curation.

A number of ideas along these lines were discussed in the Rust Platform thread.

Rust should be well-equipped for writing robust, high-scale servers

The biggest area we’ve seen with interest in production Rust so far is the server, particularly in cases where high-scale performance, control, and/or reliability are paramount. At the moment, our ecosystem in this space is nascent, and production users are having to build a lot from scratch.

Of the specific domains we might target for having a more complete story, Rust on the server is the place with the clearest direction and momentum. In a year’s time, it’s within reach to drastically improve Rust’s server ecosystem and the overall experience of writing server code. The relevant pieces here include foundations for async IO, language improvements for async code ergonomics, shared infrastructure for writing services (including abstractions for implementing protocols and middleware), and endless interfaces to existing services/protocols.

There are two reasons to focus on the robust, high-scale case. Most importantly, it’s the place where Rust has the clearest value proposition relative to other languages, and hence the place where we’re likeliest to achieve significant, quality production usage (as discussed earlier in the RFC). More generally, the overall server space is huge, so choosing a particular niche provides essential focus for our efforts.

Rust should have 1.0-level crates for essential tasks

Rust has taken a decidedly lean approach to its standard library, preferring for much of the typical “batteries included” functionality to live externally in the crates.io ecosystem. While there are a lot of benefits to that approach, it’s important that we do in fact provide the batteries somewhere: we need 1.0-level functionality for essential tasks. To pick just one example, the rand crate has suffered from a lack of vision and has effectively stalled before reaching 1.0 maturity, despite its central importance for a non-trivial part of the ecosystem.

There are two basic strategies we might take to close these gaps.

The first is to identify a broad set of “essential tasks” by, for example, finding the commonalities between large “batteries included” standard libraries, and focus community efforts on bolstering crates in these areas. With sustained and systematic effort, we can probably help push a number of these crates to 1.0 maturity this year.

A second strategy is to focus specifically on tasks that play to Rust’s strengths. For example, Rust’s potential for fearless concurrency across a range of paradigms is one of the most unique and exciting aspects of the language. But we aren’t fully delivering on this potential, due to the immaturity of libraries in the space. The response to work in this space, like the recent futures library announcement, suggests that there is a lot of pent-up demand and excitement, and that this kind of work can open a lot of doors for Rust. So concurrency/asynchrony/parallelism is one segment of the ecosystem that likely deserves particular focus (and feeds into the high-scale server goal as well); there are likely others.

Rust should integrate easily into large build systems

When working with larger organizations interested in using Rust, one of the first hurdles we tend to run into is fitting into an existing build system. We’ve been exploring a number of different approaches, each of which ends up using Cargo (and sometimes rustc) in different ways, with different stories about how to incorporate crates from the broader crates.io ecosystem. Part of the issue seems to be a perceived overlap between functionality in Cargo (and its notion of compilation unit) and in ambient build systems, but we have yet to truly get to the bottom of the issues—and it may be that the problem is one of communication, rather than of some technical gap.

By the end of 2017, this kind of integration should be easy: as a community, we should have a strong understanding of best practices, and potentially build tooling in support of those practices. And of course, we want to approach this goal with Rust’s values in mind, ensuring that first-class access to the crates.io ecosystem is a cornerstone of our eventual story.

Rust’s community should provide mentoring at all levels

The Rust community is awesome, in large part because of how welcoming it is. But we could do a lot more to help grow people into roles in the project, including pulling together important work items at all levels of expertise to direct people to, providing mentoring, and having a clearer on-ramp to the various official Rust teams. Outreach and mentoring is also one of the best avenues for increasing diversity in the project, which, as the survey demonstrates, has a lot of room for improvement.

While there’s work here for all the teams, the community team in particular will continue to focus on early-stage outreach, while other teams will focus on leadership onboarding.

Areas of exploration

The goals above represent the steps we think are most essential to Rust’s success in 2017, and where we are in a position to lay out a fairly concrete vision.

Beyond those goals, however, there are a number of areas with strong potential for Rust that are in a more exploratory phase, with subcommunities already exploring the frontiers. Some of these areas are important enough that we want to call them out explicitly, and will expect ongoing progress over the course of the year. In particular, the subteams are expected to proactively help organize and/or carry out explorations in these areas, and by the end of the year we expect to have greater clarity around Rust’s story for these areas, putting us in a position to give more concrete goals in subsequent roadmaps.

Here are the two proposed Areas of Exploration.

Integration with other languages

Other languages here includes “low-level” cases like C/C++, and “high-level” cases like JavaScript, Ruby, Python, Java and C#. Rust adoption often depends on being able to start using it incrementally, and language integration is often a key to doing so – an intuition substantiated by data from the survey and commercial outreach.

Rust’s core support for interfacing with C is fairly strong, but wrapping a C library still involves tedious work mirroring declarations and writing C shims or other glue code. Moreover, many projects that are ripe for Rust integration are currently using C++, and interfacing with those effectively requires maintaining an alternative C wrapper for the C++ APIs. This is a problem both for Rust code that wants to employ existing libraries and for those who want to integrate Rust into existing C/C++ codebases.

For interfacing with “high-level” languages, there is the additional barrier of working with a runtime system, which often involves integration with a garbage collector and object system. There are ongoing projects on these fronts, but it’s early days and there are still a lot of open questions.

Some potential avenues of exploration include:

  • Continuing work on bindgen, with focus on seamless C and eventually C++ support. This may involve some FFI-related language extensions (like richer repr).
  • Other routes for C/C++ integration.
  • Continued expansion of existing projects like Helix and Neon, which may require some language enhancements.
  • Continued work on GC integration hooks.
  • Investigation of object system integrations, including DOM and GObject.

Usage in resource-constrained environments

Rust is a natural fit for programming resource-constrained devices, and there are some ongoing efforts to better organize work in this area, as well as a thread on the current significant problems in the domain. Embedded devices likewise came up repeatedly in the internals thread. It’s also a potentially huge market. At the moment, though, it’s far from clear what it will take to achieve significant production use in the embedded space. It would behoove us to try to get a clearer picture of this space in 2017.

Some potential avenues of exploration include:

  • Continuing work on rustup, xargo and similar tools for easing embedded development.
  • Land “std-aware Cargo”, making it easier to experiment with ports of the standard library to new platforms.
  • Work on scenarios or other techniques for cutting down std in various ways, depending on platform capabilities.
  • Develop a story for fallible allocation in std (i.e., without aborting when out of memory).

Non-goals

Finally, it’s important that the roadmap “have teeth”: we should be focusing on the goals, and avoid getting distracted by other improvements that, whatever their appeal, could sap bandwidth and our ability to ship what we believe is most important in 2017.

To that end, it’s worth making some explicit non-goals, to set expectations and short-circuit discussions:

  • No major new language features, except in service of one of the goals. Cases that have a very strong impact on the “areas of exploration” may be considered case-by-case.

  • No major expansions to std, except in service of one of the goals. Cases that have a very strong impact on the “areas of exploration” may be considered case-by-case.

  • No Rust 2.0. In particular, no changes to the language or std that could be perceived as “major breaking changes”. We need to be doing everything we can to foster maturity in Rust, both in reality and in perception, and ongoing stability is an important part of that story.

Drawbacks and alternatives

It’s a bit difficult to enumerate the full design space here, given how much there is we could potentially be doing. Instead, we’ll take a look at some alternative high-level strategies, and some additional goals from the internals thread.

Overall strategy

At a high level, though, the biggest alternatives (and potential for drawbacks) are probably at the strategic level. This roadmap proposal takes the approach of (1) focusing on reducing clear blockers to Rust adoption, particularly connected with productivity and (2) choosing one particular “driver” for adoption to invest in, namely high-scale servers. The balance between blocker/driver focus could be shifted—it might be the case that by providing more incentive to use Rust in a particular domain, people are willing to overlook some of its shortcomings.

Another possible blind spot is the conservative take on language expansion, particularly when it comes to productivity. For example, we could put much greater emphasis on “metaprogramming”, and try to complete Plugins 2.0 in 2017. That kind of investment could pay dividends, since libraries can do amazing things with plugins that could draw people to Rust. But, as above, the overall strategy of reducing blockers assumes that what’s most needed isn’t more flashy examples of Rust’s power, but rather more bread-and-butter work on reducing friction, improving tooling, and just making Rust easier to use across the board.

The roadmap is informed by the survey, systematic outreach, numerous direct conversations, and general strategic thinking. But there could certainly be blind spots and biases. It’s worth double-checking our inputs.

Other ideas from the internals thread

Finally, there were several strong contenders for additional goals from the internals thread that we might consider. To be clear, these are not currently part of the proposed goals, but we may want to consider elevating them:

  • A goal explicitly for systematic expansion of commercial use; this proposal takes that as a kind of overarching idea for all of the goals.

  • A goal for Rust infrastructure, which came up several times. While this goal seems quite worthwhile in terms of paying dividends across the project, in terms of our current subteam makeup it’s hard to see how to allocate resources toward this goal without dropping other important goals. We might consider forming a dedicated infrastructure team, or somehow organizing and growing our bandwidth in this area.

  • A goal for progress in areas like scientific computing, HPC.

After an exhaustive look at the thread, the remaining proposals are in one way or another covered somewhere in the discussion above.

Unresolved questions

The main unresolved question is how to break the given goals into more deliverable pieces of work, but that’s a process that will happen after the overall roadmap is approved.

Are there other “areas of exploration” we should consider? Should any of these areas be elevated to a top-level goal (which would likely involve cutting back on some other goal)?

Should we consider some loose way of organizing “special interest groups” to focus on some of the priorities not part of the official goal set, but where greater coordination would be helpful? This was suggested multiple times.

Finally, there were several strong contenders for additional goals from the internals thread that we might consider, which are listed at the end of the goals section.

Summary

  • Change Cell<T> to allow T: ?Sized.
  • Guarantee that T and Cell<T> have the same memory layout.
  • Enable the following conversions through the std lib:
    • &mut T -> &Cell<T> where T: ?Sized
    • &Cell<[T]> -> &[Cell<T>]

Note: https://github.com/rust-lang/rfcs/pull/1651 has been accepted recently, so no T: Copy bound is needed anymore.

Motivation

Rust’s iterators offer a safe, fast way to iterate over collections while avoiding additional bounds checks.

However, due to the borrow checker, they run into issues if we try to have more than one iterator into the same data structure while mutating elements in it.

Wanting to do this is not that unusual for many low-level algorithms that deal with integers, floats or similar primitive data types.

For example, an algorithm might…

  • For each element, access each other element.
  • For each element, access an element a number of elements before or after it.

Today’s answer for algorithms like that is to fall back to C-style for loops and indexing, which might look like this…


let mut v: Vec<i32> = ...;

// example 1
for i in 0..v.len() {
    for j in 0..v.len() {
        v[j] = f(v[i], v[j]);
    }
}

// example 2
for i in n..v.len() {
    v[i] = g(v[i - n]);
}

…but this reintroduces potential bounds-checking costs.

The alternative, short of changing the actual algorithms involved, is to use internal mutability to enable safe mutations even with overlapping shared views into the data:

let v: Vec<Cell<i32>> = ...;

// example 1
for i in &v {
    for j in &v {
        j.set(f(i.get(), j.get()));
    }
}

// example 2
for (i, j) in v[n..].iter().zip(&v) {
    i.set(g(j.get()));
}

This has the advantages of allowing both bound-check free iteration and aliasing references, but comes with restrictions that makes it not generally applicable, namely:

  • The need to change the definition of the data structure containing the data (which is not always possible because it might come from external code).
  • Loss of the ability to directly hand out &T and &mut T references to the data.

This RFC proposes a way to address these in cases where Cell<T> could be used, by introducing simple conversion functions to the standard library that allow the creation of shared, borrowed Cell<T>s from mutably borrowed Ts.

This in turn allows the original data structure to remain unchanged, while allowing a temporary opt-in to the Cell API as needed. As an example, given Cell::from_mut_slice(&mut [T]) -> &[Cell<T>], the previous examples can be written like this:

let mut v: Vec<i32> = ...;

// convert the mutable borrow
let v_slice: &[Cell<i32>] = Cell::from_mut_slice(&mut v);

// example 1
for i in v_slice {
    for j in v_slice {
        j.set(f(i.get(), j.get()));
    }
}

// example 2
for (i, j) in v_slice[n..].iter().zip(v_slice) {
    i.set(g(j.get()));
}

Detailed design

Language

The core of this proposal is the ability to convert a &mut T to a &Cell<T>, so in order for it to be safe, it needs to be guaranteed that T and Cell<T> have the same memory layout, and that there are no codegen issues based on viewing a reference to a type that does not contain an UnsafeCell as a reference to a type that does contain an UnsafeCell.

As far as the author is aware, both should already implicitly fall out of the semantics of Cell and Rust’s/LLVM’s notion of aliasing:

  • Cell is safe interior mutability based on memcopying the T, and thus does not need additional fields or padding.
  • &mut T -> &U is a reborrow, which prevents access to the original &mut T for its duration, so there is no aliasing.

Std library

from_mut

We add a constructor to the cell API that enables the &mut T -> &Cell<T> conversion, implemented with the equivalent of a transmute() of the two pointers:

impl<T> Cell<T> {
    fn from_mut<'a>(t: &'a mut T) -> &'a Cell<T> {
        unsafe {
            &*(t as *mut T as *const Cell<T>)
        }
    }
}

In the future this could also be provided through AsRef, Into or From impls.
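
For illustration, a minimal usage sketch of the from_mut constructor specified above:

use std::cell::Cell;

fn main() {
    let mut x = 1;
    {
        // Reborrow the &mut x as a shared &Cell<i32> for this scope.
        let c: &Cell<i32> = Cell::from_mut(&mut x);
        let alias = c; // shared references to the cell may freely alias
        c.set(2);
        assert_eq!(alias.get(), 2);
    }
    // The mutable borrow has ended; direct access to x resumes.
    assert_eq!(x, 2);
}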

Unsized Cell<T>

We extend Cell<T> to allow T: ?Sized, and move all compatible methods to a less restricted impl block:

pub struct Cell<T: ?Sized> {
    value: UnsafeCell<T>,
}

impl<T: ?Sized> Cell<T> {
    pub fn as_ptr(&self) -> *mut T;
    pub fn get_mut(&mut self) -> &mut T;
    pub fn from_mut(value: &mut T) -> &Cell<T>;
}

This is purely done to enable cell slicing below, and should otherwise have no effect on any existing code.

Cell Slicing

We enable a conversion from &Cell<[T]> to &[Cell<T>]. This seems like it violates the “no interior references” API of Cell at first glance, but is actually safe:

  • A slice represents a number of elements next to each other. Thus, if &mut T -> &Cell<T> is ok, then &mut [T] -> &[Cell<T>] would be as well. &mut [T] -> &Cell<[T]> follows from &mut T -> &Cell<T> through substitution, so &Cell<[T]> <-> &[Cell<T>] has to be valid.
  • The API of a Cell<T> is to allow internal mutability through single-threaded memcopies only. Since a memcopy is just a copy of all bits that make up a type, it does not matter if we logically do a memcopy to all elements of a slice through a &Cell<[T]>, or just a memcopy to a single element through a &Cell<T>.
  • Yet another way to look at it is that if we created a &mut T to each element of a &mut [T], and converted each of them to a &Cell<T>, their addresses would allow “stitching” them back together into a single &[Cell<T>].

For convenience, we expose this conversion by implementing Index for Cell<[T]>:

impl<T> Index<RangeFull> for Cell<[T]> {
    type Output = [Cell<T>];

    fn index(&self, _: RangeFull) -> &[Cell<T>] {
        unsafe {
            &*(self as *const Cell<[T]> as *const [Cell<T>])
        }
    }
}

impl<T> Index<Range<usize>> for Cell<[T]> {
    type Output = [Cell<T>];

    fn index(&self, idx: Range<usize>) -> &[Cell<T>] {
        &self[..][idx]
    }
}

impl<T> Index<RangeFrom<usize>> for Cell<[T]> {
    type Output = [Cell<T>];

    fn index(&self, idx: RangeFrom<usize>) -> &[Cell<T>] {
        &self[..][idx]
    }
}

impl<T> Index<RangeTo<usize>> for Cell<[T]> {
    type Output = [Cell<T>];

    fn index(&self, idx: RangeTo<usize>) -> &[Cell<T>] {
        &self[..][idx]
    }
}

impl<T> Index<usize> for Cell<[T]> {
    type Output = Cell<T>;

    fn index(&self, idx: usize) -> &Cell<T> {
        &self[..][idx]
    }
}

Using this, the motivating example can be written as follows:

let mut v: Vec<i32> = ...;

// convert the mutable borrow
let v_slice: &[Cell<i32>] = &Cell::from_mut(&mut v[..])[..];

// example 1
for i in v_slice {
    for j in v_slice {
        j.set(f(i.get(), j.get()));
    }
}

// example 2
for (i, j) in v_slice[n..].iter().zip(v_slice) {
    i.set(g(j.get()));
}

Possible extensions

The proposal only covers the base case &mut T -> &Cell<T> and the trivially implementable extension to [T], but in theory this conversion could be enabled for many “higher level mutable reference” types, such as mutable iterators (with the goal of making them cloneable through this).

See https://play.rust-lang.org/?gist=d012cebf462841887323185cff8ccbcc&version=stable&backtrace=0 for an example implementation and a more complex use case, and https://crates.io/crates/alias for an existing crate providing these features.

How We Teach This

What names and terminology work best for these concepts and why? How is this idea best presented—as a continuation of existing Rust patterns, or as a wholly new one?

The API could be described as “temporarily opting-in to internal mutability”. It would be a more flexible continuation of the existing usage of Cell<T> since the Cell<T> no longer needs to exist in the original location if you have mutable access to it.

Would the acceptance of this proposal change how Rust is taught to new users at any level? How should this feature be introduced and taught to existing Rust users?

As it is, the API just provides a few neat conversion functions. Nevertheless, with the legalization of the &mut T -> &Cell<T> conversion there is the potential for a major change in how accessors to data structures are provided:

In today’s Rust, there are generally three different ways:

  • Owned access that starts off with a T and yields U.
  • Shared borrowed access that starts off with a &T and yields &U.
  • Mutable borrowed access that starts off with a &mut T and yields &mut U.

With this change, it would be possible in many cases to add a fourth accessor:

  • Shared borrowed cell access that starts off with a &mut T and yields &Cell<U>.

For example, today there exist:

  • Vec<T> -> std::vec::IntoIter<T>, which yields T values and is cloneable.
  • &[T] -> std::slice::Iter<T>, which yields &T values and is cloneable because it does a shared borrow.
  • &mut [T] -> std::slice::IterMut<T>, which yields &mut T values and is not cloneable because it does a mutable borrow.

We could then add a fourth iterator like this:

  • &mut [T] -> std::slice::CellIter<T>, which yields &Cell<T> values and is cloneable because it does a shared borrow.

So there is the potential that we move away from teaching a “rule of three” of ownership and change it to a “rule of four”.
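
To make the potential fourth accessor concrete, here is a minimal sketch of how a cloneable cell-based iterator over a slice could be built from just the conversions proposed above (the helper name cell_iter is hypothetical):

use std::cell::Cell;

// Hypothetical helper: turn a mutable borrow of a slice into a cloneable
// iterator of shared `&Cell<T>` views, using the proposed `Cell::from_mut`
// and cell slicing.
fn cell_iter<'a, T>(slice: &'a mut [T]) -> std::slice::Iter<'a, Cell<T>> {
    Cell::from_mut(slice)[..].iter()
}

let mut data = [1, 2, 3];
let iter = cell_iter(&mut data);
// The iterator can be cloned even though it was created from a `&mut [T]`:
for (a, b) in iter.clone().zip(iter.skip(1)) {
    b.set(a.get() + b.get());
}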

What additions or changes to the Rust Reference, The Rust Programming Language, and/or Rust by Example does it entail?

  • The reference should explain that the &mut T -> &Cell<T> conversion, or more specifically the &mut T -> &UnsafeCell<T> conversion, is fine.
  • The book could use the API introduced here when it talks about internal mutability, and use it as a “temporary opt-in” example.
  • Rust by Example could have a few basic examples of situations where this API is useful, e.g. the ones mentioned in the motivation section above.

Drawbacks

Why should we not do this?

  • More complexity around the Cell API.
  • T -> Cell<T> transmute compatibility might not be a desired guarantee.

Alternatives

Removing cell slicing

Instead of allowing unsized types in Cell and adding the Index impls, there could just be a single &mut [T] -> &[Cell<T>] conversion function:

impl<T> Cell<T> {
    /// [...]

    fn from_mut_slice<'a>(t: &'a mut [T]) -> &'a [Cell<T>] {
        unsafe {
            &*(t as *mut [T] as *const [Cell<T>])
        }
    }
}

Usage:

let mut v: Vec<i32> = ...;

// convert the mutable borrow
let v_slice: &[Cell<i32>] = Cell::from_mut_slice(&mut v);

// example 1
for i in v_slice {
    for j in v_slice {
        j.set(f(i.get(), j.get()));
    }
}

// example 2
for (i, j) in v_slice[n..].iter().zip(v_slice) {
    i.set(g(j.get()));
}

This would be less modular than the &mut [T] -> &Cell<[T]> -> &[Cell<T>] conversion steps, while still offering essentially the same API.

Just the language guarantee

The conversion could be guaranteed as correct, but not be provided by std itself. This would serve as legitimization of external implementations like alias.

No guarantees

If the safety of the conversion cannot be guaranteed, code would have to use direct indexing as today, with either possible bound-checking costs or the use of unsafe code to avoid them.

Replacing Index impls with Deref

Instead of the Index impls, have only this Deref impl:

impl<T> Deref for Cell<[T]> {
    type Target = [Cell<T>];

    fn deref(&self) -> &[Cell<T>] {
        unsafe {
            &*(self as *const Cell<[T]> as *const [Cell<T>])
        }
    }
}

Pro:

  • Automatic conversion due to deref coercions and auto deref.
  • Less redundancy since we don’t repeat the slicing impls of [T].

Cons:

  • The Cell<[T]> -> [Cell<T>] conversion does not seem like a good use case for Deref, since Cell<[T]> isn’t a smart pointer.

Cast to &mut Cell<T> instead of &Cell<T>

Nothing that makes the &mut T -> &Cell<T> conversion safe would prevent &mut T -> &mut Cell<T> from being safe either, and the latter can be trivially turned into a &Cell<T> while also allowing mutable access - e.g. to call Cell::get_mut() to convert back again.

Similarly, there could also be a way to turn a &mut [Cell<T>] back into a &mut [T].

However, this does not seem to be actually useful, since the only reason to use this API is to make use of shared internal mutability.

Exposing the functions differently

Instead of Cell constructors, we could just have freestanding functions in, say, std::cell:

fn ref_as_cell<T>(t: &mut T) -> &Cell<T> {
    unsafe {
        &*(t as *mut T as *const Cell<T>)
    }
}

fn cell_slice<T>(t: &Cell<[T]>) -> &[Cell<T>] {
    unsafe {
        &*(t as *const Cell<[T]> as *const [Cell<T>])
    }
}

At the other end of the spectrum, should this feature end up being used somewhat commonly, we could provide the conversions through dedicated traits, possibly in the prelude, or use the std coherence hack to implement them directly on &mut T and &mut [T]:

trait AsCell {
    type Cell;
    fn as_cell(self) -> Self::Cell;
}

impl<'a, T> AsCell for &'a mut T {
    type Cell = &'a Cell<T>;
    fn as_cell(self) -> Self::Cell {
        unsafe {
            &*(self as *mut T as *const Cell<T>)
        }
    }
}
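
Usage would then be a method call directly on the mutable borrow:

let mut x: i32 = 0;
let cell = (&mut x).as_cell();
cell.set(42);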

But given the issues of adding methods to pointer-like types, this approach would probably not be a good idea in general (see the situation with Rc and Arc).

Unresolved questions

None so far.

Summary

Crates.io has many useful libraries for a variety of purposes, but it’s difficult to find which crates are meant for a particular purpose and then to decide among the available crates which one is most suitable in a particular context. Categorization and badges are coming to crates.io; categories help with finding a set of crates to consider and badges help communicate attributes of crates.

This RFC aims to create a default ranking of crates within a list of crates that have a category or keyword in order to make a recommendation to crate users about which crates are likely to deserve further manual evaluation.

Motivation

Finding and evaluating crates can be time consuming. People already familiar with the Rust ecosystem often know which crates are best for which purposes, but we want to share that knowledge with everyone. For example, someone looking for a crate to help create a parser should be able to navigate to a category for that purpose and get a list of crates to consider. This list would include crates such as nom and peresil, and the order in which they appear should be significant and should help make the decision between the crates in this category easier.

This helps address the goal of “Rust should provide easy access to high quality crates” as stated in the Rust 2017 Roadmap.

Detailed design

Please see the Appendix: Comparative Research section for ways that other package manager websites have solved this problem, and the Appendix: User Research section for results of a user research survey we did on how people evaluate crates by hand today.

A few assumptions we made:

  • Measures that can be made automatically are preferred over measures that would need administrators, curators, or the community to spend time on manually.
  • Measures that can be made for any crate regardless of that crate’s choice of version control, repository host, or CI service are preferred over measures that would only be available or would be more easily available with git, GitHub, Travis, and Appveyor. Our thinking is that when this additional information is available, it would be better to display a badge indicating it since this is valuable information, but it should not influence the ranking of the crates.
  • There are some measures, like “suitability for the current task” or “whether I like the way the crate is implemented” that crates.io shouldn’t even attempt to assess, since those could potentially differ across situations for the same person looking for a crate.
  • We assume we will be able to calculate these in a reasonable amount of time either on-demand or by a background job initiated on crate publish and saved in the database as appropriate. We think the measures we have proposed can be done without impacting the performance of either publishing or browsing crates noticeably. If this does not turn out to be the case, we will have to adjust the formula.

Order by recent downloads

Through the iterations of this RFC, there was no consensus around a way to order crates that would be useful, understandable, resistant to being gamed, and not require work of curators, reviewers, or moderators. Furthermore, different people in different situations may value different aspects of crates.

Instead of attempting to order crates as a majority of people would rank them, we propose a coarser measure to expose the set of crates worthy of further consideration on the first page of a category or keyword. At that point, the person looking for a crate can use other indicators on the page to decide which crates best meet their needs.

The default ordering of crates within a keyword or category will be changed to be the number of downloads in the last 90 days.

While coarse, downloads show how many people or other crates have found this crate to be worthy of using. By limiting to the last 90 days, crates that have been around the longest won’t have an advantage over new crates that might be better. Crates that are lower in the “stack”, such as libc, will always have a higher number of downloads than those higher in the stack due to the number of crates using a lower-level crate as a dependency. Within a category or keyword, however, crates are likely to be from the same level of the stack and thus their download numbers will be comparable.
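
A minimal sketch of the proposed default ordering (the type and field names here are hypothetical, not crates.io’s actual schema):

struct CrateSummary {
    name: String,
    recent_downloads: u64, // downloads in the last 90 days
}

// Sort a category or keyword listing: most recent downloads first.
fn default_order(crates: &mut [CrateSummary]) {
    crates.sort_by(|a, b| b.recent_downloads.cmp(&a.recent_downloads));
}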

Crates are currently ordered by all-time downloads and the sort option button says “Downloads”. We will:

  • change the ordering to be downloads in the last 90 days
  • change the number of downloads displayed with each crate to be those made in the last 90 days
  • change the sort option button to say “Recent Downloads”.

“All-time Downloads” could become another sort option in the menu, alongside “Alphabetical”.

Add more badges, filters, and sorting options

Crates.io now has badges for master branch CI status, and will soon have a badge indicating the version(s) of Rust a particular version builds successfully on.

To enable a person to narrow down relevant crates to find the one that will best meet their needs, we will add more badges and indicators. Badges will not influence crate ordering.

Some badges may require use of third-party services such as GitHub. We recognize that not everyone uses these services, but note a specific badge is only one factor that people can consider out of many.

Through the survey we conducted, we found that when people evaluate crates, they are primarily looking for signals of:

  • Ease of use
  • Maintenance
  • Quality

Secondary signals that were used to infer the primary signals:

  • Popularity (covered by the default ordering by recent downloads)
  • Credibility

Ease of use

By far, the most common attribute people said they considered in the survey was whether a crate had good documentation. Frequently mentioned when discussing documentation was the desire to quickly find an example of how to use the crate.

This would be addressed in two ways.

Render README on a crate’s page

Render README files on a crate’s page on crates.io so that people can quickly see for themselves the information that a crate author chooses to make available in their README. We can nudge towards having an example in the README by adding a template README that includes an Examples section in what cargo new generates.

“Well Documented” badge

For each crate published, in a background job, unpack the crate files and calculate the ratio of lines of documentation to lines of code as follows:

  • Find the number of lines of documentation in Rust files: grep -r "//[!/]" --binary-files=without-match --include=*.rs . | wc -l
  • Find the number of lines in the README file, if specified in Cargo.toml
  • Find the number of lines in Rust files: find . -name '*.rs' | xargs wc -l

We would then add the lines in the README to the lines of documentation, subtract the lines of documentation from the total lines of code, and divide the lines of documentation by the lines of non-documentation in order to get the ratio of documentation to code. Test code (and any documentation within test code) is part of this calculation.
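
As a sketch, the calculation described above amounts to the following (a helper for illustration only):

// Ratio of documentation (including the README) to non-documentation code.
fn documentation_ratio(doc_lines: u64, readme_lines: u64, total_rust_lines: u64) -> f64 {
    (doc_lines + readme_lines) as f64 / (total_rust_lines - doc_lines) as f64
}

// e.g. for combine below: (1195 + 99) / (5815 - 1195) ≈ 0.28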

Any crate getting in the top 20% of all crates would get a badge saying “well documented”.

This measure is gameable if a crate adds many lines that match the documentation regex but don’t provide meaningful content, such as /// lol. While this may be easy to implement, a person looking at the documentation for a crate using this technique would immediately be able to see that the author is trying to game the system and reject it. If this becomes a common problem, we can re-evaluate this situation, but we believe the community of crate authors genuinely want to provide great documentation to crate users. We want to encourage and reward well-documented crates, and this outweighs the risk of potential gaming of the system.

  • combine:

    • 1,195 lines of documentation
    • 99 lines in README.md
    • 5,815 lines of Rust
    • (1195 + 99) / (5815 - 1195) = 1294/4620 = .28
  • nom:

    • 2,263 lines of documentation
    • 372 lines in README.md
    • 15,661 lines of Rust
    • (2263 + 372) / (15661 - 2263) = 2635/13398 = .20
  • peresil:

    • 159 lines of documentation
    • 20 lines in README.md
    • 1,341 lines of Rust
    • (159 + 20) / (1341 - 159) = 179/1182 = .15
  • lalrpop: (in the /lalrpop directory in the repo)

    • 742 lines of documentation
    • 110 lines in ../README.md
    • 94,104 lines of Rust
    • (742 + 110) / (94104 - 742) = 852/93362 = .01
  • peg:

    • 3 lines of documentation
    • no readme specified in Cargo.toml
    • 1,531 lines of Rust
    • (3 + 0) / (1531 - 3) = 3/1528 = .00

If we assume these are all the crates on crates.io for this example, then combine is in the top 20% and would get a badge.

Maintenance

We will add a way for maintainers to communicate their intended level of maintenance and support. We will add indicators of issues resolved from the various code hosting services.

Self-reported maintenance intention

We will add an optional attribute to Cargo.toml that crate authors could use to self-report their maintenance intentions. The valid values would be along the lines of the following:

Actively developed
New features are being added and bugs are being fixed.
Passively maintained
There are no plans for new features, but the maintainer intends to respond to issues that get filed.
As-is
The crate is feature complete, the maintainer does not intend to continue working on it or providing support, but it works for the purposes it was designed for.
none
We display nothing. Since the maintainer has not chosen to specify their intentions, potential crate users will need to investigate on their own.
Experimental
The author wants to share it with the community but is not intending to meet anyone's particular use case.
Looking for maintainer
The current maintainer would like to transfer the crate to someone else.
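
As a sketch, the attribute might look like this in Cargo.toml (the attribute name, location, and value spelling here are hypothetical; this RFC does not fix an exact syntax):

[badges]
# Hypothetical self-reported maintenance status.
maintenance = { status = "actively-developed" }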

These would be displayed as badges on lists of crates.

These levels would not have any time commitments attached to them; maintainers who would like to batch changes into releases every 6 months could report “actively developed” just as much as maintainers who like to release every 6 weeks. This would need to be clearly communicated to set crate user expectations properly.

This is also inherently a crate author’s statement of current intentions, which may get out of sync with the reality of the crate’s maintenance over time.

If I had to guess for the maintainers of the parsing crates, I would assume:

  • nom: actively developed
  • combine: actively developed
  • lalrpop: actively developed
  • peg: actively developed
  • peresil: passively maintained

GitHub issue badges

isitmaintained.com provides badges indicating the time to resolution of GitHub issues and percentage of GitHub issues that are open.

We will enable maintainers to add these badges to their crate.

| Crate   | Issue Resolution                 | Open Issues                     |
|---------|----------------------------------|---------------------------------|
| combine | Average time to resolve an issue | Percentage of issues still open |
| nom     | Average time to resolve an issue | Percentage of issues still open |
| lalrpop | Average time to resolve an issue | Percentage of issues still open |
| peg     | Average time to resolve an issue | Percentage of issues still open |
| peresil | Average time to resolve an issue | Percentage of issues still open |

Quality

We will enable maintainers to add Coveralls badges to indicate the crate’s test coverage. If there are other services offering test coverage reporting and badges, we will add support for those as well, but this is the only service we know of at this time that offers code coverage reporting that works with Rust projects.

This excludes projects that cannot use Coveralls, which only currently supports repositories hosted on GitHub or BitBucket that use CI on Travis, CircleCI, Jenkins, Semaphore, or Codeship.

nom has coveralls.io configured: Coverage Status

Credibility

We have an idea for a “favorite authors” list that we think would help indicate credibility. With this proposed feature, each person can define “credibility” for themselves, which makes this measure less gameable and less of a popularity contest.

Out of scope

This proposal is not advocating to change the default order of search results; those should still be ordered by relevancy to the query based on the indexed content. We will add the ability to sort search results by recent downloads.

Evaluation

If ordering by number of recent downloads and providing more indicators is not helpful, we expect to get bug reports from the community and feedback on the users forum, reddit, IRC, etc.

In the community survey scheduled to be taken around May 2017, we will ask about people’s satisfaction with the information that crates.io provides.

If changes are needed that are significant, we will open a new RFC. If smaller tweaks need to be made, the process will be managed through crates.io’s issues. We will consult with the tools team and core team to determine whether a change is significant enough to warrant a new RFC.

How do we teach this?

We will change the label on the default ordering button to read “Recent Downloads” rather than “Downloads”.

Badges will have tooltips on hover that provide additional information.

We will also add a page to doc.crates.io that details all possible indicators and their values, and explains to crate authors how to configure or earn the different badges.

Drawbacks

We might create a system that incentivizes attributes that are not useful, or worse, actively harmful to the Rust ecosystem. For example, the documentation percentage could be gamed by having one line of uninformative documentation for all public items, thus giving a score of 100% without the value that would come with a fully documented library. We hope the community at large will agree these attributes are valuable to approach in good faith, and that trying to game the badges will be easily discoverable. We could have a reporting mechanism for crates that are attempting to gain badges artificially, and implement a way for administrators to remove badges from those crates.

Alternatives

Manual curation

  1. We could keep the default ranking as number of downloads, and leave further curation to sites like Awesome Rust.
  2. We could build entirely manual ranking into crates.io, as Ember Observer does. This would be a lot of work that would need to be done by someone, but would presumably result in higher quality evaluations and be less vulnerable to gaming.
  3. We could add user ratings or reviews in the form of upvote/downvote, 1-5 stars, and/or free text, and weight more recent ratings higher than older ratings. This could have the usual problems that come with online rating systems, such as spam, paid reviews, ratings influenced by personal disagreements, etc.

More sorting and filtering options

There are even more options for interacting with the metadata that crates.io has than we are proposing in this RFC at this time. For example:

  1. We could add filtering options for metadata, so that each user could choose, for example, “show me only crates that work on stable” or “show me only crates that have a version greater than 1.0”.

  2. We could add independent axes of sorting criteria in addition to the existing alphabetical and number of downloads, such as by number of owners or most recent version release date.

We would probably want to implement saved search configurations per user, so that people wouldn’t have to re-enter their criteria every time they wanted to do a similar search.

Unresolved questions

All questions have now been resolved.

Appendix: Comparative Research

This is how other package hosting websites handle default sorting within categories.

Django Packages

Django Packages has the concept of grids, which are large tables of packages in a particular category. Each package is a column, and each row is some attribute of packages. The default ordering from left to right appears to be GitHub stars.

Example of a Django Packages grid

Libhunt

Libhunt pulls libraries and categories from Awesome Rust, then adds some metadata and navigation.

The default ranking is relative popularity, measured by GitHub stars and scaled to be a number out of 10 as compared to the most popular crate. The other ordering offered is dev activity, which again is a score out of 10, relative to all other crates, and calculated by giving a higher weight to more recent commits.

Example of a Libhunt category

You can also choose to compare two libraries on a number of attributes:

Example of comparing two crates on Libhunt

Maven Repository

Maven Repository appears to order by the number of reverse dependencies (“# usages”):

Example of a maven repository category

Pypi

Pypi lets you choose multiple categories, which are not only based on topic but also other attributes like library stability and operating system:

Example of filtering by Pypi categories

Once you’ve selected categories and click the “show all” packages in these categories link, the packages are in alphabetical order… but the alphabet starts over multiple times… it’s unclear from the interface why this is the case.

Example of Pypi ordering

GitHub Showcases

To get incredibly meta, GitHub has the concept of showcases for a variety of topics, and they have a showcase of package managers. The default ranking is by GitHub stars (cargo is 17/27 currently).

Example of a GitHub showcase

Ruby toolbox

Ruby toolbox sorts by a relative popularity score, which is calculated from a combination of GitHub stars/watchers and number of downloads:

How Ruby Toolbox's popularity ranking is calculated

Category pages have a bar graph showing the top gems in that category, which looks like a really useful way to quickly see the differences in relative popularity. For example, this shows nokogiri is far and away the most popular HTML parser:

Example of Ruby Toolbox ordering

Also of note is the amount of information shown by default, but with a magnifying glass icon that, on hover or tap, reveals more information without a page load/reload:

Expanded Ruby Toolbox info

npms

While npms doesn’t have categories, its search appears to do some exact matching of the query and then rank the rest of the results weighted by three different scores:

  • score-effect:14: Set the effect that package scores have for the final search score, defaults to 15.3
  • quality-weight:1: Set the weight that quality has for each package score, defaults to 1.95
  • popularity-weight:1: Set the weight that popularity has for each package score, defaults to 3.3
  • maintenance-weight:1: Set the weight that maintenance has for each package score, defaults to 2.05

Example npms search results

There are many factors that go into the three scores, and more are planned to be added in the future. Implementation details are available in the architecture documentation.

Explanation of the data analyzed by npms

Package Control (Sublime)

Package Control is for Sublime Text packages. It has Labels that are roughly equivalent to categories:

Package Control homepage showing Labels like language syntax, snippets

The only available ordering within a label is alphabetical, but each result has the number of downloads plus badges for Sublime Text version compatibility, OS compatibility, Top 25/100, and new/trending:

Sample Package Control list of packages within a label, sorted alphabetically

Appendix: User Research

Demographics

We ran a survey for 1 week and got 134 responses. The responses we got seem to be representative of the current Rust community: skewing heavily towards more experienced programmers and just about evenly distributed between Rust experience starting before 1.0, since 1.0, in the last year, and in the last 6 months, with a slight bias towards longer amounts of experience. 0 Graydons responded to the survey.

Distribution of programming experience of survey respondents, over half have been programming for over 10 years

Distribution of Rust experience of survey respondents, slightly biased towards those who have been using Rust before 1.0 and since 1.0 over those with less than a year and less than 6 months

Since this matches about what we’d expect of the Rust community, we believe this survey is representative. Given the bias towards more programming experience, we think the answers are worth using to inform the recommendations crates.io will be making to programmers of all experience levels.

Crate ranking agreement

The community ranking of the 5 crates presented in the survey, in the order people said they would try them out for parsing, comes out to be:

1.) nom

2.) combine

3.) and 4.) peg and lalrpop, in some order

5.) peresil

This chart shows how many people ranked the crates in each slot:

Raw votes for each crate in each slot, showing that nom and combine are pretty clearly 1 and 2, peresil is clearly 5, and peg and lalrpop both got slotted in 4th most often

This chart shows the cumulative number of votes: each slot contains the number of votes each crate got for that ranking or above.

Whatever default ranking formula we come up with in this RFC, when applied to these 5 crates, it should generate an order for the crates that aligns with the community ordering. Also, not everyone will agree with the crates.io ranking, so we should display other information and provide alternate filtering and sorting mechanisms so that people who prioritize different attributes than the majority of the community will be able to find what they are looking for.

Factors considered when ranking crates

The following table shows the top 25 mentioned factors across the two free-answer sections. We asked both “Please explain what information you used to evaluate the crates and how that information influenced your ranking.” and “Was there any information you wish was available, or that would have taken more than 15 minutes for you to get?”. Some respondents used a factor in their evaluation, while others deemed the same factor too time-consuming to find or not easily available, so we’ve ranked the factors by their combined mentions across both questions.

Far and away, good documentation was the most mentioned factor people used to evaluate which crates to try.

| #  | Feature | Used in evaluation | Not available / too much time needed | Total | Notes |
|----|---------|--------------------|--------------------------------------|-------|-------|
| 1  | Good documentation | 94 | 10 | 104 | |
| 2  | README | 42 | 19 | 61 | |
| 3  | Number of downloads | 58 | 0 | 58 | |
| 4  | Most recent version date | 54 | 0 | 54 | |
| 5  | Obvious / easy to find usage examples | 37 | 14 | 51 | |
| 6  | Examples in the repo | 38 | 6 | 44 | |
| 7  | Reputation of the author | 36 | 3 | 39 | |
| 8  | Description or README containing introduction / goals / value prop / use cases | 29 | 5 | 34 | |
| 9  | Number of reverse dependencies (dependent crates) | 23 | 7 | 30 | |
| 10 | Version >= 1.0.0 | 30 | 0 | 30 | |
| 11 | Commit activity | 23 | 6 | 29 | Depends on VCS |
| 12 | Fits use case | 26 | 3 | 29 | Situational |
| 13 | Number of dependencies (more = worse) | 28 | 0 | 28 | |
| 14 | Number of open issues, activity on issues | 22 | 6 | 28 | Depends on GitHub |
| 15 | Easy to use or understand | 27 | 0 | 27 | Situational |
| 16 | Publicity (blog posts, reddit, urlo, “have I heard of it”) | 25 | 0 | 25 | |
| 17 | Most recent commit date | 17 | 5 | 22 | Depends on VCS |
| 18 | Implementation details | 22 | 0 | 22 | Situational |
| 19 | Nice API | 22 | 0 | 22 | Situational |
| 20 | Mentioned using/wanting to use docs.rs | 8 | 13 | 21 | |
| 21 | Tutorials | 18 | 3 | 21 | |
| 22 | Number or frequency of released versions | 19 | 1 | 20 | |
| 23 | Number of maintainers/contributors | 12 | 6 | 18 | Depends on VCS |
| 24 | CI results | 15 | 2 | 17 | Depends on CI service |
| 25 | Whether the crate works on nightly, stable, particular stable versions | 8 | 8 | 16 | |

Relevant quotes motivating our choice of factors

Easy to use

  1. Documentation linked from crates.io 2) Documentation contains decent example on front page

  1. “Docs Coverage” info - I’m not sure if there’s a way to get that right now, but this is almost more important that test coverage.

rust docs: Is there an intro and example on the top-level page? are the rustdoc examples detailed enough to cover a range of usecases? can i avoid reading through the files in the examples folder?


Documentation:

  • Is there a README? Does it give me example usage of the library? Point me to more details?
  • Are functions themselves documented?
  • Does the documentation appear to be up to date?

The GitHub repository pages, because there are no examples or detailed descriptions on crates.io. From the GitHub readme I first checked the readme itself for a code example, to get a feeling for the library. Then I looked for links to documentation or tutorials and examples. The crates that did not have this I discarded immediately.


When evaluating any library from crates.io, I first follow the repository link – often the readme is enough to know whether or not I like the actual library structure. For me personally a library’s usability is much more important than performance concerns, so I look for code samples that show me how the library is used. In the examples given, only peresil forces me to look at the actual documentation to find an example of use. I want something more than “check the docs” in a readme in regards to getting started.


I would like the entire README.md of each package to be visible on crates.io. I would like a culture where each README.md contains a runnable example


Ok, this one isn’t from the survey, it’s from a Sept 2015 internals thread:

there should be indicator in Crates.io that show how much code is documented, this would help with choosing well done package.

I really love this idea! Showing a percentage or a little progress bar next to each crate with the proportion of public items with at least some docs would be a great starting point.

Maintenance

On nom’s crates.io page I checked the version (2.0.0) and when the latest version came out (less than a month ago). I know that versioning is inconsistent across crates, but I’m reassured when a crate has V >= 1.0 because it typically indicates that the authors are confident the crate is production-ready. I also like to see multiple, relatively-recent releases because it signals the authors are serious about maintenance.


Answering yes scores points: crates.io page: Does the crate have a major version >= 1? Has there been a release recently, and maybe even a steady stream of minor or patch-level releases?


From github:

  • Number of commits and of contributors (A small number of commits (< 100) and of contributors (< 3) is often the sign of a personal project, probably not very much used except by its author. All other things equal, I tend to prefer active projects.);

Quality

Tests:

  • Is critical functionality well tested?
  • Is the entire package well tested?
  • Are the tests clear and descriptive?
  • Could I reimplement the library based on these tests?
  • Does the project have CI?
  • Is master green?

Popularity/credibility

  1. I look at the number of download. If it is too small (~ <1000), I assume the crate has not yet reached a good quality. nom catches my attention because it has 200K download: I assume it is a high quality crate.

  1. Compare the number of downloads: More downloads = more popular = should be the best

Popularity: - Although not being a huge factor, it can help tip the scale when one is more popular or well supported than another when all other factors are close.

Overall

I can’t pick a most important trait because certain ones outweigh others when combined, etc. I.e. number of downloads is OK, but may only suggest that it’s been around the longest. Same with number of dependent crates (which probably spikes number of downloads). I like a crate that is well documented, has a large user base (# dependent crates + downloads + stars), is post 1.0, is active (i.e. a release within the past 6 months?), and it helps when it’s a prominent author (but that I feel is an unfair metric).

Relevant bugs capturing other feedback

There was a wealth of good ideas and feedback in the survey answers, but not all of it pertained to crate ranking directly. Commonly mentioned improvements that could greatly help the usability and usefulness of crates.io have been captured as separate crates.io issues.

Summary

Change doc.rust-lang.org to redirect to the latest release instead of an alias of stable.

Introduce a banner that contains a dropdown allowing users to switch between versions, noting when a release is not the most current release.

Motivation

Today, if you hit https://doc.rust-lang.org/, you’ll see the same thing as if you hit https://doc.rust-lang.org/stable/. It does not redirect, but instead displays the same documentation. This is suboptimal for multiple reasons:

  • One of the oldest bugs open in Rust, from September 2013 (a four digit issue number!), is about the lack of rel=canonical, which means search results are being duplicated between / and /stable, at least (issue link)
  • / not having any version info is a similar bug, stated in a different way, but still has the same problems. (issue link)
  • We’ve attempted to change the URL structure of Rustdoc in the past, but it’s caused many issues, which will be elaborated below. (issue link)

There are other issues that stem from this as well that haven’t been filed. Two notable examples are:

  • When we release the new book, links are going to break. This has multiple ways of being addressed, and so isn’t a strong motivation, but fixing this issue would help out a lot.
  • In order to keep links working, we modified rustdoc to add redirects from the older format. But this can lead to degenerate situations in certain crates. libc, one of the most important crates in Rust, and included in the official docs, had its docs break because so many extra files were generated that GitHub Pages refused to serve them any more.

From #rust-internals on 2016-12-22:

18:19 <@brson> lots of libc docs
18:19 <@steveklabnik> :(
18:20 <@brson> 6k to document every C constant

Short URLs are nice to have, but they have an increasing maintenance cost that’s affecting other parts of the project in an adverse way.

The big underlying issue here is that people tend to link to /, because it’s what you get by default. By changing the default, people will link to the specific version instead. This means that their links will not break, and will allow us to update the URL structure of our documentation more freely.

Detailed design

https://doc.rust-lang.org/ will be updated to have a heading with a drop-down that allows you to select between different versions of the docs. It will also display a message when looking at older documentation.

https://doc.rust-lang.org/ should issue a redirect to https://doc.rust-lang.org/RELEASE, where RELEASE is the latest stable release, like 1.14.0.

The exact details will be worked out before this is ‘stabilized’ on doc.rust-lang.org; only the general approach is presented in this RFC.

How We Teach This

There’s not a lot to teach; users end up on a different page than they used to.

Drawbacks

Losing short URLs is a drawback. This is outweighed by other considerations, in my opinion, as the rest of the RFC shows.

Alternatives

We could make no changes. We’ve dealt with all of these problems so far, so it’s possible that we won’t run into more issues in the future.

We could do work on the rel=canonical issue instead, which would solve this in a different way. This doesn’t totally solve all issues, however, only the duplication issue.

We could redirect all URLs that don’t start with a version prefix to redirect to /, which would be an index page showing all of the various places to go. Right now, it’s unclear how many people even know that we host specific old versions, or stuff like /beta.

Unresolved questions

None.

Summary

Create a “Rust Bookshelf” of learning resources for Rust.

  • Pull the book out of tree into rust-lang/book, which currently holds the second edition.
  • Pull the nomicon and the reference out of tree and convert them to mdBook.
  • Pull the cargo docs out of tree and convert them to mdBook.
  • Create a new “Nightly Book” in-tree.
  • Provide a path forward for more long-form documentation to be maintained by the project.

This is largely about how doc.rust-lang.org is organized; today, it points to the book, the reference, the nomicon, the error index, and the standard library docs. This suggests unifying the first three into one thing.

Motivation

There are a few independent motivations for this RFC.

  • Separate repos for separate projects.
  • Consistency between long-form docs.
  • A clear place for unstable documentation, which is now needed for stabilization.
  • Better promoting good resources like the ’nomicon, which may not be as well known as “the book” is.

These will be discussed further in the detailed design.

Detailed design

Several new repositories will be made, one for each of:

  • The Rustonomicon (“the ’nomicon”)
  • The Cargo Book
  • The Rust Reference Manual

These would live under the rust-lang organization.

They will all use mdBook to build. They will have their existing text re-worked into the format; at first a simple conversion, then more major improvements. Their current text will be removed from the main tree.

The first edition of the book lives in-tree, but the second edition lives in rust-lang/book. We’ll remove the existing text from the tree and move it into rust-lang/book.

A new book will be created from the “Nightly Rust” section of the book. It will be called “The Nightly Book,” and will contain unstable documentation for both rustc and Cargo, as well as material that will end up in the reference. This came up when trying to document RFC 1623. We don’t have a unified way of handling unstable documentation. This will give it a place to develop, and part of the stabilization process will be moving documentation from this book into the other parts of the documentation.

The nightly book will be organized around #![feature]s, so that you can look up the documentation for each feature, as well as seeing which features currently exist.

The nightly book is in-tree so that it runs more often, as part of people’s normal test suite. This doesn’t mean that the book won’t run on every commit; just that the out-of-tree books will run mostly in CI, whereas the nightly book will run when developers do x.py check. This is similar to how, today, Travis runs a subset of the tests, but buildbot runs all of them.

The landing page on doc.rust-lang.org will show off the full bookshelf, to let people find the documentation they need. It will also link to their respective repositories.

Finally, this creates a path for more books in the future: “the FFI Book” would be one example of a possibility for this kind of thing. The docs team will develop criteria for accepting a book as part of the official project.

How We Teach This

The landing page on doc.rust-lang.org will show off the full bookshelf, to let people find the documentation they need. It will also link to their respective repositories.

Drawbacks

A ton of smaller repos can make it harder to find what goes where.

Removing work from rust-lang/rust means people aren’t credited in release notes any more. I will be opening a separate RFC to address this issue; it’s an issue even without this RFC being accepted.

Operations are harder, but they have to change to support this use-case for other reasons, so this does not add any extra burden.

Alternatives

Do nothing.

Do only one part of this, instead of the whole thing.

Move all of the “bookshelf” into one repository, rather than individual ones. This would require a lot more label-wrangling, but might be easier.

Unresolved questions

How should the first and second editions of the book live in the same repository?

What criteria should we use to accept new books?

Should we adopt “Learning Rust With Entirely Too Many Linked Lists”?

Summary

This is an RFC to add the APIs: From<&[T]> for Rc<[T]> where T: Clone or T: Copy as well as From<&str> for Rc<str>. In addition: From<Vec<T>> for Rc<[T]> and From<Box<T: ?Sized>> for Rc<T> will be added.

Identical APIs will also be added for Arc.

Motivation

Caching and string interning

These From implementations, and especially the latter (i.e. From<&str>), are useful when dealing with any form of caching of slices.

This especially applies to controllable string interning, where you can cheaply cache strings with a construct such as putting Rcs into HashSets, i.e. HashSet<Rc<str>>.

An example of string interning:

#![feature(ptr_eq)]
#![feature(shared_from_slice)]
use std::rc::Rc;
use std::collections::HashSet;
use std::mem::drop;

fn cache_str(cache: &mut HashSet<Rc<str>>, input: &str) -> Rc<str> {
    // If the input hasn't been cached, do it:
    if !cache.contains(input) {
        cache.insert(input.into());
    }

    // Retrieve the cached element.
    cache.get(input).unwrap().clone()
}

let first   = "hello world!";
let second  = "goodbye!";
let mut set = HashSet::new();

// Cache the slices:
let rc_first  = cache_str(&mut set, first);
let rc_second = cache_str(&mut set, second);
let rc_third  = cache_str(&mut set, second);

// The contents match:
assert_eq!(rc_first.as_ref(),  first);
assert_eq!(rc_second.as_ref(), second);
assert_eq!(rc_third.as_ref(),  rc_second.as_ref());

// It was cached:
assert_eq!(set.len(), 2);
drop(set);
assert_eq!(Rc::strong_count(&rc_first),  1);
assert_eq!(Rc::strong_count(&rc_second), 2);
assert_eq!(Rc::strong_count(&rc_third),  2);
assert!(Rc::ptr_eq(&rc_second, &rc_third));

One could imagine a scenario where you have an AST with string literals that get repeated a lot in it. For example, namespaces in XML documents tend to be repeated many times.

The tendril crate does one form of interning:

Buffer sharing is accomplished through thread-local (non-atomic) reference counting

It is useful to provide an implementation of From<&[T]> as well, and not just for &str, because one might deal with non-utf8 strings, e.g. &[u8]. One could potentially reuse this for Path and OsStr.

Safe abstraction for unsafe code.

Providing these implementations in the current state of Rust requires a substantial amount of unsafe code. Therefore, for the sake of confidence that the implementations are safe, it is best done in the standard library.

RcBox is not public

Furthermore, since RcBox is not exposed publicly from std::rc, one can’t make an implementation of this outside of the standard library without making assumptions about the internal layout of Rc. The alternative is to roll your own implementation of Rc in its entirety - but this in turn requires using a lot of feature gates, which makes using this on stable Rust in the near future infeasible.

For Arc

For Arc the synchronization overhead of doing .clone() is probably greater than the overhead of doing Arc<Box<str>>. But once the clones have been made, Arc<str> would probably be cheaper to dereference due to locality.

Most of the motivations for Rc apply to Arc as well, but the use cases might be fewer. Therefore, the case for adding the same API for Arc is less clear. One could perhaps use it for multi-threaded interning with a type such as Arc<Mutex<HashSet<Arc<str>>>>.
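
For illustration, a minimal sketch of such multi-threaded interning, assuming the proposed From<&str> for Arc<str> impl:

use std::collections::HashSet;
use std::sync::{Arc, Mutex};

fn intern(cache: &Mutex<HashSet<Arc<str>>>, input: &str) -> Arc<str> {
    let mut set = cache.lock().unwrap();
    if !set.contains(input) {
        // Uses the proposed `From<&str> for Arc<str>` impl.
        set.insert(input.into());
    }
    set.get(input).unwrap().clone()
}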

Because of the similarities between the layout of Rc and Arc, almost identical implementations could be added for From<&[T]> for Arc<[T]> and From<&str> for Arc<str>. It would also be consistent to do so.

Taking all of this into account, adding the APIs for Arc is warranted.

Detailed design

There’s already an implementation

There is already an implementation of sorts for this in alloc::rc. But it is hidden under the feature gate rustc_private, which, to the author’s knowledge, will never be stabilized. The implementation is, as of this writing, as follows:

impl Rc<str> {
    /// Constructs a new `Rc<str>` from a string slice.
    #[doc(hidden)]
    #[unstable(feature = "rustc_private",
               reason = "for internal use in rustc",
               issue = "0")]
    pub fn __from_str(value: &str) -> Rc<str> {
        unsafe {
            // Allocate enough space for `RcBox<str>`.
            let aligned_len = 2 + (value.len() + size_of::<usize>() - 1) / size_of::<usize>();
            let vec = RawVec::<usize>::with_capacity(aligned_len);
            let ptr = vec.ptr();
            forget(vec);
            // Initialize fields of `RcBox<str>`.
            *ptr.offset(0) = 1; // strong: Cell::new(1)
            *ptr.offset(1) = 1; // weak: Cell::new(1)
            ptr::copy_nonoverlapping(value.as_ptr(), ptr.offset(2) as *mut u8, value.len());
            // Combine the allocation address and the string length into a fat pointer to `RcBox`.
            let rcbox_ptr: *mut RcBox<str> = mem::transmute([ptr as usize, value.len()]);
            assert!(aligned_len * size_of::<usize>() == size_of_val(&*rcbox_ptr));
            Rc { ptr: Shared::new(rcbox_ptr) }
        }
    }
}

The idea is to use the bulk of that implementation, generalize it to Vecs and slices, specialize it for &str, and provide documentation for all of them.

Copy and Clone

For the implementation of From<&[T]> for Rc<[T]>, T must be Copy if ptr::copy_nonoverlapping is used because this relies on it being memory safe to simply copy the bits over. If instead, T::clone() is used in a loop, then T can simply be Clone instead. This is however slower than using ptr::copy_nonoverlapping.

Vec and Box

For the implementation of From<Vec<T>> for Rc<[T]>, T need not be Copy, nor Clone. The input vector already owns valid Ts, and these elements are simply copied over bit for bit. After copying all elements, they are no longer owned in the vector, which is then deallocated. Unfortunately, at this stage, the memory used by the vector can not be reused - this could potentially be changed in the future.

This is similar for Box.

Suggested implementation

The actual implementations could / will look something like:

For Rc

#[inline(always)]
unsafe fn slice_to_rc<'a, T, U, W, C>(src: &'a [T], cast: C, write_elems: W)
   -> Rc<U>
where U: ?Sized,
      W: FnOnce(&mut [T], &[T]),
      C: FnOnce(*mut RcBox<[T]>) -> *mut RcBox<U> {
    // Compute space to allocate for `RcBox<U>`.
    let susize = mem::size_of::<usize>();
    let aligned_len = 2 + (mem::size_of_val(src) + susize - 1) / susize;

    // Allocate enough space for `RcBox<U>`.
    let vec = RawVec::<usize>::with_capacity(aligned_len);
    let ptr = vec.ptr();
    forget(vec);

    // Combine the allocation address and the slice length into a
    // fat pointer to RcBox<[T]>.
    let rbp = slice::from_raw_parts_mut(ptr as *mut T, src.len())
                as *mut [T] as *mut RcBox<[T]>;

    // Initialize fields of RcBox<[T]>.
    (*rbp).strong.set(1);
    (*rbp).weak.set(1);
    write_elems(&mut (*rbp).value, src);

    // Recast to RcBox<U> and yield the Rc:
    let rcbox_ptr = cast(rbp);
    assert_eq!(aligned_len * susize, mem::size_of_val(&*rcbox_ptr));
    Rc { ptr: Shared::new(rcbox_ptr) }
}

#[unstable(feature = "shared_from_slice",
           reason = "TODO",
           issue = "TODO")]
impl<T> From<Vec<T>> for Rc<[T]> {
    /// Constructs a new `Rc<[T]>` from a `Vec<T>`.
    /// The allocated space of the `Vec<T>` is not reused,
    /// but new space is allocated and the old is deallocated.
    /// This happens due to the internal layout of `Rc`.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(shared_from_slice)]
    /// use std::rc::Rc;
    ///
    /// let arr = [1, 2, 3];
    /// let vec = vec![Box::new(1), Box::new(2), Box::new(3)];
    /// let rc: Rc<[Box<usize>]> = Rc::from(vec);
    /// assert_eq!(rc.len(), arr.len());
    /// for (x, y) in rc.iter().zip(&arr) {
    ///     assert_eq!(**x, *y);
    /// }
    /// ```
    #[inline]
    fn from(mut vec: Vec<T>) -> Self {
        unsafe {
            let rc = slice_to_rc(vec.as_slice(), |p| p, |dst, src|
                ptr::copy_nonoverlapping(
                    src.as_ptr(), dst.as_mut_ptr(), src.len())
            );
            // Prevent vec from trying to drop the elements:
            vec.set_len(0);
            rc
        }
    }
}

#[unstable(feature = "shared_from_slice",
           reason = "TODO",
           issue = "TODO")]
impl<'a, T: Clone> From<&'a [T]> for Rc<[T]> {
    /// Constructs a new `Rc<[T]>` by cloning all elements from the shared slice
    /// [`&[T]`][slice]. The length of the reference counted slice will be exactly
    /// the given [slice].
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(shared_from_slice)]
    /// use std::rc::Rc;
    ///
    /// #[derive(PartialEq, Clone, Debug)]
    /// struct Wrap(u8);
    ///
    /// let arr = [Wrap(1), Wrap(2), Wrap(3)];
    /// let rc: Rc<[Wrap]> = Rc::from(arr.as_ref());
    /// assert_eq!(rc.as_ref(), &arr);   // The elements match.
    /// assert_eq!(rc.len(), arr.len()); // The lengths match.
    /// ```
    ///
    /// Using the [`Into`][Into] trait:
    ///
    /// ```
    /// #![feature(shared_from_slice)]
    /// use std::rc::Rc;
    ///
    /// #[derive(PartialEq, Clone, Debug)]
    /// struct Wrap(u8);
    ///
    /// let arr = [Wrap(1), Wrap(2), Wrap(3)];
    /// let rc: Rc<[Wrap]> = arr.as_ref().into();
    /// assert_eq!(rc.as_ref(), &arr);   // The elements match.
    /// assert_eq!(rc.len(), arr.len()); // The lengths match.
    /// ```
    ///
    /// [Into]: https://doc.rust-lang.org/std/convert/trait.Into.html
    /// [slice]: https://doc.rust-lang.org/std/primitive.slice.html
    #[inline]
    default fn from(slice: &'a [T]) -> Self {
        unsafe {
            slice_to_rc(slice, |p| p, |dst, src| {
                for (d, s) in dst.iter_mut().zip(src) {
                    ptr::write(d, s.clone())
                }
            })
        }
    }
}

#[unstable(feature = "shared_from_slice",
           reason = "TODO",
           issue = "TODO")]
impl<'a, T: Copy> From<&'a [T]> for Rc<[T]> {
    /// Constructs a new `Rc<[T]>` from a shared slice [`&[T]`][slice].
    /// All elements in the slice are copied and the length is exactly that of
    /// the given [slice]. In this case, `T` must be `Copy`.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(shared_from_slice)]
    /// use std::rc::Rc;
    ///
    /// let arr = [1, 2, 3];
    /// let rc: Rc<[i32]> = Rc::from(&arr[..]);
    /// assert_eq!(rc.as_ref(), &arr);   // The elements match.
    /// assert_eq!(rc.len(), arr.len()); // The length is the same.
    /// ```
    ///
    /// Using the [`Into`][Into] trait:
    ///
    /// ```
    /// #![feature(shared_from_slice)]
    /// use std::rc::Rc;
    ///
    /// let arr          = [1, 2, 3];
    /// let rc: Rc<[u8]> = arr.as_ref().into();
    /// assert_eq!(rc.as_ref(), &arr);   // The elements match.
    /// assert_eq!(rc.len(), arr.len()); // The length is the same.
    /// ```
    ///
    /// [Into]: ../../std/convert/trait.Into.html
    /// [slice]: ../../std/primitive.slice.html
    #[inline]
    fn from(slice: &'a [T]) -> Self {
        unsafe {
            slice_to_rc(slice, |p| p, <[T]>::copy_from_slice)
        }
    }
}

#[unstable(feature = "shared_from_slice",
           reason = "TODO",
           issue = "TODO")]
impl<'a> From<&'a str> for Rc<str> {
    /// Constructs a new `Rc<str>` from a [string slice].
    /// The underlying bytes are copied from it.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(shared_from_slice)]
    /// use std::rc::Rc;
    ///
    /// let slice = "hello world!";
    /// let rc: Rc<str> = Rc::from(slice);
    /// assert_eq!(rc.as_ref(), slice);    // The elements match.
    /// assert_eq!(rc.len(), slice.len()); // The length is the same.
    /// ```
    ///
    /// Using the [`Into`][Into] trait:
    ///
    /// ```
    /// #![feature(shared_from_slice)]
    /// use std::rc::Rc;
    ///
    /// let slice = "hello world!";
    /// let rc: Rc<str> = slice.into();
    /// assert_eq!(rc.as_ref(), slice);    // The elements match.
    /// assert_eq!(rc.len(), slice.len()); // The length is the same.
    /// ```
    ///
    /// This can be useful in doing [string interning], and caching your strings.
    ///
    /// ```
    /// // For Rc::ptr_eq
    /// #![feature(ptr_eq)]
    ///
    /// #![feature(shared_from_slice)]
    /// use std::rc::Rc;
    /// use std::collections::HashSet;
    /// use std::mem::drop;
    ///
    /// fn cache_str(cache: &mut HashSet<Rc<str>>, input: &str) -> Rc<str> {
    ///     // If the input hasn't been cached, do it:
    ///     if !cache.contains(input) {
    ///         cache.insert(input.into());
    ///     }
    ///
    ///     // Retrieve the cached element.
    ///     cache.get(input).unwrap().clone()
    /// }
    ///
    /// let first   = "hello world!";
    /// let second  = "goodbye!";
    /// let mut set = HashSet::new();
    ///
    /// // Cache the slices:
    /// let rc_first  = cache_str(&mut set, first);
    /// let rc_second = cache_str(&mut set, second);
    /// let rc_third  = cache_str(&mut set, second);
    ///
    /// // The contents match:
    /// assert_eq!(rc_first.as_ref(),  first);
    /// assert_eq!(rc_second.as_ref(), second);
    /// assert_eq!(rc_third.as_ref(),  rc_second.as_ref());
    ///
    /// // It was cached:
    /// assert_eq!(set.len(), 2);
    /// drop(set);
    /// assert_eq!(Rc::strong_count(&rc_first),  1);
    /// assert_eq!(Rc::strong_count(&rc_second), 2);
    /// assert_eq!(Rc::strong_count(&rc_third),  2);
    /// assert!(Rc::ptr_eq(&rc_second, &rc_third));
    /// ```
    ///
    /// [Into]: ../../std/convert/trait.Into.html
    /// [string slice]: ../../std/primitive.str.html
    /// [string interning]: https://en.wikipedia.org/wiki/String_interning
    fn from(slice: &'a str) -> Self {
        // This is safe since the input was valid utf8 to begin with, and thus
        // the invariants hold.
        unsafe {
            let bytes = slice.as_bytes();
            slice_to_rc(bytes, |p| p as *mut RcBox<str>, <[u8]>::copy_from_slice)
        }
    }
}

#[unstable(feature = "shared_from_slice",
           reason = "TODO",
           issue = "TODO")]
impl<T: ?Sized> From<Box<T>> for Rc<T> {
    /// Constructs a new `Rc<T>` from a `Box<T>` where `T` can be unsized.
    /// The allocated space of the `Box<T>` is not reused,
    /// but new space is allocated and the old is deallocated.
    /// This happens due to the internal layout of `Rc`.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(shared_from_slice)]
    /// use std::rc::Rc;
    ///
    /// let arr = [1, 2, 3];
    /// let vec = vec![Box::new(1), Box::new(2), Box::new(3)].into_boxed_slice();
    /// let rc: Rc<[Box<usize>]> = Rc::from(vec);
    /// assert_eq!(rc.len(), arr.len());
    /// for (x, y) in rc.iter().zip(&arr) {
    ///     assert_eq!(**x, *y);
    /// }
    /// ```
    #[inline]
    fn from(boxed: Box<T>) -> Self {
        unsafe {
            // Compute space to allocate + alignment for `RcBox<T>`.
            let sizeb  = mem::size_of_val(&*boxed);
            let alignb = mem::align_of_val(&*boxed);
            let align  = cmp::max(alignb, mem::align_of::<usize>());
            let size   = offset_of_unsafe!(RcBox<T>, value) + sizeb;

            // Allocate the space.
            let alloc  = heap::allocate(size, align);

            // Cast to fat pointer: *mut RcBox<T>.
            let bptr      = Box::into_raw(boxed);
            let rcbox_ptr = {
                let mut tmp = bptr;
                ptr::write(&mut tmp as *mut _ as *mut *mut u8, alloc);
                tmp as *mut RcBox<T>
            };

            // Initialize fields of RcBox<T>.
            (*rcbox_ptr).strong.set(1);
            (*rcbox_ptr).weak.set(1);
            ptr::copy_nonoverlapping(
                bptr as *const u8,
                (&mut (*rcbox_ptr).value) as *mut T as *mut u8,
                sizeb);

            // Deallocate box, we've already forgotten it.
            heap::deallocate(bptr as *mut u8, sizeb, alignb);

            // Yield the Rc:
            assert_eq!(size, mem::size_of_val(&*rcbox_ptr));
            Rc { ptr: Shared::new(rcbox_ptr) }
        }
    }
}

These work on zero-sized slices and vectors as well.

With more safe abstractions in the future, this can perhaps be rewritten with less unsafe code. But this should not change the API itself and thus will never cause a breaking change.

For Arc

For the sake of brevity, just use the implementation above, and replace:

  • slice_to_rc with slice_to_arc,
  • RcBox with ArcInner,
  • rcbox_ptr with arcinner_ptr,
  • Rc with Arc.
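
As a sketch, the Copy-based impl under those substitutions would read as follows (assuming a slice_to_arc helper mirroring slice_to_rc):

#[unstable(feature = "shared_from_slice",
           reason = "TODO",
           issue = "TODO")]
impl<'a, T: Copy> From<&'a [T]> for Arc<[T]> {
    #[inline]
    fn from(slice: &'a [T]) -> Self {
        unsafe {
            // Same structure as the Rc version: copy the elements and
            // initialize the ArcInner header instead of the RcBox one.
            slice_to_arc(slice, |p| p, <[T]>::copy_from_slice)
        }
    }
}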

How We Teach This

The documentation provided in the impls should be enough.

Drawbacks

The main drawback would be increasing the size of the standard library.

Alternatives

  1. Only implement this for T: Copy and skip T: Clone.
  2. Let other libraries do this. This has the problems explained in the motivation section above regarding RcBox not being publicly exposed, as well as the number of feature gates needed to roll one's own Rc alternative - for little gain.
  3. Only implement this for Rc and skip it for Arc.
  4. Skip this for Vec.
  5. Only implement this for Vec.
  6. Skip this for Box.
  7. Use AsRef. For example: impl<'a> From<&'a str> for Rc<str> becomes impl<T: AsRef<str>> From<T> for Rc<str>. It could potentially make the API a bit more ergonomic to use. However, it could run afoul of coherence issues, preventing other wanted impls. This RFC currently leans towards not using it.
  8. Add these trait implementations of From as functions on &str like .into_rc_str() and on &[T] like .into_rc_slice(). This RFC currently leans towards using From implementations for the sake of uniformity and ergonomics. It also has the added benefit of letting you remember one method name instead of many. One could also consider String::into_boxed_str and Vec::into_boxed_slice, which are similar; the difference is that this version uses the From trait and converts into a shared smart pointer instead.
  9. Also add these APIs as associated functions on Rc and Arc as follows:
impl<T: Clone> Rc<[T]> {
    fn from_slice(slice: &[T]) -> Self;
}

impl Rc<str> {
  fn from_str(slice: &str) -> Self;
}

impl<T: Clone> Arc<[T]> {
    fn from_slice(slice: &[T]) -> Self;
}

impl Arc<str> {
  fn from_str(slice: &str) -> Self;
}

Unresolved questions

  • Should a special version of make_mut be added for Rc<[T]>? This could look like:
impl<T> Rc<[T]> where T: Clone {
    fn make_mut_slice(this: &mut Rc<[T]>) -> &mut [T]
}

UPDATE

The lang team ultimately decided to retract this RFC. It was never implemented. The motivation for retraction was that the change was too prone to misuse and did not provide adequate benefit.

Summary

Remove the 'static bound from the type_id intrinsic so users can experiment with use cases where lifetimes are either soundly irrelevant to type checking or where lifetime correctness is enforced elsewhere in the program.

Motivation

Sometimes it’s useful to encode a type so it can be checked at runtime. This can be done using the type_id intrinsic, which gives an id value that’s guaranteed to be unique across the types available to the program. The drawback is that it’s only valid for types that are 'static, because concrete lifetimes aren’t encoded in the id. For most cases this makes sense; otherwise the encoded type could be used to represent data in lifetimes it isn’t valid for. There are cases, though, where lifetimes can be soundly checked outside the type id, so it’s not possible to misrepresent the validity of the data. These cases can’t make use of type ids right now; they need to rely on workarounds. One such workaround is to define a trait with an associated type that’s expected to be a 'static version of the implementor:

unsafe trait Keyed {
    type Key: 'static;
}

struct NonStaticStruct<'a> {
    a: &'a str
}

unsafe impl<'a> Keyed for NonStaticStruct<'a> {
    type Key = NonStaticStruct<'static>;
}

This requires additional boilerplate that may lead to undefined behaviour if implemented incorrectly or not kept up to date.
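
For context, the workaround is typically used by comparing the 'static keys at runtime; a minimal sketch, with a hypothetical same_key helper:

use std::any::TypeId;

// Compare two possibly non-'static types via their 'static keys.
// This is only as sound as the `Keyed` impls themselves.
fn same_key<A: Keyed, B: Keyed>() -> bool {
    TypeId::of::<A::Key>() == TypeId::of::<B::Key>()
}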

This RFC proposes simply removing the 'static bound from the type_id intrinsic, leaving the stable TypeId and Any traits unchanged. That way users who opt-in to unstable intrinsics can build the type equality guarantees they need without waiting for stable API support.

This is an important first step in expanding the tools available to users at runtime to reason about their data. With the ability to fetch a guaranteed unique type id for non-static types, users can build their own TypeId or Any traits.

Detailed design

Remove the 'static bound from the type_id intrinsic in libcore.

Allowing type ids for non-static types exposes the fact that concrete lifetimes aren’t taken into account. This means a type id for SomeStruct<'a, 'b> will be the same as SomeStruct<'b, 'a>, even though they’re different types.

Users need to be very careful using type_id directly, because it can easily lead to undefined behaviour if lifetimes aren’t verified properly.

How We Teach This

This changes an unstable compiler intrinsic so we don’t need to teach it. The change does need to come with plenty of warning that it’s unsound for type-checking and can’t be used to produce something like a lifetime parameterised Any trait.

Drawbacks

Removing the 'static bound means callers may now depend on the fact that type_id doesn’t consider concrete lifetimes, even though this probably isn’t its intended final behaviour.

Alternatives

  • Create a new intrinsic called runtime_type_id that’s specifically designed to ignore concrete lifetimes, like type_id does now. Having a totally separate intrinsic means type_id could be changed in the future to account for lifetimes without impacting the use cases that specifically ignore them.
  • Don’t do this. Stick with existing workarounds for getting a TypeId for non-static types.

Unresolved questions

Summary

I propose we specify and stabilize the drop order in Rust, instead of treating it as an implementation detail. The stable drop order should be based on the current implementation. This avoids breakage and still allows alternative, opt-in drop orders to be introduced in the future.

Motivation

After lots of discussion on issue 744, there seems to be consensus about the need for a stable drop order. See, for instance, this and this comment.

The current drop order seems counter-intuitive (fields are dropped in FIFO order instead of LIFO), but changing it would inevitably result in breakage. There have been cases in the recent past where code broke because people relied on unspecified behavior (see, for instance, the post about struct field reorderings). It is highly probable that similar breakage would result from changes to the drop order. See, for instance, the comment from @sfackler, which reflects the problems that would arise:

Real code in the wild does rely on the current drop order, including rust-openssl, and there is no upgrade path if we reverse it. Old versions of the libraries will be subtly broken when compiled with new rustc, and new versions of the libraries will be broken when compiled with old rustc.

Introducing a new drop order without breaking things would require figuring out how to:

  • Forbid an old compiler (with the old drop order) from compiling recent Rust code (which could rely on the new drop order).
  • Let the new compiler (with the new drop order) recognize old Rust code (which could rely on the old drop order). This way it could choose to either: (a) fail to compile; or (b) compile using the old drop order.

Both requirements seem quite difficult, if not impossible, to meet. Even if we figured out how to meet them, the complexity of the approach would probably outweigh the current complexity of having a non-intuitive drop order.

Finally, if people really dislike the current drop order, it may still be possible to introduce alternative, opt-in drop orders in a backwards compatible way. However, that is not covered in this RFC.

Detailed design

The design is the same as currently implemented in rustc and is described below. This behavior will be enforced by run-pass tests.

Tuples, structs and enum variants

Struct fields are dropped in the same order as they are declared. Consider, for instance, the struct below:

struct Foo {
    bar: String,
    baz: String,
}

In this case, bar will be the first field to be destroyed, followed by baz.

Tuples and tuple structs show the same behavior, as well as enum variants of both kinds (struct and tuple variants).
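
As a runnable illustration (Noisy is a hypothetical type introduced here), the order can be observed directly:

struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    let _pair = (Noisy("first"), Noisy("second"));
    // Prints "dropping first" and then "dropping second": tuple fields
    // are dropped in declaration order, just like struct fields.
}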

Note that a panic during construction of one of the previous data structures causes destruction in a different order. Since the object has not yet been constructed, its fields are treated as local variables (which are destroyed in LIFO order). See the example below:

let x = MyStruct {
    field1: String::new(),
    field2: String::new(),
    field3: panic!()
};

In this case, field2 is dropped first and field1 second, which may seem counterintuitive at first but makes sense when you consider that the initialized fields are actually temporary variables. Note that the drop order depends on the order of the fields in the initializer, not in the struct declaration.

Slices and Vec

Slices and vectors show the same behavior as structs and enums. This behavior can be illustrated by the code below, where the first elements are dropped first.

for x in xs { drop(x) }

If there is a panic during construction of the slice or the Vec, the drop order is reversed (that is, when using [] literals or the vec![] macro). Consider the following example:

let xs = [X, Y, panic!()];

Here, Y will be dropped first and X second.

Allowed unspecified behavior

Besides the previous constructs, there are others that do not need a stable drop order (at least, there is no evidence yet that it would be useful). This is the case for vec![expr; n] and closure captures.

Vectors initialized with vec![expr; n] syntax clone the value of expr in order to fill the vector. In case clone panics, the values produced so far are dropped in unspecified order. The order is closely tied to an implementation detail and the benefits of stabilizing it seem small. It is difficult to come up with a real-world scenario where the drop order of cloned objects is relevant to ensure some kind of invariant. Furthermore, we may want to modify the implementation in the future.

Closure captures are also dropped in unspecified order. At this moment, it seems like the drop order is similar to the order in which the captures are consumed within the closure (see this blog post for more details). Again, this order is closely tied to an implementation that we may want to change in the future, and the benefits of stabilizing it seem small. Furthermore, enforcing invariants through closure captures seems like a terrible footgun at best (the same effect can be achieved with much less obscure methods, like passing a struct as an argument).

Note: we ignore slices initialized with [expr; n] syntax, since they may only contain Copy types, which in turn cannot implement Drop.

How We Teach This

When mentioning destructors in the Rust book, Reference and other documentation, we should also mention the overall picture for a type that implements Drop. In particular, if a struct/enum implements Drop, then when it is dropped we will first execute the user’s code and then drop all the fields (in the given order). Thus any code in Drop must leave the fields in an initialized state such that they can be dropped. If you wish to interleave the fields being dropped and user code being executed, you can make the fields into Option and have a custom drop that calls take() (or else wrap your type in a union with a single member and implement Drop such that it invokes ptr::read() or something similar).
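
A minimal sketch of the Option/take() pattern described above (Connection and its cleanup are hypothetical):

use std::net::TcpStream;

struct Connection {
    // Wrapped in Option so the destructor can drop it explicitly,
    // interleaved with other cleanup code.
    socket: Option<TcpStream>,
}

impl Drop for Connection {
    fn drop(&mut self) {
        // Run custom cleanup first...
        // ...then drop the socket; any remaining fields are dropped
        // afterwards, in declaration order.
        drop(self.socket.take());
    }
}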

It is also important to mention that union types never drop their contents.

Drawbacks

  • The counter-intuitive drop order is here to stay.

Alternatives

  • Figure out how to let rustc know the language version targeted by a given program. This way we could introduce a new drop order without breaking code.
  • Introduce a new drop order anyway, try to minimize breakage by running crater and hope for the best.

Unresolved questions

  • Where do we draw the line between the constructs where drop order should be stabilized and the rest? Should the drop order of closure captures be specified? And the drop order of vec![expr; n]?

Summary

Introduce a trait Try for customizing the behavior of the ? operator when applied to types other than Result.

Motivation

Using ? with types other than Result

The ? operator is very useful for working with Result, but it really applies to any sort of short-circuiting computation. As the existence and popularity of the try_opt! macro confirms, it is common to find similar patterns when working with Option values and other types. Consider these two lines from rustfmt:

let lhs_budget = try_opt!(width.checked_sub(prefix.len() + infix.len()));
let rhs_budget = try_opt!(width.checked_sub(suffix.len()));

The overarching goal of this RFC is to allow lines like those to be written using the ? operator:

let lhs_budget = width.checked_sub(prefix.len() + infix.len())?;
let rhs_budget = width.checked_sub(suffix.len())?;

Naturally, this has all the advantages that ? offered over try! to begin with:

  • suffix notation, allowing for more fluent APIs;
  • concise, yet noticeable.

However, there are some tensions to be resolved. We don’t want to hardcode the behavior of ? to Result and Option; rather, we would like to make something more extensible. For example, futures defined using the futures crate typically return one of three values:

  • a successful result;
  • a “not ready yet” value, indicating that the caller should try again later;
  • an error.

Code working with futures typically wants to proceed only if a successful result is returned. “Not ready yet” values as well as errors should be propagated to the caller. This is exemplified by the try_ready! macro used in futures. If this 3-state value were written as an enum:

enum Poll<T, E> {
    Ready(T),
    NotReady,
    Error(E),
}

Then one could replace code like try_ready!(self.stream.poll()) with self.stream.poll()?.

(Currently, the type Poll in the futures crate is defined differently, but alexcrichton indicates that in fact the original design did use an enum like Poll, and it was changed to be more compatible with the existing try! macro, and hence could be changed back to be more in line with this RFC.)
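
For illustration, here is one way such a Poll type could implement the Try trait defined in the detailed design below; the PollError carrier enum is hypothetical and not part of this RFC:

// Hypothetical carrier distinguishing the two short-circuiting cases.
enum PollError<E> {
    NotReady,
    Error(E),
}

impl<T, E> Try for Poll<T, E> {
    type Ok = T;
    type Error = PollError<E>;

    fn into_result(self) -> Result<T, PollError<E>> {
        match self {
            Poll::Ready(t) => Ok(t),
            Poll::NotReady => Err(PollError::NotReady),
            Poll::Error(e) => Err(PollError::Error(e)),
        }
    }

    fn from_error(v: PollError<E>) -> Self {
        match v {
            PollError::NotReady => Poll::NotReady,
            PollError::Error(e) => Poll::Error(e),
        }
    }

    fn from_ok(v: T) -> Self {
        Poll::Ready(v)
    }
}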

Support interconversion, but with caution

The existing try! macro and ? operator already allow a limited amount of type conversion, specifically in the error case. That is, if you apply ? to a value of type Result<T, E>, the surrounding function can have some other return type Result<U, F>, so long as the error types are related by the From trait (F: From<E>). The idea is that if an error occurs, we will wind up returning F::from(err), where err is the actual error. This is used (for example) to “upcast” various errors that can occur in a function into a common error type (e.g., Box<Error>).

In some cases, it would be useful to be able to convert even more freely. At the same time, there may be some cases where it makes sense to allow interconversion between types. For example, a library might wish to permit a Result<T, HttpError> to be converted into an HttpResponse (or vice versa). Or, in the futures example given above, we might wish to apply ? to a Poll value and use that in a function that itself returns a Poll:

fn foo() -> Poll<T, E> {
    let x = bar()?; // propagate error case
}

and we might wish to do the same, but in a function returning a Result:

fn foo() -> Result<T, E> {
    let x = bar()?; // propagate error case
}

However, we wish to be sure that this sort of interconversion is intentional. In particular, Result is often used with a semantic intent to mean an “unhandled error”, and thus if ? is used to convert an error case into a “non-error” type (e.g., Option), there is a risk that users accidentally overlook error cases. To mitigate this risk, we adopt certain conventions (see below) in that case to help ensure that “accidental” interconversion does not occur.

Detailed design

Playground

Note: if you wish to experiment, this Rust playground link contains the traits and impls defined herein.

Desugaring and the Try trait

The desugaring of the ? operator is changed to the following, where Try refers to a new trait that will be introduced shortly:

match Try::into_result(expr) {
    Ok(v) => v,

    // here, the `return` presumes that there is
    // no `catch` in scope:
    Err(e) => return Try::from_error(From::from(e)),
}

If a catch is in scope, the desugaring is roughly the same, except that instead of returning, we would break out of the catch with e as the error value.
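
Purely as a sketch (the exact lowering inside catch is not specified by this RFC), one can picture it as a labeled break carrying the wrapped error:

// inside a `catch` block, conceptually labeled `'catch`:
match Try::into_result(expr) {
    Ok(v) => v,
    Err(e) => break 'catch Try::from_error(From::from(e)),
}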

These desugarings refer to a trait Try. This trait is defined in libcore in the ops module; it is also mirrored in std::ops. The trait Try is defined as follows:

trait Try {
    type Ok;
    type Error;
    
    /// Applies the "?" operator. A return of `Ok(t)` means that the
    /// execution should continue normally, and the result of `?` is the
    /// value `t`. A return of `Err(e)` means that execution should branch
    /// to the innermost enclosing `catch`, or return from the function.
    ///
    /// If an `Err(e)` result is returned, the value `e` will be "wrapped"
    /// in the return type of the enclosing scope (which must itself implement
    /// `Try`). Specifically, the value `X::from_error(From::from(e))`
    /// is returned, where `X` is the return type of the enclosing function.
    fn into_result(self) -> Result<Self::Ok, Self::Error>;

    /// Wrap an error value to construct the composite result. For example,
    /// `Result::Err(x)` and `Result::from_error(x)` are equivalent.
    fn from_error(v: Self::Error) -> Self;

    /// Wrap an OK value to construct the composite result. For example,
    /// `Result::Ok(x)` and `Result::from_ok(x)` are equivalent.
    ///
    /// *The following function has an anticipated use, but is not used
    /// in this RFC. It is included because we would not want to stabilize
    /// the trait without including it.*
    fn from_ok(v: Self::Ok) -> Self;
}

Initial impls

libcore will also define the following impls for the following types.

Result

The Result type includes an impl as follows:

impl<T,E> Try for Result<T, E> {
    type Ok = T;
    type Error = E;

    fn into_result(self) -> Self {
        self
    }
    
    fn from_ok(v: T) -> Self {
        Ok(v)
    }

    fn from_error(v: E) -> Self {
        Err(v)
    }
}

This impl permits the ? operator to be used on results in the same fashion as it is used today.

Option

The Option type includes an impl as follows:

mod option {
    pub struct Missing;

    impl<T> Try for Option<T>  {
        type Ok = T;
        type Error = Missing;

        fn into_result(self) -> Result<T, Missing> {
            self.ok_or(Missing)
        }
    
        fn from_ok(v: T) -> Self {
            Some(v)
        }

        fn from_error(_: Missing) -> Self {
            None
        }
    }
}    

Note the use of the Missing type, which is specific to Option, rather than a generic type like (). This is intended to mitigate the risk of accidental Result -> Option conversion. In particular, we will only allow conversion from Result<T, Missing> to Option<T>. The idea is that if one uses the Missing type as an error, that indicates an error that can be “handled” by converting the value into an Option. (This rationale was originally explained in a comment by Aaron Turon.)

The use of a fresh type like Missing is recommended whenever one implements Try for a type that does not have the #[must_use] attribute (or, more semantically, that does not represent an “unhandled error”).
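
With this impl, the Option-returning lines from the motivation work directly; for example, a hypothetical helper:

fn budget(width: usize, prefix: &str, suffix: &str) -> Option<usize> {
    // Each `?` goes through `Option`'s `Try` impl, short-circuiting
    // with `None` (via `Missing`) when `checked_sub` underflows.
    let lhs_budget = width.checked_sub(prefix.len())?;
    let rhs_budget = width.checked_sub(suffix.len())?;
    Some(lhs_budget.min(rhs_budget))
}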

Interaction with type inference

Supporting more types with the ? operator can be somewhat limiting for type inference. In particular, if ? only works on values of type Result (as did the old try! macro), then x? forces the type of x to be Result. This can be significant in an expression like vec.iter().map(|e| ...).collect()?, since the behavior of the collect() function is determined by the type it returns. In the old try! macro days, collect() would have been forced to return a Result<_, _> – but ? leaves it more open.

This implies that callers of collect() will have to either use try!, or write an explicit type annotation, something like this:

vec.iter().map(|e| ...).collect::<Result<_, _>>()?

Another problem (which also occurs with try!) stems from the use of From to interconvert errors. This implies that ‘nested’ uses of ? are often insufficiently constrained for inference to make a decision. The problem here is that the nested use of ? effectively returns something like From::from(From::from(err)) – but only the starting point (err) and the final type are constrained. The inner type is not. It’s unclear how to address this problem without introducing some form of inference fallback, which seems orthogonal to this RFC.

How We Teach This

Where and how to document it

This RFC proposes extending an existing operator to permit the same general short-circuiting pattern to be used with more types. When initially teaching the ? operator, it would probably be best to stick to examples around Result, so as to avoid confusing the issue. However, at that time we can also mention that ? can be overloaded and offer a link to more comprehensive documentation, which would show how ? can be applied to Option and then explain the desugaring and how one goes about implementing one’s own impls.

The reference will have to be updated to include the new trait, naturally. The Rust book and Rust by example should be expanded to include coverage of the ? operator being used on a variety of types.

One important note is that we should publish guidelines explaining when it is appropriate to introduce a special error type (analogous to the option::Missing type included in this RFC) for use with ?. As expressed earlier, the rule of thumb ought to be that a special error type should be used whenever implementing Try for a type that does not, semantically, indicate an unhandled error (i.e., a type for which the #[must_use] attribute would be inappropriate).

Error messages

Another important factor is the error message when ? is used in a function whose return type is not suitable. The current error message in this scenario is quite opaque and directly references the Carrier trait. A better message would consider various possible cases.

Source type does not implement Try. If ? is applied to a value that does not implement the Try trait (for any return type), we can give a message like

? cannot be applied to a value of type Foo

Return type does not implement Try. Otherwise, if the return type of the function does not implement Try, then we can report something like this (in this case, assuming a fn that returns ()):

cannot use the ? operator in a function that returns ()

or perhaps if we want to be more strictly correct:

? cannot be applied to a Result<T, Box<Error>> in a function that returns ()

At this point, we could likely make a suggestion such as “consider changing the return type to Result<(), Box<Error>>”.

Note however that if ? is used within an impl of a trait method, or within main(), or in some other context where the user is not free to change the type signature (modulo RFC 1937), then we should not make this suggestion. In the case of an impl of a trait defined in the current crate, we could consider suggesting that the user change the definition of the trait.

Errors cannot be interconverted. Finally, if the return type R does implement Try, but a value of type R cannot be constructed from the resulting error (e.g., the function returns Option<T>, but ? is applied to a Result<T, ()>), then we can instead report something like this:

? cannot be applied to a Result<T, Box<Error>> in a function that returns Option<T>

This last part can be tricky, because the error can arise for one of two reasons:

  • a missing From impl, perhaps a mistake;
  • the impl of Try is intentionally limited, as in the case of Option.

We could help the user diagnose this, most likely, by offering some labels like the following:

22 | fn foo(...) -> Option<T> {
   |                --------- requires an error of type `option::Missing`
   |     write!(foo, ...)?;
   |     ^^^^^^^^^^^^^^^^^ produces an error of type `io::Error`
   | }

Consider suggesting the use of catch. Especially in contexts where the return type cannot be changed, but possibly in other contexts as well, it would make sense to advise the user about how they can catch an error instead, if they choose. Once catch is stabilized, this could be as simple as saying “consider introducing a catch, or changing the return type to …”. In the absence of catch, we would have to suggest the introduction of a match block.

Extended error message text. In the extended error message, for those cases where the return type cannot easily be changed, we might consider suggesting that the fallible portion of the code is refactored into a helper function, thus roughly following this pattern:

fn inner_main() -> Result<(), HLError> {
    let args = parse_cmdline()?;
    // all the real work here
}

fn main() {
    process::exit(match inner_main() {
        Ok(_) => 0,
        Err(ref e) => {
            writeln!(io::stderr(), "{}", e).unwrap();
            1
        }
    });
}

Implementation note: it may be helpful for improving the error message if ? were not desugared when lowering from AST to HIR but rather when lowering from HIR to MIR; however, the use of source annotations may suffice.

Drawbacks

One drawback of supporting more types is that type inference becomes harder. This is because an expression like x? no longer implies that the type of x is Result.

There is also the risk that results or other “must use” values are accidentally converted into other types. This is mitigated by the use of newtypes like option::Missing (rather than, say, a generic type like ()).

Alternatives

The “essentialist” approach

When this RFC was first proposed, the Try trait looked quite different:

trait Try<E> {
    type Success;
    fn try(self) -> Result<Self::Success, E>;
}    

In this version, Try::try() converted either to an unwrapped “success” value, or to an error value to be propagated. This allowed the conversion to take into account the context (i.e., one might interconvert from a Foo to a Bar in some distinct way as one interconverts from a Foo to a Baz).

This was changed to adopt the current “reductionist” approach, in which all values are first interconverted (in a context independent way) to an OK/Error value, and then interconverted again to match the context using from_error. The reasons for the change are roughly as follows:

  • The resulting trait feels simpler and more straight-forward. It also supports from_ok in a simple fashion.
  • Context dependent behavior has the potential to be quite surprising.
  • The use of specific types like option::Missing mitigates the primary concern that motivated the original design (avoiding overly loose interconversion).
  • It is nice that the use of the From trait is now part of the ? desugaring, and hence supported universally across all types.
  • The interaction with the orphan rules is made somewhat nicer. For example, using the essentialist alternative, one might like to have a trait that permits a Result to be returned in a function that yields Poll. That would require an impl like this: impl<T,E> Try<Poll<T,E>> for Result<T, E>; but this impl runs afoul of the orphan rules.

Traits implemented over higher-kinded types

The desire to avoid “free interconversion” between Result and Option seemed to suggest that the Carrier trait ought to be defined over higher-kinded types (or generic associated types) in some form. The most obvious downside of such a design is that Rust does not offer higher-kinded types nor anything equivalent to them today, and hence we would have to block on that design effort. But it also turns out that HKT is not a particularly good fit for the problem.

To start, consider what “kind” the Self parameter on the Try trait would have to have. If we were to implement Try on Option, it would presumably then have kind type -> type, but we also wish to implement Try on Result, which has kind type -> type -> type. There has even been talk of implementing Try for simple types like bool, which simply have kind type.

More generally, the problems encountered are quite similar to the problems that Simon Peyton-Jones describes in attempting to model collections using HKT: we wish the Try trait to be implemented in a great number of scenarios. Some of them, like converting Result<T,E> to Result<U,F>, allow for the type of the success value and the error value to both be changed, though not arbitrarily (subject to the From trait, in particular). Others, like converting Option<T> to Option<U>, allow only the type of the success value to change, whereas others (like converting bool to bool) do not allow either type to change.

What to name the trait

A number of names have been proposed for this trait. The original name was Carrier, as the implementing type was the “carrier” for an error value. A proposed alternative was QuestionMark, named after the operator ?. However, the general consensus seemed to be that since Rust operator overloading traits tend to be named after the operation that the operator performed (e.g., Add and not Plus, Deref and not Star or Asterisk), it was more appropriate to name the trait Try, which seems to be the best name for the operation in question.

Unresolved questions

None.

Summary

Include the ManuallyDrop wrapper in core::mem.

Motivation

Currently Rust does not specify the order in which destructors are run. Furthermore, this order differs depending on context. RFC issue #744 exposed the fact that the current but unspecified behaviour is relied upon for code validity, and that there are at least a few instances of such code in the wild.

While a move to stabilise and document the order of destructor evaluation would technically fix the problem described above, there’s another important aspect to consider here – implicitness. Consider this code:

struct FruitBox {
    peach: Peach,
    banana: Banana,
}

Does this structure depend on Peach’s destructor being run before Banana’s for correctness? Perhaps it’s the other way around, and it is Banana’s destructor that has to run first? In the common case structures do not have any such dependencies between fields, and it is therefore easy to overlook such a dependency while changing the code above to the snippet below (e.g. so that the fields are sorted by name).

struct FruitBox {
    banana: Banana,
    peach: Peach,
}

For structures with dependencies between fields, it is worthwhile to have the ability to explicitly annotate the dependencies somehow.

Detailed design

This RFC proposes adding the following struct as a new lang item to the core::mem (and by extension the std::mem) module. The mem module is the most suitable place for such a type, as it is already home to functions very similar in purpose: drop and forget.

/// Inhibits compiler from automatically calling `T`’s destructor.
#[lang = "manually_drop"]
#[unstable(feature = "manually_drop", reason = "recently added", issue = "0")]
#[derive(Copy, Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct ManuallyDrop<T> {
    value: T,
}

impl<T> ManuallyDrop<T> {
    /// Wraps a value to be manually dropped.
    #[unstable(feature = "manually_drop", reason = "recently added", issue = "0")]
    pub fn new(value: T) -> ManuallyDrop<T> {
        ManuallyDrop { value }
    }

    /// Extracts the value from the ManuallyDrop container.
    #[unstable(feature = "manually_drop", reason = "recently added", issue = "0")]
    pub fn into_inner(slot: ManuallyDrop<T>) -> T {
        slot.value
    }

    /// Manually drops the contained value.
    ///
    /// # Unsafety
    ///
    /// This function runs the destructor of the contained value and thus makes any further action
    /// with the value within invalid. The fact that this function does not consume the wrapper
    /// does not statically prevent further reuse.
    #[unstable(feature = "manually_drop", reason = "recently added", issue = "0")]
    pub unsafe fn drop(slot: &mut ManuallyDrop<T>) {
        ptr::drop_in_place(&mut slot.value)
    }
}

impl<T> Deref for ManuallyDrop<T> {
    type Target = T;
    // ...
}

impl<T> DerefMut for ManuallyDrop<T> {
    // ...
}

The lang item will be treated specially by the compiler to not emit any drop glue for this type.

Let us apply ManuallyDrop to a somewhat expanded example from the motivation:

struct FruitBox {
    // Immediately clear there’s something non-trivial going on with these fields.
    peach: ManuallyDrop<Peach>,
    melon: Melon, // Field that’s independent of the other two.
    banana: ManuallyDrop<Banana>,
}

impl Drop for FruitBox {
    fn drop(&mut self) {
        unsafe {
            // Explicit ordering in which field destructors are run specified in the intuitive
            // location – the destructor of the structure containing the fields.
            // Moreover, one can now reorder fields within the struct however much they want.
            ManuallyDrop::drop(&mut self.peach);
            ManuallyDrop::drop(&mut self.banana);
        }
        // After destructor for `FruitBox` runs (this function), the destructor for Melon gets
        // invoked in the usual manner, as it is not wrapped in `ManuallyDrop`.
    }
}

It is proposed that this pattern would become idiomatic for structures where fields must be dropped in a particular order.

How We Teach This

It is expected that the functions and wrapper added as a result of this RFC would be seldom necessary.

In addition to the usual API documentation, ManuallyDrop should be mentioned in the reference/nomicon/elsewhere as the solution when explicit control over the order in which structure fields get dropped is desired.

Alternatives

  • Stabilise some sort of drop order and make people write code that’s hard to figure out at a glance;
  • Bikeshed colour;
  • Stabilise union and let people implement this themselves:
    • Precludes recommending this pattern as the idiomatic way to implement destructors with dependencies (or makes it much harder).

Unresolved questions

None known.

Summary

Add an extern type syntax for declaring types which are opaque to Rust’s type system.

Motivation

When interacting with external libraries we often need to be able to handle pointers to data that we don’t know the size or layout of.

In C it’s possible to declare a type but not define it. These incomplete types can only be used behind pointers; a compilation error results if the user tries to use them in such a way that the compiler would need to know their layout.

In Rust, we don’t have this feature. Instead, a couple of problematic hacks are used in its place.

One is to define the type as an uninhabited type, e.g.

enum MyFfiType {}

Another is to define the type with a private field and no methods to construct it:

struct MyFfiType {
    _priv: (),
}

The point of both these constructions is to prevent the user from being able to create or deal directly with instances of the type. Neither of these types accurately reflects the reality of the situation. The first definition is logically problematic as it defines a type which can never exist. This means that references to the type can also—logically—never exist and raw pointers to the type are guaranteed to be invalid. The second definition says that the type is a ZST, that we can store it on the stack, and that we can call ptr::read, mem::size_of etc. on it. None of this is, of course, valid.
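
To make the hazard concrete, the private-field version compiles even for plainly invalid uses; a deliberately wrong sketch:

struct MyFfiType {
    _priv: (),
}

fn main() {
    // The type claims to be a Sized ZST, so the compiler happily lets us
    // conjure one on the stack, even though no valid instance can exist
    // outside the foreign library:
    let bogus = unsafe { std::mem::zeroed::<MyFfiType>() };
    assert_eq!(std::mem::size_of_val(&bogus), 0);
}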

The controversy over how to represent foreign types extends to the standard library too; see the discussion in the libc_types RFC PR.

This RFC instead proposes a way to directly express that a type exists but is unknown to Rust.

Finally, in the 2017 roadmap, integration with other languages is listed as a priority. Just like unions, this is an unsafe feature necessary for dealing with legacy code in a correct and understandable manner.

Detailed design

Add a new kind of type declaration, an extern type:

extern {
    type Foo;
}

These types are FFI-safe. They are also DSTs, meaning that they do not implement Sized. Being DSTs, they cannot be kept on the stack, can only be accessed through pointers and references and cannot be moved from.
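
For example, a binding to a hypothetical C library might look like this (the mylib_* names are invented for illustration):

extern {
    type mylib_ctx;
    fn mylib_new() -> *mut mylib_ctx;
    fn mylib_free(ctx: *mut mylib_ctx);
}

fn use_ctx() {
    unsafe {
        let ctx = mylib_new();
        // `*ctx` is opaque: it can only be handled through the raw
        // pointer, never sized, dereferenced by value, or moved.
        mylib_free(ctx);
    }
}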

In Rust, pointers to DSTs carry metadata about the object being pointed to. For strings and slices this is the length of the buffer, for trait objects this is the object’s vtable. For extern types the metadata is simply (). This means that a pointer to an extern type has the same size as a usize (ie. it is not a “fat pointer”). It also means that if we store an extern type at the end of a container (such as a struct or tuple) pointers to that container will also be identical to raw pointers (despite the container as a whole being unsized). This is useful to support a pattern found in some C APIs where structs are passed around which have arbitrary data appended to the end of them: eg.

extern {
    type OpaqueTail;
}

#[repr(C)]
struct FfiStruct {
    data: u8,
    more_data: u32,
    tail: OpaqueTail,
}
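
A quick, illustrative way to check the thin-pointer claim (assuming the feature as proposed):

use std::mem;

fn check_thin_pointers() {
    // Pointers to extern types carry `()` metadata, so they are the same
    // size as `usize`. The same holds for pointers to structs whose last
    // field is an extern type.
    assert_eq!(mem::size_of::<*const OpaqueTail>(), mem::size_of::<usize>());
    assert_eq!(mem::size_of::<*const FfiStruct>(), mem::size_of::<usize>());
}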

As a DST, size_of and align_of do not work, but we must also be careful that size_of_val and align_of_val do not work either, as there is not necessarily a way at run-time to get the size of extern types either. For an initial implementation, those methods can just panic, but before this is stabilized there should be some trait bound or similar on them that prevents their use statically. The exact mechanism is more the domain of the custom DST RFC, RFC 1524, and so figuring that mechanism out will be delegated to it.

C’s “pointer void” (not (), but the void used in void* and similar) is currently defined in two official places: std::os::raw::c_void and libc::c_void. Unifying these is out of scope for this RFC, but this feature should be used in their definition instead of the current tricks. Strictly speaking, this is a breaking change, but the std docs explicitly say that void shouldn’t be used without indirection. And libc can, in the worst-case, make a breaking change.
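
For reference, a declaration along these lines is what the definitions could eventually be reduced to (a sketch only; unifying the two types remains out of scope):

// Sketch: how `c_void` could be declared using this feature.
extern {
    pub type c_void;
}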

How We Teach This

Really, the question is “how do we teach without this”. As described above, the current tricks for doing this are wrong. Furthermore, they are quite advanced, touching upon many obscure corners of the language: zero-sized and uninhabited types are phenomena few programmers coming from mainstream languages have encountered. From reading around other RFCs, issues, and internals threads, one gets a sense of two issues: First, even among the group of Rust programmers enthusiastic enough to participate in these fora, the semantics of foreign types are not widely understood. Second, there is annoyance that none of the current tricks, by nature of them all being flawed in different ways, has become standard.

By contrast, extern type does exactly what one wants, with an obvious and guessable syntax, without forcing the user to immediately understand all the nuance about why these semantics are indeed the right ones. As they see various options fail (moves, stack variables), they can discover these semantics incrementally. The benefits are such that this would soon displace the current hacks, making code in the wild more readable through consistent use of a pattern.

This should be taught in the foreign function interface chapter of the Rust book, in place of where it currently tells people to use uninhabited enums (ack!).

Drawbacks

Very slight addition of complexity to the language.

The syntax has the potential to be confused with introducing a type alias, rather than a new nominal type. The use of extern here is also a bit of a misnomer as the name of the type does not refer to anything external to Rust.

Alternatives

Not do this.

Alternatively, rather than provide a way to create opaque types, we could just offer one distinguished type (std::mem::OpaqueData or something like that). Then, to create new opaque types, users just declare a struct with a member of type OpaqueData. This has the advantage of introducing no new syntax, and issues like FFI-compatibility would fall out of existing rules.

Another alternative is to drop the extern and allow a declaration to be written type A;. This removes the (arguably disingenuous) use of the extern keyword although it makes the syntax look even more like a type alias.

Unresolved questions

  • Should we allow generic lifetime and type parameters on extern types? If so, how do they affect the type in terms of variance?

  • In std’s source, it is mentioned that LLVM expects i8* for C’s void*. We’d need to continue to hack this for the two c_voids in std and libc. But perhaps this should be done across-the-board for all extern types? Somebody should check what Clang does.

Summary

Improve the assert_eq failure message formatting to increase legibility.

Previous RFC issue.

Motivation

Currently when assert_eq fails, the default panic text has all the information on one long line, which is difficult to parse. This is even more difficult when working with larger data structures. I’d like to alter the format of this text in order to improve legibility, putting each piece of information on a different line.

Detailed design

Here is a failing test with the current format:

---- log_packet::tests::syntax_error_test stdout ----
        thread 'log_packet::tests::syntax_error_test' panicked at 'assertion failed: `(left == right)` (left: `"Syntax Error: a.rb:1: syntax error, unexpected end-of-input\n\n"`, right: `"Syntax error: a.rb:1: syntax error, unexpected end-of-input\n\n"`)', src/log_packet.rs:102
note: Run with `RUST_BACKTRACE=1` for a backtrace.

Here is a failing test with an alternate format:

---- log_packet::tests::syntax_error_test stdout ----
        thread 'log_packet::tests::syntax_error_test' panicked at 'assertion failed: `(left == right)`

left:  `"Syntax Error: a.rb:1: syntax error, unexpected end-of-input\n\n"`
right: `"Syntax error: a.rb:1: syntax error, unexpected end-of-input\n\n"`

', src/log_packet.rs:102
note: Run with `RUST_BACKTRACE=1` for a backtrace.

In addition to putting each expression on a separate line, I’ve also padded the word “left” with an extra space. This makes the values line up, and thus easier to visually diff.

This could be further improved with coloured diffing or other indication of differences, e.g. if two strings are within a certain Levenshtein distance, colour additional characters green and missing ones red.

Here is a screenshot of the output of the Elixir lang ExUnit test assertion macro, which I think is extremely clear:

[screenshot of ExUnit’s coloured assertion diff output]

As the stdlib does not contain any terminal colour manipulation features at the moment, LLVM-style arrows could also be used, as suggested by @p-kraszewski:

---- log_packet::tests::syntax_error_test stdout ----
        thread 'log_packet::tests::syntax_error_test' panicked at 'assertion failed: `(left == right)`

left:  `"Syntax Error: a.rb:1: syntax error, unexpected end-of-input\n\n"`
right: `"Syntax error: a.rb:1: syntax error, unexpected end-of-input\n\n"`
         ~~~~~~ ^ ~~~~
', src/log_packet.rs:102
note: Run with `RUST_BACKTRACE=1` for a backtrace.
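
A purely illustrative sketch (not a proposed API) of how such a marker line could be computed character by character:

fn marker_line(left: &str, right: &str) -> String {
    // Mark differing positions with `^`; positions beyond the end of the
    // shorter string are ignored in this simple sketch.
    left.chars()
        .zip(right.chars())
        .map(|(l, r)| if l == r { ' ' } else { '^' })
        .collect()
}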

Drawbacks

This could be a breaking change if people are parsing this text. I feel the format of this text shouldn’t be relied upon, so this is probably OK.

Colour diffing will require quite a bit more work to support terminals on all platforms.

Unresolved questions

Summary

There has long been a desire to expand the number of platform- and architecture-specific APIs in the standard library, and to offer subsets of the standard library for working in constrained environments. At the same time, we want to retain the property that Rust code is portable by default.

This RFC proposes a new portability lint, which threads the needle between these two desires. The lint piggybacks on the existing cfg system, so that using APIs involving cfg will generate a warning unless there is explicit acknowledgment of the portability implications.

The lint is intended to make the existing std::os module obsolete, to allow expansion (and subsetting) of the standard library, and to provide deeper checking for portability across the ecosystem.

Motivation

Background: portability and the standard library

One of the goals of the standard library is to provide an interface to hardware and system services. In doing so, there were several competing principles that we wanted to embrace:

  • Rust should provide ergonomic and productive APIs for system services.
  • Rust should encourage portability by default.
  • Rust should provide zero-cost access to low-level system services.
  • Rust should be usable in a wide range of contexts, including resource-constrained and kernel environments.

The way we balanced these principles was roughly as follows:

  • We identified a set of “mainstream” platforms, consisting of 32- and 64-bit machines running Windows, Linux, or macOS. “Portability by default” thus more specifically means portability to mainstream platforms.

  • We present an ergonomic, primary API surface which is portable across these mainstream platforms (see std::{fs, net, env, process, sync} etc.).

  • We also provide separate access to low-level or OS-specific services via the std::os module. APIs in this module are largely traits that extend the cross-platform APIs, and in particular can expose their OS-level representation. The fact that these APIs require explicitly importing from std::os provided a small “speed bump” for venturing out of guaranteed mainstream platform portability.

  • Finally, for working in low-level and embedded contexts, we stabilized libcore, a subset of libstd that excludes all OS services and allocation, but still makes some hardware assumptions (e.g. about atomics and floating point support).

Problems with the status quo

The above strategy has served us fairly well in the first year since Rust 1.0, but it’s increasingly holding us back from enhancements we’d like to make. It’s also suboptimal in a few ways, even for the needs it covers.

Problems with std::os:

  • The std::os module has submodules that correspond to a hierarchy of OS types. For example, there is a unix submodule that applies to several operating systems, but there’s also a linux submodule with Linux-specific extensions. There are a couple of problems with such an organization. Most importantly, it’s not at all clear how to use the module hierarchy to organize features like fixed-size atomic types, where the types available vary in a fine-grained way based on the CPU family; SIMD is even worse. But even just for operating systems, organizing into a hierarchy becomes difficult as we gain more and more APIs, some of which are only available on particular versions of a given operating system.

  • The “speed bump” for using std::os is minimal and easy to miss; it’s just an import that looks the same as any other. Moreover, it doesn’t provide any help with the ecosystem beyond std. There’s no simple way to tell whether a crate you’re relying on is portable to the same degree as std is, and the os submodule pattern has not really caught on in the wider ecosystem.

  • Platform-specific APIs don’t live in their “natural location”. The majority of std::os works through extension traits to enhance the functionality of standard primitives. For example std::os::unix::io::AsRawFd is a trait with the as_raw_fd method (to extract a file descriptor). If you were to ignore Windows, however, one might expect this API instead to live as a method directly on types like File, TcpStream, etc. Forcing code to live in std::os thus comes at a mild cost for both ergonomics and discoverability. This problem is even worse for features like adding more atomic types or SIMD.

Problems with libcore/the facade:

  • Embedded libraries typically wish to never use functions in the standard library that abort on allocation failure (e.g. Vec::push). We’d like to provide some way for these libraries to use and interoperate with the standard collection types, but only have access to an alternative API surface (e.g. a try_push method provided via an extension trait). It’s not clear how to do that with the current facade setup.

  • Kernels and embedded environments often want to disable floating point, but the floating point types are currently treated as primitive and shipped in libcore.

  • There are platforms like emscripten where much of the standard library exists for consumption, but APIs like std::thread are unimplementable. Today these functions simply panic on use, but a compiler error would be better.

  • We’d like to open the door to a growing number of subsets of std and core, dropping hardware features like atomics, or perhaps even supporting 16-bit architectures. But again, it’s not clear how to fit this into the facade model without introducing a sprawling, unwieldy collection of crates.

What are our portability goals?

Taking a step back from the specific problems with the status quo, it’s worth thinking about what it means for Rust to be “portable”, and what is realistic to achieve. We should be asking this question not just for the standard library, but for the Rust library ecosystem in general.

The premise of this RFC is that there are roughly three desired portability levels for a library. In order of increasing portability:

  • Platform-specific. These are libraries whose fundamental purpose depends on a given platform, for which portability doesn’t make sense. Examples include the libc crate, the winapi crates, and crates designed for particular embedded devices.

  • Mainstream portability. Most libraries take portability as a secondary concern, and in particular don’t want to take a productivity hit just for the sake of maximizing portability. On the other hand, these libraries tend not to use obscure platform features, and it’s usually not too much of a hardship to work across common platforms.

  • Maximal portability. In some cases, a library author is motivated to push for a greater degree of portability, for example allowing their code to work in the no_std ecosystem. Depending on the library, this may entail a significant amount of work.

There’s a fundamental tradeoff here. On the one hand, we want Rust libraries to be as portable as possible. On the other hand, achieving maximal portability can be a big burden for library authors. Our approach so far has been to identify “mainstream platform assumptions”, as mentioned above, and guide code to work on all mainstream platforms by default; by convention, such portability is the default expectation of libraries on crates.io. This RFC formalizes that approach in a deeper way.

An important point: while we can expect library authors who are striving for portability to test their code on a variety of target platforms, we can’t make that assumption for the average library. In other words, if we want to guide all Rust code toward at least mainstream portability, we will need to do so in a way that doesn’t require actually compiling and testing for all mainstream scenarios.

Detailed design

The basic idea

The core problem we want to solve is:

  • We want to make non-mainstream APIs available in their natural location, e.g. as inherent methods directly on standard library types.

  • We want to have some kind of “speed bump” before using such APIs, so that users realize that they may be giving up mainstream portability.

  • We want to do this without requiring testing on platforms that lack the API.

The core idea is that having to write cfg is a sufficient speedbump, as it makes explicit what platform assumptions a piece of code is making. But today, you don’t have to be within a cfg to call something labeled with cfg.

Let’s take a concrete example: the as_raw_fd method. We’d like to provide this API as an inherent method on things like files. But it’s not a “mainstream” API; it only works on Unix. If you tried to use it and compiled your code on Windows, you would discover the problem right away, since the API would not be available due to cfg. But if you were only testing on Linux, you might never notice, since the API is available there.

The basic idea of this RFC is to provide an additional layer of checking on top of the existing cfg system, to avoid usage of an API accidentally working because you happen to be compiling for a given target platform. This checking is performed through a new portability lint, which warns when invoking APIs marked with cfg unless you’ve explicitly acknowledged the portability implications. We’ll see how you do that in a moment.

Going back to our example, we’d like to define methods on File like:

impl File {
    #[cfg(unix)]
    fn as_raw_fd(&self) -> RawFd { ... }

    #[cfg(windows)]
    fn as_raw_handle(&self) -> RawHandle { ... }
}

If you attempted to call as_raw_fd, when compiling on Unix you’d get a warning from the portability lint that you’re calling an API not available on all mainstream platforms. There are basically three ways to react (all of which will make the warning go away):

  • Decide not to use the API, after discovering that it would reduce portability.

  • Decide to use the API, putting the function using it within a cfg(unix) as well (which will flag that function as Unix-specific).

  • Decide to use the API in a cross-platform way, e.g. by providing a Windows version of the same functionality. In that case you allow the lint, explicitly acknowledging that your code may involve platform-specific APIs but claiming that all platforms of the current cfg are handled. (See the appendix at the end for a possible extension that does more checking).

In code, we’d have:

////////////////////////////////////////////////////////////////////////////////
// The code we might have written initially:
////////////////////////////////////////////////////////////////////////////////

fn unlabeled() {
    // Would generate a warning: calling a `unix`-only API while only
    // assuming a mainstream platform
    let fd = File::open("foo.txt").unwrap().as_raw_fd();
}

////////////////////////////////////////////////////////////////////////////////
// Code that opts into platform-specificness:
////////////////////////////////////////////////////////////////////////////////

#[cfg(unix)]
fn foo() {
    // No warning: we're within code that assumes `unix`
    let fd = File::open("foo.txt").unwrap().as_raw_fd();
}

#[cfg(windows)]
fn foo() {
    // No warning: we're within code that assumes `windows`
    let handle = File::open("foo.txt").unwrap().as_raw_handle();
}

#[cfg(linux)]
fn linux_only() {
    // No warning: we're within code that assumes `linux`, which implies `unix`
    let fd = File::open("foo.txt").unwrap().as_raw_fd();
}

////////////////////////////////////////////////////////////////////////////////
// Code that provides a cross-platform abstraction
////////////////////////////////////////////////////////////////////////////////

// No `cfg` label here; it's a cross-platform function, which we claim
// via the `allow`
#[allow(nonportable)]
fn cross_platform() {
    // invoke an item with a more restrictive `cfg`
    foo()
}

As with many lints, the portability lint is best effort: it is not required to provide airtight guarantees about portability. However, the RFC sketches a plausible implementation route that should cover the vast majority of cases.

Note that this lint will only check code that is actually compiled on the current platform, so the following code would not produce a warning when compiled on unix:

pub fn mycrate_function() {
    // ...
}

#[cfg(windows)]
pub fn windows_specific_mycrate_function() {
    // this call should warn since it makes an additional assumption
    windows_more_specific_mycrate_function();
}

#[cfg(all(windows, target_pointer_width = "64"))]
pub fn windows_more_specific_mycrate_function() {
    // ...
}

However, any such “missed portability issues” are only possible when already using cfg, which means a “speedbump” has already been passed.

With that overview in mind, let’s dig into the details.

The lint definition

The lint is structured somewhat akin to a type and effect system: roughly speaking, items that are labeled with a given cfg assumption can only be used within code making that same cfg assumption.

More precisely, each item has a portability, consisting of all the lexically-nested uses of cfg. If there are multiple uses of cfg, the portability is taken to be their conjunction:

#[cfg(unix)]
mod foo {
    #[cfg(target_pointer_width = "32")]
    fn bar() {
        // the portability of `bar` is `all(unix, target_pointer_width = "32")`
    }
}

The portability only considers built-in cfg attributes (like target_os), not Cargo features (which are treated as automatically true for the purposes of this lint).

The lint is then straightforward to define at a high level: it walks over item definitions and checks that the item’s portability is narrower than the portability of items it references or invokes. For example, bar in the above could invoke an item with portability unix and/or target_pointer_width = "32", but not one with portability linux.

To fully define the lint, though, we need to give more details about what “narrower” means, and how referenced item portability is determined.

Comparing portabilities

What does it mean for a portability to be narrower? In general, portability is a logical expression, using the operators all, any, not on top of primitive expressions like unix. Portability P is narrower than portability Q if P implies Q as a logic formula.

In general, comparing two portabilities is equivalent to solving SAT, an NP-complete problem – a frightening prospect for a lint! However, note that worst-case execution is exponential in the number of variables (i.e., primitive cfg constraints), not the number/complexity of clauses, and most comparisons should involve a very small number of variables. We can likely get away with a naive SAT implementation, perhaps with a handful of optimizations specific to our use-case. In the limit, there are also many well-known techniques for solving SAT efficiently even on very large examples that arise in real-world usage.
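
To make this concrete, here is a minimal sketch of such a naive check, assuming portabilities have been reduced to boolean expressions over a small set of named primitives (the Portability and implies names are illustrative, not a proposed API; the axioms discussed next would enter as extra constraints on the assignments):

use std::collections::HashMap;

enum Portability {
    Prim(String), // e.g. "unix", "windows"
    All(Vec<Portability>),
    Any(Vec<Portability>),
    Not(Box<Portability>),
}

impl Portability {
    fn eval(&self, assignment: &HashMap<String, bool>) -> bool {
        match self {
            Portability::Prim(name) => assignment[name],
            Portability::All(ps) => ps.iter().all(|p| p.eval(assignment)),
            Portability::Any(ps) => ps.iter().any(|p| p.eval(assignment)),
            Portability::Not(p) => !p.eval(assignment),
        }
    }
}

// `p` implies `q` iff no assignment of the primitives makes `p` true and
// `q` false. Exponential in the number of primitives, which should be
// tiny for real `cfg` expressions.
fn implies(p: &Portability, q: &Portability, vars: &[String]) -> bool {
    (0..1u64 << vars.len()).all(|bits| {
        let assignment: HashMap<String, bool> = vars
            .iter()
            .enumerate()
            .map(|(i, v)| (v.clone(), bits & (1 << i) != 0))
            .collect();
        !p.eval(&assignment) || q.eval(&assignment)
    })
}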

Axioms

Another aspect of portability comparison is the relationship between things like unix and linux. In logical terms, we want to assume that linux implies unix, for example.

The primitive portabilities we’ll be comparing are all built in (since we are not including Cargo features). The solver can thus build in a number of assumptions about these portabilities. The end result is that code like the following should pass the lint:

#[cfg(unix)]
fn unix_only() { .. }

#[cfg(linux)]
fn linux_only() {
    // permitted since `linux` implies `unix`
    unix_only()
}

Of course, primitive portabilities in practice are key-value pairs (like target_os = "linux"). This RFC proposes to treat all keys as multimaps, that is, to not introduce assumptions like nand(target_os = "linux", target_os = "windows"), for simplicity’s sake; uses of cfg in practice will not produce such nonsensical situations. However, the precise details of how these implications are specified—and what implications are desired—are left as implementation details that need to be worked out with real-world experience.

Determining the portability of referenced items

How is the portability of a referenced item determined? The lint will resolve an item to its definition, and use the portability of that definition, which will be recorded in metadata. For the case of trait items, however, this will involve attempting to resolve the invocation to a particular impl, to look up the portability of that impl. We can set up trait selection to yield portability information with the selected impl, which will allow us to catch cases like the following:

trait Foo {
    fn foo();
}

struct MyType;

#[cfg(unix)]
impl Foo for MyType {
    fn foo() { .. }
}

fn use_foo<T: Foo>() {
    T::foo()
}

fn invoke() {
    // invokes a `cfg(unix)` item via a generic function, but we can catch it
    // when checking that `MyType: Foo`, since selection will say that we need
    // our context to imply `unix`
    use_foo::<MyType>();
}

The story for std

With these basic mechanisms in hand, let’s sketch out how we might apply them to the standard library to achieve our initial goals. This part of the RFC should not be considered normative; it’s left to the implementation to make the final determination about how to set up the standard library.

The mainstream platform

The “mainstream platform” will be expressed via a new primitive cfg pattern called std. This is the default portability of all crates, unless opted out (see “Subsetting std” below). Likewise, most items in std will initially be exported at the std portability level (but see subsets below). These two facts together mean that existing uses of std will continue to work without issuing any warnings.

Expanding std

With the above setup, handling extensions to std with APIs like as_raw_fd is straightforward. In particular, we can write:

impl File {
    #[cfg(unix)]
    fn as_raw_fd(&self) -> RawFd { ... }

    #[cfg(windows)]
    fn as_raw_handle(&self) -> RawHandle { ... }
}

and the portability of as_raw_fd will be all(std, unix). Thus, any code using as_raw_fd will need to be in a unix context in particular.

We can thus deprecate the std::os module in favor of these in-place APIs. Doing so leverages the fact that we’re using a portability lint: these new inherent methods will shadow the existing ones in std::os, and may generate new warnings, but this is considered an acceptable change. After all, lints on dependencies are automatically capped, and the lint will not prevent code from compiling; it can also be silenced.

For hardware features like additional atomics or SIMD, we can use the target_feature cfg key to label the APIs – which has to be done anyway, but will also do the right thing for the lint.

In short, for expansions there’s basically nothing to do. You just add the API in its natural location, with its natural cfg, and everything works out.

Subsetting std

What about subsets of std?

What use case do we want to address? Going back to the Portability Goals discussed earlier, the goal of subsetting std is mostly about helping people who want maximum portability. For this use case, you should opt out of the mainstream platform, and then whitelist the various features you need, thus giving you assistance in using the minimal set of assumptions needed.

Opting out of the mainstream platform. To opt out of the std platform, you can just apply a cfg to your crate definition. The assumptions of that cfg will form the baseline for the crate.

Carving up std into whitelistable features. When we want to provide subsets of std, we can introduce a new set of target features, along the following lines:

  • each integer size
  • each float size
  • each atomics size
  • allocation
  • OS facilities
    • env
    • fs
    • net
    • process
    • thread
    • rng

To introduce these features, we would change APIs in std from being marked as #[cfg(std)] to instead being labeled with the particular feature, e.g.:

// previously: #[cfg(std)]
#[cfg(target_feature = "thread")]
mod thread;

// previously: #[cfg(std)]
#[cfg(target_feature = "fs")]
mod fs;

and so on. We can then set up axioms such that std implies all of these features. That way existing code written at the default portability level will not produce warnings when using the standard library. And in general, we can carve out increasingly fine-grained subsets, setting up implications between the previous coarse-grained features and the new subsets.

On the other side, library authors shooting for maximal portability should opt out of cfg(std), and use cfg as little as possible, adding features to their whitelist only after deciding they’re truly needed, or abstracting over them (such as using threading for parallelism only when it is available).

Proposed rollout

The most pressing problem in std is the desire for expansion, rather than subsetting, so we should start there. The cfg needed for expansion is totally straightforward, and will allow us to gain experience with the lint.

Later, we can start exploring subsets of std, which will likely require some more thoughtful design to find the right granularity.

Drawbacks

There are several potential drawbacks to the approach of this RFC:

  • It adds a significant level of pedantry about portability to Rust.
  • It does not provide airtight guarantees.
  • It may create compiler performance issues, due to the use of SAT solving.

The fact that it’s a lint offers some help with the first two points; the use of std as a default portability level should also help quite a bit with the pedantry.

The worry about SAT solving is harder to mitigate; there’s not much concrete evidence in either direction. But it is yet another place where the fact that it’s a lint could help: we may be able to simply skip checking pathological cases, if they indeed arise in practice. In any case, it’s hard to know how concerned to be until we try it.

While the fact that it’s a lint gives us more leeway to experiment, it’s also a lint that could produce widespread warnings throughout the ecosystem, so we need to exercise care.

Alternatives

The main alternatives are:

  • Give up on encouraging “portability by default”, and instead just land APIs in their natural location using today’s cfg system. This is certainly the less costly way to go. It’s also forward-compatible with implementing the proposed lint, so we should discuss the possibility of landing APIs under cfg even before the lint is implemented.

  • Use a less precise checking strategy. In particular, rather than trying to compare portabilities in a detailed, item-level way, we might just require some crate-level “opt in”. That could either take the form of acknowledging “this code makes assumptions beyond the mainstream platform”, or might list the specific cfg assumptions the code is allowed to make. Of course, the downside is that you get much less help making sure that your APIs are properly labeled in place.

How we teach this

For people simply using libraries, this feature “teaches itself” by generating warnings. Those warnings should make clear what to do to fix the problem, and ideally provide extended error information that describes the system in more detail.

For library authors, the documentation for cfg and match_cfg would explain the implications for the lint, and walk through several examples illustrating the scenarios that arise in practice.

Unresolved questions

Extensions to cfg itself

If we allow cfg to go beyond simple key-value pairs, for example to talk about ranges, we will need to accommodate that somehow in the lint. One plausible approach would be to use something more like SMT solving, which incorporates reasoning about things like ordering constraints in addition to basic SAT questions.

External libraries

It’s not clear what the story should be for a library like libc, which currently involves intricate uses of cfg. We should have some idea for how to approach such cases before landing the RFC.

The standard library

To what extent does this proposal obviate the need for the std facade? Might it be possible to deprecate libcore in favor of the “subsetting std” approach?

Cargo features

It’s unclear whether, or how, to extend this approach to deal with Cargo features. In particular, features are namespaced per crate, so there’s no way to use the cfg system today to talk about upstream features.

Appendix: possible extensions

match_cfg

The original version of this RFC was more expansive, and proposed a match_cfg macro that provided some additional checking.

The match_cfg macro takes a sequence of cfg patterns, followed by => and an expression. Its syntax and semantics resembles that of match. However, there are some special considerations when checking portability:

  • When descending into an arm of a match_cfg, the arm is checked against portability that includes the pattern for the arm.

  • The portability for the match_cfg itself is understood as any(p1, ..., p_n) where the match_cfg patterns are p1 through p_n.

Thus, for example, the following code will pass the lint:

#[cfg(windows)]
fn windows_only() { .. }

#[cfg(unix)]
fn unix_only() { .. }

#[cfg(any(windows, unix))]
fn portable() {
    // the expression here has portability `any(windows, unix)`
    match_cfg! {
        windows => {
            // allowed because we are within a scope with
            // portability `all(any(windows, unix), windows)`
            windows_only()
        }
        unix => {
            // allowed because we are within a scope with
            // portability `all(any(windows, unix), unix)`
            unix_only()
        }
    }
}

If you have a match_cfg that covers all cases (like windows and not(windows)), then it imposes no portability constraints on its context.

On more reflection, though, this extension doesn’t seem so worthwhile: while it provides some additional checking, the fact remains that only the currently-enabled cfg is fully checked, so the additional guarantee you get is somewhat mixed. It’s also a rare (maybe non-existent) error to explicitly write code that’s broken down by platforms, but forget one of the platforms you wish to cover.

We can, however, add match_cfg as a backwards-compatible extension at any time.

Summary

This RFC proposes the addition of two macros to the global prelude, eprint! and eprintln!. These are exactly the same as print! and println!, respectively, except that they write to standard error instead of standard output.

An implementation already exists.

Motivation

This proposal will improve the ergonomics of the Rust language for development of command-line tools and “back end” / “computational kernel” programs. Such programs need to maintain a distinction between their primary output, which will be fed to the next element in a computational “pipeline”, and their status reports, which should go directly to the user. Conventionally, standard output should receive the primary output and standard error should receive status reports.

At present, writing text to standard output is very easy, using the print(ln)! macros, but writing text to standard error is significantly more work: compare

println!("out of cheese error: {}", 42);
writeln!(stderr(), "out of cheese error: {}", 42).unwrap();

The latter may also require the addition of use std::io::stderr; and/or use std::io::Write; at the top of the file.

Because writing to stderr is more work, and requires introduction of more concepts, all of the tutorial documentation for the language uses println! for error messages, which teaches bad habits.

Detailed design

Two macros will be added to the global prelude. eprint! is exactly the same as print!, and eprintln! is exactly the same as println!, except that both of them write to standard error instead of standard output. “Standard error” is defined as “the same place where panic! writes messages.” In particular, using set_panic to change where panic messages go will also affect eprint! and eprintln!.
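
As a rough illustration only (this is not the proposed libstd implementation, which must route through the same internal machinery as panic! so that set_panic is honored, something a plain macro cannot do), the observable behavior is approximately:

macro_rules! eprintln {
    ($($arg:tt)*) => ({
        use std::io::Write;
        // Approximation: writes directly to stderr, ignoring `set_panic`,
        // and panics if the write fails.
        writeln!(std::io::stderr(), $($arg)*).unwrap();
    });
}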

Previous discussion has converged on agreement that both these macros will be useful, but has not arrived at a consensus about their names. An executive decision is necessary. It is the author’s opinion that eprint! and eprintln! have the strongest case in their favor, being (a) almost as short as print! and println!, (b) still visibly different from them, and (c) the names chosen by several third-party crate authors who implemented these macros themselves for internal use.

How We Teach This

We will need to add text to the reference manual, and especially to the tutorials, explaining the difference between “primary output” and “status reports”, so that programmers know when to use println! and when to use eprintln!. All of the existing examples and tutorials should be checked over for cases where println! is being used for a status report, and all such cases should be changed to use eprintln! instead; similarly for print!.

Drawbacks

The usual drawbacks of adding macros to the prelude apply. In this case, I think the most significant concern is to choose names that are unlikely to conflict with existing library crates’ exported macros. (Conversely, internal macros with the same names and semantics demonstrate that the names chosen are appropriate.)

The names eprintln! and eprint! are terse, differing only in a single letter from println! and print!, and it’s not obvious at a glance what the leading e means. (“This is too cryptic” is the single most frequently heard complaint from people who don’t like eprintln!.) However, once you do know what it means it is reasonably memorable, and anyone who is already familiar with stdout versus stderr is very likely to guess correctly what it means.

There is an increased teaching burden—but that’s the wrong way to look at it. The Book and the reference manual should have been teaching the difference between “primary output” and “status reports” all along. This is something programmers already need to know in order to write programs that fit well into the larger ecosystem. Any documentation that might be a new programmer’s first exposure to the concept of “standard output” has a duty to explain that there is also “standard error”, and when you should use which.

Alternatives

It would be inappropriate to introduce printing-to-stderr macros whose behavior did not exactly parallel the existing printing-to-stdout macros; I will not discuss that possibility further.

We could provide only eprintln!, omitting the no-newline variant. Most error messages should be one or more complete lines, so it’s not obvious that we need eprint!. However, standard error is also the appropriate place to send progress messages, and it is common to want to print partial lines in progress messages, as this is a natural way to express “a time-consuming computation is running”. For example:

Particle        0 of      200: (0.512422, 0.523495, 0.481173)  ( 1184 ms)
Particle        1 of      200: (0.521386, 0.543189, 0.473058)  ( 1202 ms)
Particle        2 of      200: (0.498974, 0.538118, 0.488474)  ( 1146 ms)
Particle        3 of      200: (0.546846, 0.565138, 0.500004)  ( 1171 ms)
Particle        4 of      200: _

We could choose different names. Quite a few other possibilities have been suggested in the pre-RFC and RFC discussions; they fall into three broad classes:

  • error(ln)! and err(ln)! are ruled out as too likely to collide with third-party crates. error! in particular is already taken by the log crate.

  • println_err!, printlnerr!, errprintln!, and several other variants on this theme are less terse, but also more typing. It is the author’s personal opinion that minimizing additional typing here is a Good Thing. People do live with fprintf(stderr, ...) in C, but on the other hand there is a lot of sloppy C out there that sends its error messages to stdout. I want to minimize the friction in using eprintln! once you already know what it means.

    It is also highly desirable to put the distinguishing label at the beginning of the macro name, as this makes the difference stand out more when skimming code.

  • aprintln!, dprintln!, uprintln!, println2!, etc. are not less cryptic than eprintln!, and the official name of standard I/O stream 2 is “standard error”, even though it’s not just for errors, so e is the best choice.

Finally, we could think of some way to improve the ergonomics of writeln! so that we don’t need the new macros at all. There are four fundamental problems with that, though:

  1. writeln!(stderr(), ...) is always going to be more typing than eprintln!(...). (Again, people do live with fprintf(stderr, ...) in C, but again, minimizing usage friction is highly desirable.)

  2. On a similar note, use of writeln! requires use std::io::Write, in contrast to C where #include <stdio.h> gets you both printf and fprintf. I am not sure how often this would be the only use of writeln! in complex programs, however.

  3. writeln! returns a Result, which must be consumed; this is appropriate for the intended core uses of writeln!, but means tacking .unwrap() on the end of every use to print diagnostics (if printing diagnostics fails, it is almost always the case that there’s nothing more sensible to do than crash).

  4. writeln!(stderr(), ...) is unaffected by set_panic() (just as writeln!(stdout(), ...) is unaffected by set_print()). This is arguably a bug. On the other hand, it is also arguably the Right Thing.

Unresolved questions

See discussion above.

Summary

Add an unstable sort to libcore.

Motivation

At the moment, the only sort function we have in libstd is slice::sort. It is stable, allocates additional memory, and is unavailable in #![no_std] environments.

The sort function is stable, which is a good but conservative default. However, stability is rarely a required property in practice, and some other characteristics of sort algorithms like higher performance or lower memory overhead are often more desirable.

Having a performant, non-allocating unstable sort function in libcore would cover those needs. At the moment Rust is not offering this solution as a built-in (only crates), which is unusual for a systems programming language.

Q: What is stability?
A: A sort function is stable if it doesn’t reorder equal elements. For example:

let orig = vec![(0, 5), (0, 4)];
let mut v = orig.clone();

// Stable sort preserves the original order of equal elements.
v.sort_by_key(|p| p.0);
assert!(orig == v); // OK!

// Unstable sort may or may not preserve the original order.
v.sort_unstable_by_key(|p| p.0);
assert!(orig == v); // MAY FAIL!

Q: When is stability useful?
A: Not very often. A typical example is sorting columns in interactive GUI tables. E.g. you want to have rows sorted by column X while breaking ties by column Y, so you first click on column Y and then click on column X. This is a use case where stability is important.

Q: Can stable sort be performed using unstable sort?
A: Yes. If we transform [T] into [(T, usize)] by pairing every element with its index, then perform unstable sort (comparing elements first and breaking ties with the index), and finally remove the indices, the result will be equivalent to stable sort.
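
A sketch of that transformation, expressed with the sort_unstable proposed below (the helper name is illustrative):

fn stable_sort_via_unstable<T: Ord + Clone>(v: &mut [T]) {
    // Pair each element with its original index; lexicographic tuple
    // comparison breaks ties between equal elements by that index,
    // which preserves their original relative order.
    let mut paired: Vec<(T, usize)> = v.iter().cloned().zip(0..).collect();
    paired.sort_unstable();
    for (dst, (src, _idx)) in v.iter_mut().zip(paired) {
        *dst = src;
    }
}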

Q: Why is slice::sort stable?
A: Because stability is a good default. A programmer might call a sort function without checking in the documentation whether it is stable or unstable. It is very intuitive to assume stability, so having slice::sort perform unstable sorting might cause unpleasant surprises. See this story for an example.

Q: Why does slice::sort allocate?
A: It is possible to implement a non-allocating stable sort, but it would be considerably slower.

Q: Why is slice::sort not compatible with #![no_std]?
A: Because it allocates additional memory.

Q: How much faster can unstable sort be?
A: Sorting 10M 64-bit integers using pdqsort (an unstable sort implementation) is 45% faster than using slice::sort. Detailed benchmarks are here.

Q: Can unstable sort benefit from allocation?
A: Generally, no. There is no fundamental property in computer science saying so, but this has always been true in practice. Zero-allocation and instability go hand in hand.

Detailed design

The API will consist of three functions that mirror the current sort in libstd:

  1. core::slice::sort_unstable
  2. core::slice::sort_unstable_by
  3. core::slice::sort_unstable_by_key

By contrast, C++ has functions std::sort and std::stable_sort, where the defaults are set up the other way around.

Interface

pub trait SliceExt {
    type Item;

    // ...

    fn sort_unstable(&mut self)
        where Self::Item: Ord;

    fn sort_unstable_by<F>(&mut self, compare: F)
        where F: FnMut(&Self::Item, &Self::Item) -> Ordering;

    fn sort_unstable_by_key<B, F>(&mut self, f: F)
        where F: FnMut(&Self::Item) -> B,
              B: Ord;
}

Examples

let mut v = [-5i32, 4, 1, -3, 2];

v.sort_unstable();
assert!(v == [-5, -3, 1, 2, 4]);

v.sort_unstable_by(|a, b| b.cmp(a));
assert!(v == [4, 2, 1, -3, -5]);

v.sort_unstable_by_key(|k| k.abs());
assert!(v == [1, 2, -3, 4, -5]);

Implementation

Proposed implementation is available in the pdqsort crate.

Q: Why choose this particular sort algorithm?
A: First, let’s analyse what unstable sort algorithms other languages use:

  • C: quicksort
  • C++: introsort
  • D: introsort
  • Swift: introsort
  • Go: introsort
  • Crystal: introsort
  • Java: dual-pivot quicksort

The most popular sort is definitely introsort. Introsort is an implementation of quicksort that limits recursion depth. As soon as depth exceeds 2 * log(n), it switches to heapsort in order to guarantee O(n log n) worst-case. This method combines the best of both worlds: great average performance of quicksort with great worst-case performance of heapsort.
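
A schematic skeleton of introsort, for illustration only (the actual proposal is pdqsort, which layers further tricks on top of this shape; a production version would also use insertion sort for small slices):

fn introsort<T: Ord>(v: &mut [T]) {
    // Depth limit of roughly 2 * log2(n), as described above.
    let limit = 2 * (usize::BITS - v.len().leading_zeros());
    sort_rec(v, limit);
}

fn sort_rec<T: Ord>(v: &mut [T], limit: u32) {
    if v.len() <= 1 {
        return;
    }
    if limit == 0 {
        // Recursion went too deep: fall back to heapsort for a
        // guaranteed O(n log n) worst case.
        heapsort(v);
        return;
    }
    let pivot = partition(v);
    let (lo, hi) = v.split_at_mut(pivot);
    sort_rec(lo, limit - 1);
    sort_rec(&mut hi[1..], limit - 1);
}

// Lomuto partition around the last element; returns the pivot's final index.
fn partition<T: Ord>(v: &mut [T]) -> usize {
    let last = v.len() - 1;
    let mut store = 0;
    for i in 0..last {
        if v[i] <= v[last] {
            v.swap(i, store);
            store += 1;
        }
    }
    v.swap(store, last);
    store
}

fn heapsort<T: Ord>(v: &mut [T]) {
    // Build a max-heap, then repeatedly move the maximum to the end.
    for i in (0..v.len() / 2).rev() {
        sift_down(v, i);
    }
    for end in (1..v.len()).rev() {
        v.swap(0, end);
        sift_down(&mut v[..end], 0);
    }
}

fn sift_down<T: Ord>(v: &mut [T], mut root: usize) {
    loop {
        let mut max = 2 * root + 1;
        if max >= v.len() {
            return;
        }
        if max + 1 < v.len() && v[max + 1] > v[max] {
            max += 1;
        }
        if v[root] >= v[max] {
            return;
        }
        v.swap(root, max);
        root = max;
    }
}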

Java (talking about Arrays.sort, not Collections.sort) uses dual-pivot quicksort. It is an improvement of quicksort that chooses two pivots for finer grained partitioning, offering better performance in practice.

A recent improvement of introsort is pattern-defeating quicksort, which is substantially faster in common cases. One of the key tricks pdqsort uses is block partitioning described in the BlockQuicksort paper. This algorithm still hasn’t been built into any programming language’s standard library, but there are plans to include it in some C++ implementations.

Among all these, pdqsort is the clear winner. Some benchmarks are available here.

Q: Is slice::sort ever faster than pdqsort?
A: Yes, there are a few cases where it is faster. For example, if the slice consists of several pre-sorted sequences concatenated one after another, then slice::sort will most probably be faster. Another case is when using costly comparison functions, e.g. when sorting strings. slice::sort optimizes the number of comparisons very well, while pdqsort optimizes for fewer writes to memory at expense of slightly larger number of comparisons. But other than that, slice::sort should be generally slower than pdqsort.

Q: What about radix sort?
A: Radix sort is usually blind to patterns in slices. It treats totally random and partially sorted slices the same way. It is probably possible to improve it by combining it with some other techniques, but it’s not trivial. Moreover, radix sort is incompatible with comparison-based sorting, which makes it an awkward choice for a general-purpose API. On top of all this, it’s not even that much faster than pdqsort anyway.

How We Teach This

Stability is a confusing and loaded term. Function slice::sort_unstable might be misunderstood as a function that has unstable API. That said, there is no less confusing alternative to “unstable sorting”. Documentation should clearly state what “stable” and “unstable” mean.

slice::sort_unstable will be mentioned in the documentation for slice::sort as a faster non-allocating alternative. The documentation for slice::sort_unstable must also clearly state that it guarantees no allocation.

Drawbacks

The amount of code for sort algorithms will grow, and there will be more code to review.

It might be surprising to discover cases where slice::sort is faster than slice::sort_unstable. However, these peculiarities can be explained in documentation.

Alternatives

Unstable sorting is indistinguishable from stable sorting when sorting primitive integers. It’s possible to specialize slice::sort to fall back to slice::sort_unstable. This would improve performance for primitive integers in most cases, but patching cases type by type with different algorithms makes performance more inconsistent and less predictable.

Unstable sort guarantees no allocation. Instead of naming it slice::sort_unstable, it could also be named slice::sort_noalloc or slice::sort_unstable_noalloc. This may slightly improve clarity, but feels much more awkward.

Unstable sort can also be provided as a standalone crate instead of within the standard library. However, every other systems programming language has a fast unstable sort in standard library, so why shouldn’t Rust, too?

Unresolved questions

None.

Summary

Deprecate mem::uninitialized::<T> and mem::zeroed::<T> and replace them with a MaybeUninit<T> type for safer and more principled handling of uninitialized data.

Motivation

The problems with uninitialized centre around its usage with uninhabited types, and its interaction with Rust’s type layout invariants. The concept of “uninitialized data” is extremely problematic when it comes into contact with types like ! or Void.

For any given type, there may be valid and invalid bit-representations. For example, the type u8 consists of a single byte and all possible bytes can be sensibly interpreted as a value of type u8. By contrast, a bool also consists of a single byte but not all bytes represent a bool: the bit vectors [00000000] (false) and [00000001] (true) are valid bools whereas [00101010] is not. By further contrast, the type ! has no valid bit-representations at all. Even though it’s treated as a zero-sized type, the empty bit vector [] is not a valid representation and has no interpretation as a !.

As bool has both valid and invalid bit-representations, an uninitialized bool cannot be known to be invalid until it is inspected. At this point, if it is invalid, the compiler is free to invoke undefined behaviour. By contrast, an uninitialized ! can only possibly be invalid. Without even inspecting such a value the compiler can assume that it’s working in an impossible state-of-affairs whenever such a value is in scope. This is the logical basis for using a return type of ! to represent diverging functions. If we call a function which returns bool, we can’t assume that the returned value is invalid and we have to handle the possibility that the function returns. However if a function call returns !, we know that the function cannot sensibly return. Therefore we can treat everything after the call as dead code and we can write-off the scenario where the function does return as being undefined behaviour.

The issue then is what to do about uninitialized::<T>() where T = !? uninitialized::<T> is meaningless for uninhabited T and is currently instant undefined behaviour when T = ! - even if the “value of type !” is never read. The type signature of uninitialized::<!> is, after all, that of a diverging function:

fn mem::uninitialized::<!>() -> !

Yet calling this function does not diverge! It just breaks everything then eats your laundry instead.

This problem is most prominent with ! but also applies to other types that have restrictions on the values they can carry. For example, Some(mem::uninitialized::<bool>()).is_none() could actually return true because uninitialized memory could violate the invariant that a bool is always [00000000] or [00000001] – and Rust relies on this invariant when doing enum layout. So, mem::uninitialized::<bool>() is instantaneous undefined behavior just like mem::uninitialized::<!>(). This also affects mem::zeroed when considering types where the all-0 bit pattern is not valid, like references: mem::zeroed::<&'static i32>() is instantaneous undefined behavior.

Tracking uninitializedness in the type

An alternative way of representing uninitialized data is through a union type:

union MaybeUninit<T> {
    uninit: (),
    value: T,
}

Instead of creating an “uninitialized value”, we can create a MaybeUninit initialized with uninit: (). Then, once we know that the value in the union is valid, we can extract it with my_uninit.value. This is a better way of handling uninitialized data because it doesn’t involve lying to the type system and pretending that we have a value when we don’t. It also better represents what’s actually going on: we never really have a value of type T when we’re using uninitialized::<T>, what we have is some memory that contains either a value (value: T) or nothing (uninit: ()), with it being the programmer’s responsibility to keep track of which state we’re in. Notice that creating a MaybeUninit<T> is safe for any T! Only when accessing my_uninit.value, we have to be careful to ensure this has been properly initialized.
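
For example, with the union defined above (writing to a Copy field of a union is safe; reading any union field is unsafe):

let mut x: MaybeUninit<u32> = MaybeUninit { uninit: () };
// Safe: `u32` is `Copy`, so assigning the field cannot skip a destructor.
x.value = 42;
// Unsafe: reading the field asserts that it has been initialized.
let v = unsafe { x.value };
assert_eq!(v, 42);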

To see how this can replace uninitialized and fix bugs in the process, consider the following code:

fn catch_an_unwind<T, F: FnOnce() -> T>(f: F) -> Option<T> {
    let mut foo = unsafe {
        mem::uninitialized::<T>()
    };
    let mut foo_ref = &mut foo as *mut T;

    match std::panic::catch_unwind(|| {
        let val = f();
        unsafe {
            ptr::write(foo_ref, val);
        }
    }) {
        Ok(()) => Some(foo),
        Err(_) => None
    }
}

Naively, this code might look safe. The problem though is that by the time we get to let mut foo_ref we’re already saying we have a value of type T. But we don’t, and for T = ! this is impossible. And so if this function is called with a diverging callback it will invoke undefined behaviour before it even gets to catch_unwind.

We can fix this by using MaybeUninit instead:

fn catch_an_unwind<T, F: FnOnce() -> T>(f: F) -> Option<T> {
    let mut foo: MaybeUninit<T> = MaybeUninit {
        uninit: (),
    };
    let mut foo_ref = &mut foo as *mut MaybeUninit<T>;

    match std::panic::catch_unwind(|| {
        let val = f();
        unsafe {
            ptr::write(&mut (*foo_ref).value, val);
        }
    }) {
        Ok(()) => {
            unsafe {
                Some(foo.value)
            }
        },
        Err(_) => None
    }
}

Note the difference: we’ve moved the unsafe block to the part of the code which is actually unsafe - where we have to assert to the compiler that we have a valid value. And we only ever tell the compiler we have a value of type T where we know we actually do have a value of type T. As such, this is fine to use with any T, including !. If the callback diverges then it’s not possible to get to the unsafe block and try to read the non-existent value.

Given that it’s so easy for code using uninitialized to hide bugs like this, and given that there’s a better alternative, this RFC proposes deprecating uninitialized and introducing the MaybeUninit type into the standard library as a replacement.

Detailed design

Add the aforementioned MaybeUninit type to the standard library:

pub union MaybeUninit<T> {
    uninit: (),
    value: ManuallyDrop<T>,
}

The type should have at least the following interface (Playground link):

impl<T> MaybeUninit<T> {
    /// Create a new `MaybeUninit` in an uninitialized state.
    ///
    /// Note that dropping a `MaybeUninit` will never call `T`'s drop code.
    /// It is your responsibility to make sure `T` gets dropped if it got initialized.
    pub fn uninitialized() -> MaybeUninit<T> {
        MaybeUninit {
            uninit: (),
        }
    }

    /// Create a new `MaybeUninit` in an uninitialized state, with the memory being
    /// filled with `0` bytes.  It depends on `T` whether that already makes for
    /// proper initialization. For example, `MaybeUninit<usize>::zeroed()` is initialized,
    /// but `MaybeUninit<&'static i32>::zeroed()` is not because references must not
    /// be null.
    ///
    /// Note that dropping a `MaybeUninit` will never call `T`'s drop code.
    /// It is your responsibility to make sure `T` gets dropped if it got initialized.
    pub fn zeroed() -> MaybeUninit<T> {
        let mut u = MaybeUninit::<T>::uninitialized();
        unsafe { u.as_mut_ptr().write_bytes(0u8, 1); }
        u
    }

    /// Set the value of the `MaybeUninit`. This overwrites any previous value without dropping it.
    pub fn set(&mut self, val: T) {
        unsafe {
            self.value = ManuallyDrop::new(val);
        }
    }

    /// Extract the value from the `MaybeUninit` container.  This is a great way
    /// to ensure that the data will get dropped, because the resulting `T` is
    /// subject to the usual drop handling.
    ///
    /// # Unsafety
    ///
    /// It is up to the caller to guarantee that the `MaybeUninit` really is in an initialized
    /// state, otherwise this will immediately cause undefined behavior.
    pub unsafe fn into_inner(self) -> T {
        std::ptr::read(&*self.value)
    }

    /// Get a reference to the contained value.
    ///
    /// # Unsafety
    ///
    /// It is up to the caller to guarantee that the `MaybeUninit` really is in an initialized
    /// state, otherwise this will immediately cause undefined behavior.
    pub unsafe fn get_ref(&self) -> &T {
        &*self.value
    }

    /// Get a mutable reference to the contained value.
    ///
    /// # Unsafety
    ///
    /// It is up to the caller to guarantee that the `MaybeUninit` really is in an initialized
    /// state, otherwise this will immediately cause undefined behavior.
    pub unsafe fn get_mut(&mut self) -> &mut T {
        &mut *self.value
    }

    /// Get a pointer to the contained value. Reading from this pointer will be undefined
    /// behavior unless the `MaybeUninit` is initialized.
    pub fn as_ptr(&self) -> *const T {
        unsafe { &*self.value as *const T }
    }

    /// Get a mutable pointer to the contained value. Reading from this pointer will be undefined
    /// behavior unless the `MaybeUninit` is initialized.
    pub fn as_mut_ptr(&mut self) -> *mut T {
        unsafe { &mut *self.value as *mut T }
    }
}
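
Putting the pieces together, a small usage sketch of this interface:

let mut x: MaybeUninit<u32> = MaybeUninit::uninitialized();
x.set(42);
// Safe only because the line above initialized `x`.
let v = unsafe { x.into_inner() };
assert_eq!(v, 42);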

Deprecate uninitialized with a deprecation message that points people to the MaybeUninit type. Make calling uninitialized on an empty type trigger a runtime panic which also prints the deprecation message.

How We Teach This

Correct handling of uninitialized data is an advanced topic and should probably be left to The Rustonomicon. There should be a paragraph somewhere therein introducing the MaybeUninit type.

The documentation for uninitialized should explain the motivation for these changes and direct people to the MaybeUninit type.

Drawbacks

This will be a rather large breaking change as a lot of people are using uninitialized. However, much of this code already likely contains subtle bugs.

Alternatives

  • Not do this.
  • Just make uninitialized::<!> panic instead (making !’s behaviour surprisingly inconsistent with all the other types).
  • Introduce an Inhabited auto-trait for inhabited types and add it as a bound to the type argument of uninitialized.
  • Disallow using uninhabited types with uninitialized by making it behave like transmute does today - by having restrictions on its type arguments which are enforced outside the trait system.

Unresolved questions

None known.

Future directions

Ideally, Rust’s type system should have a way of talking about initializedness statically. In the past there have been proposals for new pointer types which could safely handle uninitialized data. We should seriously consider pursuing one of these proposals.

Summary

Allow for local variables, function arguments, and some expressions to have an unsized type, and implement it by storing the temporaries in variably-sized allocas.

Have repeat expressions with a length that captures local variables be such an expression, returning a [T] slice.

Provide some optimization guarantees that unnecessary temporaries will not create unnecessary allocas.

Motivation

There are 2 motivations for this RFC:

  1. Passing unsized values, such as trait objects, to functions by value is often desired. Currently, this must be done through a Box<T> with an unnecessary allocation.

One particularly common example is passing closures that consume their environment without using monomorphization. One would like for this code to work:

fn takes_closure(f: FnOnce()) { f(); }

But today you have to use a hack, such as taking a Box<FnBox<()>>.

  2. Allocating a runtime-sized variable on the stack is important for good performance in some use-cases - see RFC #1808, which this is intended to supersede.

Detailed design

Unsized Rvalues - language

Remove the rule that requires all locals and rvalues to have a sized type. Instead, require the following:

  1. The following expressions must always return a Sized type:
    1. Function calls, method calls, operator expressions
      • implementing unsized return values for function calls would require the called function to do the alloca in our stack frame.
    2. ADT expressions
      • see alternatives
    3. cast expressions
      • this seems like an implementation simplicity thing. These can only be trivial casts.
  2. The RHS of assignment expressions must always have a Sized type.
    • Assigning an unsized type is impossible because we don’t know how much memory is available at the destination. This applies to ExprAssign assignments and not to StmtLet let-statements.

This also allows passing unsized values to functions, with the ABI being as if a &move pointer was passed (a (by-move-data, extra) pair). This also means that methods taking self by value are object-safe, though vtable shims are sometimes needed to translate the ABI (as the callee-side intentionally does not pass extra to the fn in the vtable, no vtable shim is needed if the vtable function already takes its argument indirectly).

For example:

struct StringData {
    len: usize,
    data: [u8],
}

fn foo(s1: Box<StringData>, s2: Box<StringData>, cond: bool) {
    // this creates a VLA copy of either `s1.data` or `s2.data` on
    // the stack.
    let mut s = if cond {
        s1.data
    } else {
        s2.data
    };
    drop(s1);
    drop(s2);
    consume(s); // pass the unsized value to another function by value
}

// `consume` is a hypothetical helper taking `[u8]` by value,
// as this RFC would permit.
fn consume(s: [u8]) { /* ... */ }

fn example(f: for<'a> FnOnce(&'a X<'a>)) {
    let x = X::new();
    f(x); // aka FnOnce::call_once(f, (x,));
}

VLA expressions

Allow repeat expressions to capture variables from their surrounding environment. If a repeat expression captures such a variable, it has type [T] with the length being evaluated at run-time. If the repeat expression does not capture any variable, the length is evaluated at compile-time. For example:

extern "C" {
   fn random() -> usize;
}

fn foo(n: usize) {
    let x = [0u8; n]; // x: [u8]
    let x = [0u8; n + (random() % 100)]; // x: [u8]
    let x = [0u8; 42]; // x: [u8; 42], like today
    let x = [0u8; random() % 100]; //~ ERROR constant evaluation error
}

“captures a variable” - as in RFC #1558 - is used as the condition for making the return be [T] because it is simple, easy to understand, and introduces no type-checking complications.

The last error message could have a user-helpful note, for example “extract the length to a local variable if you want a variable-length array”.
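
Following that note, the erroneous line above can be fixed by extracting the length to a local (reusing the random declared earlier; calling an extern fn requires unsafe):

fn fixed() {
    let n = unsafe { random() } % 100;
    let x = [0u8; n]; // x: [u8], since the repeat length captures `n`
}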

Unsized Rvalues - MIR

The way this is implemented in MIR is that operands, rvalues, and temporaries are allowed to be unsized. An unsized operand is always “by-ref”. Unsized rvalues are either a Use or a Repeat and both can be translated easily.

Unsized locals can never be reassigned within a scope. When first assigning to an unsized local, a stack allocation is made with the correct size.

MIR construction remains unchanged.

Guaranteed Temporary Elision

MIR likes to create lots of temporaries for order-of-evaluation (OOE) reasons. We should optimize them out in a guaranteed way in these cases (FIXME: extend these guarantees to locals, aka NRVO?).

TODO: add description of problem & solution.

How We Teach This

Passing arguments to functions by value should not be too complicated to teach. I would like VLAs to be mentioned in the book.

The “guaranteed temporary elimination” rules require more work to teach. It might be better to come up with new rules entirely.

Drawbacks

In Unsafe code, it is very easy to create unintended temporaries, such as in:

unsafe fn poke(ptr: *mut [u8]) { /* .. */ }
unsafe fn foo(mut a: [u8]) {
    let ptr: *mut [u8] = &mut a;
    // here, `a` must be copied to a temporary, because
    // `poke(ptr)` might access the original.
    bar(a, poke(ptr));
}

If we make [u8] be Copy, that would be even easier, because even uses of poke(ptr); after the function call could potentially access the supposedly-valid data behind a.

And even if it is not as easy, it is possible to accidentally create temporaries in safe code.

Unsized temporaries are dangerous - they can easily cause aborts through stack overflow.

Alternatives

The bikeshed

There are several alternative options for the VLA syntax.

  1. The RFC choice, [t; φ] has type [T; φ] if φ captures no variables and type [T] if φ captures a variable.
    • pro: can be understood using “HIR”/resolution only.
    • pro: requires no additional syntax.
    • con: might be confusing at first glance.
    • con: [t; foo()] requires the length to be extracted to a local.
  2. The “permissive” choice: [t; φ] has type [T; φ] if φ is a constexpr, otherwise [T]
    • pro: allows the most code
    • pro: requires no additional syntax.
    • con: depends on what is exactly a const expression. This is a big issue because that is both non-local and might change between rustc versions.
  3. Use the expected type - [t; φ] has type [T] if it is evaluated in a context that expects that type (for example [t; foo()]: [T]) and [T; _] otherwise.
    • pro: in most cases, very human-visible.
    • pro: requires no additional syntax.
    • con: relies on the notion of “expected type”. While I think we do have to rely on that in the unsafe code semantics of &foo borrow expressions (as in, whether a borrow is treated as a “safe” or “unsafe” borrow - I’ll write more details sometime), it might be better to not rely on expected types too much.
  4. use an explicit syntax, for example [t; virtual φ].
    • bikeshed: exact syntax.
    • pro: very explicit and visible.
    • con: more syntax.
  5. use an intrinsic, std::intrinsics::repeat(t, n) or something.
    • pro: theoretically minimizes changes to the language.
    • con: requires returning unsized values from intrinsics.
    • con: unergonomic to use.

Unsized ADT Expressions

Allowing unsized ADT expressions would make unsized structs constructible without using unsafe code, as in:

let len_ = s.len();
let p = Box::new(PascalString {
    length: len_,
    data: *s
});

However, without some way to guarantee that this can be done without allocas, that might be a large footgun.

Copy Slices

One somewhat-orthogonal proposal that came up was to make Clone (and therefore Copy) not depend on Sized, and to make [u8] be Copy, by moving the Self: Sized bound from the trait to the methods, i.e. using the following declaration:

pub trait Clone {
    fn clone(&self) -> Self where Self: Sized;
    fn clone_from(&mut self, source: &Self) where Self: Sized {
        // ...
    }
}

That would be a backwards-compatibility-breaking change, because today T: Clone + ?Sized (or of course Self: Clone in a trait context, with no implied Self: Sized) implies that T: Sized, but it might be that its impact is small enough to allow (and even if not, it might be worth it for Rust 2.0).

Unresolved questions

How can we mitigate the risk of unintended unsized or large allocas? Note that the problem already exists today with large structs/arrays. A MIR lint against large/variable stack sizes would probably help users avoid these stack overflows. Do we want it in Clippy? rustc?

How do we handle truly-unsized DSTs when we get them? They can theoretically be passed to functions, but they can never be put in temporaries.

Accumulative allocas (aka 'fn borrows) are beyond the scope of this RFC.

See alternatives.

Summary

This is a proposal for the Rust grammar to support a vert | at the beginning of a pattern. Consider the following example:

use E::*;

enum E { A, B, C, D }

// This is valid Rust
match foo {
    A | B | C | D => (),
}

// This is an example of what this proposal should allow.
match foo {
    | A | B | C | D => (),
}

Motivation

This takes a feature that is nice about F# and allows it via a straightforward extension of the current Rust grammar. After having used this in F#, it seems limiting not to support it at the language level.

F# Context

In F#, enumerations (called unions) are declared in the following fashion where all of these are equivalent:

// Normal union
type IntOrBool = I of int | B of bool
// For consistency, have all lines look the same
type IntOrBool = 
   | I of int
   | B of bool
// Collapsing onto a single line is allowed
type IntOrBool = | I of int | B of bool

Their match statements adopt a similar style to this. Note that every | is aligned, something which is not possible with current Rust:

match foo with
    | I -> ""
    | B -> ""

Maximizing | alignment

In Rust, about the best we can do is an inconsistent alignment with one of the following two options:

use E::*;

enum E { A, B, C, D }

match foo {
//  |
//  V Inconsistently missing a `|`.
      A
    | B
    | C
    | D => (),
}

match foo {
    A |
    B |
    C |
    D => (),
//    ^ Also inconsistent but since this is the last in the sequence, not having 
//    | a followup vert could be considered sensible given that no more follow.
}

This proposal would allow the example to have the following form:

use E::*;

enum E { A, B, C, D }

match foo {
    | A
    | B
    | C
    | D => (),
//  ^ Gained consistency by having a matching vert.
}

Flexibility in single line matches

It would allow these examples which are all equivalent:

use E::*;

enum E { A, B, C, D }

// A preceding vert
match foo {
    | A | B | C | D => (),
}

// A match as is currently allowed
match foo {
    A | B | C | D => (),
}

There should be no ambiguity about what either of these means. Preference between these should just come down to a choice of style.

Benefits to macros

This benefits macros: a macro that expands a list of patterns into the alternatives of a single match arm can emit a uniform leading | for every pattern, instead of special-casing the first one; see the sketch below.
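
For illustration, here is a hedged sketch (the macro name any_of is hypothetical) of what a leading vert enables:

macro_rules! any_of {
    ($e:expr, $($pat:pat),+) => {
        // Every repetition emits `| $pat`, so the arm begins with a
        // leading vert, which is exactly what this proposal permits;
        // without it, the first pattern would need special treatment.
        match $e {
            $( | $pat )+ => true,
            _ => false,
        }
    };
}

// any_of!(foo, A, B) expands to:
//
//     match foo {
//         | A | B => true,
//         _ => false,
//     }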

Multiple branches

All of these matches are equivalent, each written in a different style:

use E::*;

enum E { A, B, C, D }

match foo {
    A | B => println!("Give me A | B!"),
    C | D => println!("Give me C | D!"),
}

match foo {
    | A | B => println!("Give me A | B!"),
    | C | D => println!("Give me C | D!"),
}

match foo {
    | A
    | B => println!("Give me A | B!"),
    | C
    | D => println!("Give me C | D!"),
}

match foo {
    A | B =>
        println!("Give me A | B!"),
    C | D =>
        println!("Give me C | D!"),
}

Comparing misalignment

use E::*;

enum E { A, B, C }

match foo {
    | A
    | B => {},
    | C => {}
//  ^ Following the style above, a `|` could be placed before the first
//    element of every branch.
}

match foo {
    | A
    | B => {},
    C => {}
//  ^ Including a `|` for the `A` but not for the `C` seems inconsistent
//    but hardly invalid. Branches *always* follow the `=>`. Not something
//    a *grammar* should be greatly concerned about.
}

Detailed design

I don’t know about the implementation but the grammar could be updated so that an optional | is allowed at the beginning. Nothing else in the grammar should need updating.

// Before
match_pat : pat [ '|' pat ] * [ "if" expr ] ? ;
// After
match_pat : '|' ? pat [ '|' pat ] * [ "if" expr ] ? ;

How We Teach This

Adding examples for this are straightforward. You just include an example pointing out that leading verts are allowed. Simple examples such as below should be easy to add to all different resources.

use Letter::*;

enum Letter {
    A,
    B,
    C,
    D,
}

fn main() {
    let a = Letter::A;
    let b = Letter::B;

    let _msg = match a {
        A => "A",
        // Can do alternatives with a `|`.
        B | C | D => "B, C, or D",
    };

    let _msg = match b {
        | A => "A",
        // Leading `|` is allowed.
        | B
        | C
        | D => "B, C, or D",
    };
}

Drawbacks

N/A

Alternatives

N/A

Unresolved questions

N/A

Summary

Allow the ? operator to be used in main, and in #[test] functions and doctests.

To make this possible, the return type of these functions is generalized from () to a new trait, provisionally called Termination. libstd implements this trait for a set of types partially TBD (see list below); applications can provide impls themselves if they want.

There is no magic added to function signatures in rustc. If you want to use ? in either main or a #[test] function you have to write -> Result<(), ErrorT> (or whatever) yourself. Initially, it will also be necessary to write a hidden function head for any doctest that wants to use ?, but eventually (see the deployment plan below) the default doctest template will be adjusted to make this unnecessary most of the time.

Pre-RFC discussion. Prior RFC issue.

Motivation

It is currently not possible to use ? in main, because main’s return type is required to be (). This is a trip hazard for new users of the language, and complicates “programming in the small”. For example, consider a version of the CSV-parsing example from the Rust Book (I have omitted a chunk of command-line parsing code and the definition of the Row type, to keep it short):

fn main() {
    let mut argv = env::args();
    let _ = argv.next();
    let data_path = argv.next().unwrap();
    let city = argv.next().unwrap();

    let file = File::open(data_path).unwrap();
    let mut rdr = csv::Reader::from_reader(file);

    for row in rdr.decode::<Row>() {
        let row = row.unwrap();

        if row.city == city {
            println!("{}, {}: {:?}",
                row.city, row.country,
                row.population.expect("population count"));
        }
    }
}

The Rust Book uses this as a starting point for a demonstration of how to do error handing properly, i.e. without using unwrap and expect. But suppose this is a program for your own personal use. You are only writing it in Rust because it needs to crunch an enormous data file and high-level scripting languages are too slow. You don’t especially care about proper error handling, you just want something that works, with minimal programming effort. You’d like to not have to remember that this is main and you can’t use ?. You would like to write instead

fn main() -> Result<(), Box<Error>> {
    let mut argv = env::args();
    let _ = argv.next();
    let data_path = argv.next()?;
    let city = argv.next()?;

    let file = File::open(data_path)?;
    let mut rdr = csv::Reader::from_reader(file);

    for row in rdr.decode::<Row>() {
        let row = row?;

        if row.city == city {
            println!("{}, {}: {:?}",
                row.city, row.country, row.population?);
        }
    }
    Ok(())
}

(Just to be completely clear, this is not intended to reduce the amount of error-handling boilerplate one has to write; only to make it be the same in main as it would be for any other function.)

For the same reason, it is not possible to use ? in doctests and #[test] functions. This is only an inconvenience for #[test] functions, same as for main, but it’s a major problem for doctests, because doctests are supposed to demonstrate normal usage, as well as testing functionality. Taking an example from the stdlib:

use std::net::UdpSocket;
let port = 12345;
let mut udp_s = UdpSocket::bind(("127.0.0.1", port)).unwrap(); // XXX
udp_s.send_to(&[7], ("127.0.0.1", 23451)).unwrap(); // XXX

The lines marked XXX have to use unwrap, because a doctest is the body of a main function, but in normal usage, they would be written

let mut udp_s = UdpSocket::bind(("127.0.0.1", port))?;
udp_s.send_to(&[7], ("127.0.0.1", 23451))?;

and that’s what the documentation ought to say. Documentation writers can work around this by including their own main as hidden code, but they shouldn’t have to.

On a related note, main returning () means that short-lived programs, designed to be invoked from the Unix shell or a similar environment, have to contain extra boilerplate in order to comply with those environments’ conventions, and must ignore the dire warnings about destructors not getting run in the documentation for process::exit. (In particular, one might be concerned that the program below will not properly flush and close io::stdout, and/or will fail to detect delayed write failures on io::stdout.) A typical construction is

fn inner_main() -> Result<(), ErrorT> {
    // ... stuff which may fail ...
    Ok(())
}

fn main() -> () {
    use std::process::exit;
    use libc::{EXIT_SUCCESS, EXIT_FAILURE};

    exit(match inner_main() {
        Ok(_) => EXIT_SUCCESS,

        Err(ref err) => {
            let progname = get_program_name();
            eprintln!("{}: {}\n", progname, err);

            EXIT_FAILURE
        }
    })
}

These problems can be solved by generalizing the return type of main and test functions.

Detailed design

The design goals for this new feature are, in decreasing order of importance:

  1. The ? operator should be usable in main, #[test] functions, and doctests. This entails these functions now returning a richer value than ().
  2. Existing code with fn main() -> () should not break.
  3. Errors returned from main in a hosted environment should not trigger a panic, consistent with the general language principle that panics are only for bugs.
  4. We should take this opportunity to increase consistency with platform conventions for process termination. These often include the ability to pass an “exit status” up to some outer environment, conventions for what that status means, and an expectation that a diagnostic message will be generated when a program fails due to a system error. However, we should not make things more complicated for people who don’t care.

Goal 1 dictates that the new return type for main will be Result<T, E> for some T and E. To minimize the necessary changes to existing code that wants to start using ? in main, T should be allowed to be (), but other types in that position may also make sense. The appropriate bound for E is unclear; there are plausible arguments for at least Error, Debug, and Display. This proposal selects Display, largely because application error types are not obliged to implement Error.
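As a minimal sketch of what the Display bound permits (AppError is a hypothetical type, and the example relies on the proposal’s Result impl described below): an application error type that implements Display but not Error can be returned from main.

use std::fmt;

// Hypothetical application error type: it implements `Display` but
// deliberately not `Error`.
struct AppError(String);

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "app error: {}", self.0)
    }
}

// Under this proposal, the `Display` bound on `E` makes this legal.
fn main() -> Result<(), AppError> {
    Err(AppError("the data file is missing".into()))
}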

To achieve goal 2 at the same time as goal 1, main’s return type must be allowed to vary from program to program. This can be dealt with by making the start lang item polymorphic (as described below) over a trait which both () and Result<(), E> implement, and similarly for doctests and #[test] functions.

Goals 3 and 4 are largely a matter of quality of implementation; at the level of programmer-visible interfaces, people who don’t care are well-served by not breaking existing code (which is goal 2) and by removing a way in which main is not like other functions (goal 1).

The Termination trait

When main returns a nontrivial value, the runtime needs to know two things about it: what error message, if any, to print, and what value to pass to std::process::exit. These are naturally encapsulated in a trait, which we are tentatively calling Termination, with this signature:

trait Termination {
    fn report(self) -> i32;
}

report is a call-once function; it consumes self. The runtime guarantees to call this function after main returns, but at a point where it is still safe to use eprintln! or io::stderr() to print error messages. report is not required to print error messages, and if it doesn’t, nothing will be printed. The value it returns will be passed to std::process::exit, and shall convey at least a notion of success or failure. The return type is i32 to match std::process::exit (which probably calls the C library’s exit primitive), but (as already documented for process::exit) on “most Unix-like” operating systems, only the low 8 bits of this value are significant.
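For illustration, an application could provide an impl for its own status type; the following is a hypothetical sketch, not one of the proposal’s standard impls:

// A program-specific status type choosing its own exit codes.
enum Outcome { Done, NothingToDo }

impl Termination for Outcome {
    fn report(self) -> i32 {
        match self {
            Outcome::Done => 0,
            Outcome::NothingToDo => 3, // an application-chosen code
        }
    }
}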

Standard impls of Termination

At least the following implementations of Termination will be added to libstd. (Code samples below use the constants EXIT_SUCCESS and EXIT_FAILURE for exposition; see below for discussion of what the actual numeric values should be.) The first two are essential to the proposal:

impl Termination for () {
    fn report(self) -> i32 { EXIT_SUCCESS }
}

This preserves backward compatibility: all existing programs, with fn main() -> (), will still satisfy the new requirement (which is effectively fn main() -> impl Termination, although the proposal does not actually depend on impl-trait return types).

impl<T: Termination, E: Display> Termination for Result<T, E> {
    fn report(self) -> i32 {
        match self {
            Ok(val) => val.report(),
            Err(ref err) => {
                print_diagnostics_for_error(err);
                EXIT_FAILURE
            }
        }
    }
}

This enables the use of ? in main. The type bound is somewhat more general than the minimum: we accept any type that satisfies Termination in the Ok position, not just (). This is because, in the presence of application impls of Termination, it would be surprising if fn main() -> FooT was acceptable but fn main() -> Result<FooT, ErrT> wasn’t, or vice versa. On the Err side, any displayable type is acceptable, but its value does not affect the exit status; this is because it would be surprising if an apparent error return could produce a successful exit status. (This restriction can always be relaxed later.)

Note that Box<T> is Display if T is Display, so special treatment of Box<Error> is not necessary.

Two additional impls are not strictly necessary, but are valuable for concrete known usage scenarios:

impl Termination for ! {
    fn report(self) -> i32 { unreachable!(); }
}

This allows programs that intend to run forever to be more self-documenting: fn main() -> ! will satisfy the bound on main’s return type. It might not be necessary to have code for this impl in libstd, since -> ! satisfies -> (), but it should appear in the reference manual anyway, so people know they can do that, and it may be desirable to include the code as a backstop against a main that does somehow return, despite declaring that it doesn’t.
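For illustration, a program that intends to run forever could then be written as follows (a sketch assuming the impl above):

fn main() -> ! {
    loop {
        // ... handle one request per iteration, forever ...
    }
}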

impl Termination for bool {
    fn report(self) -> i32 {
        if self { EXIT_SUCCESS } else { EXIT_FAILURE }
    }
}

This impl allows programs to generate both success and failure conditions for their outer environment without printing any diagnostics, by returning the appropriate values from main, possibly while also using ? to report error conditions where diagnostics should be printed. It is meant to be used by sophisticated programs that do all, or nearly all, of their own error-message printing, instead of having to call process::exit themselves.

The detailed behavior of print_diagnostics_for_error is left unspecified, but it is guaranteed to write diagnostics to io::stderr that include the Display text for the object it is passed, and not to unconditionally call panic!. When the object it is passed implements Error as well as Display, it should follow the cause chain if there is one (this may necessitate a separate Termination impl for Result<_, Error>, but that’s an implementation detail).

Changes to lang_start

The start “lang item”, the function that calls main, takes the address of main as an argument. Its signature is currently

#[lang = "start"]
fn lang_start(main: *const u8, argc: isize, argv: *const *const u8) -> isize

It will need to become generic, something like

#[lang = "start"]
fn lang_start<T: Termination>
    (main: fn() -> T, argc: isize, argv: *const *const u8) -> !

(Note: the current isize return type is incorrect; as things stand, the correct return type would be libc::c_int. We can avoid the entire issue by requiring lang_start to call process::exit or equivalent itself; this also moves one step toward not depending on the C runtime.)

The implementation for typical “hosted” environments will be something like

#[lang = "start"]
fn lang_start<T: Termination>
    (main: fn() -> T, argc: isize, argv: *const *const u8) -> !
{
    use panic;
    use sys;
    use sys_common;
    use sys_common::thread_info;
    use thread::Thread;

    sys::init();

    sys::process::exit(unsafe {
        let main_guard = sys::thread::guard::init();
        sys::stack_overflow::init();

        // Next, set up the current Thread with the guard information we just
        // created. Note that this isn't necessary in general for new threads,
        // but we just do this to name the main thread and to give it correct
        // info about the stack bounds.
        let thread = Thread::new(Some("main".to_owned()));
        thread_info::set(main_guard, thread);

        // Store our args if necessary in a squirreled away location
        sys::args::init(argc, argv);

        // Let's run some code!
        let exitcode = panic::catch_unwind(|| main().report())
            .unwrap_or(101);

        sys_common::cleanup();
        exitcode
    });
}

Changes to doctests

Simple doctests form the body of a main function, so they require only a small modification to rustdoc: when maketest sees that it needs to insert a function head for main, it will now write out

fn main() -> Result<(), ErrorT> {
   ...
   Ok(())
}

for some value of ErrorT to be worked out during deployment. This head will work correctly for function bodies without any uses of ?, so rustdoc does not need to parse each function body; it can use this head unconditionally.

If the doctest specifies its own function head for main (visibly or invisibly), then it is the programmer’s responsibility to give it an appropriate type signature, as for regular main.

Changes to the #[test] harness

The appropriate semantics for test functions with rich return values are straightforward: Call the report method on the value returned. If report returns a nonzero value, the test has failed. (Optionally, honor the Automake convention that exit code 77 means “this test cannot meaningfully be run in this context.”)
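In pseudocode, the harness-side check described above might look like this (TestOutcome and check_test_result are hypothetical names, assuming the Termination trait from earlier):

enum TestOutcome { Passed, Failed, Skipped }

// Hypothetical helper implementing the rule above.
fn check_test_result<T: Termination>(result: T) -> TestOutcome {
    match result.report() {
        0 => TestOutcome::Passed,
        77 => TestOutcome::Skipped, // the optional Automake convention
        _ => TestOutcome::Failed,
    }
}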

The required changes to the test harness are more complicated, because it supports six different type signatures for test functions:

pub enum TestFn {
    StaticTestFn(fn()),
    StaticBenchFn(fn(&mut Bencher)),
    StaticMetricFn(fn(&mut MetricMap)),
    DynTestFn(Box<FnBox<()>>),
    DynMetricFn(Box<for<'a> FnBox<&'a mut MetricMap>>),
    DynBenchFn(Box<TDynBenchFn + 'static>),
}

All of these need to be generalized to allow any return type that implements Termination. At the same time, it still needs to be possible to put TestFn instances into a static array.

For the static cases, we can avoid changing the test harness at all with a built-in macro that generates wrapper functions: for example, given

#[test]
fn test_the_thing() -> Result<(), io::Error> {
    let state = setup_the_thing()?; // expected to succeed
    do_the_thing(&state)?;          // expected to succeed
    Ok(())
}

#[bench]
fn bench_the_thing(b: &mut Bencher) -> Result<(), io::Error> {
    let state = setup_the_thing()?;
    b.iter(|| {
        let rv = do_the_thing(&state);
        assert!(rv.is_ok(), "do_the_thing returned {:?}", rv);
    });
    Ok(())
}

after macro expansion we would have

fn test_the_thing_inner() -> Result<(), io::Error> {
    let state = setup_the_thing()?; // expected to succeed
    do_the_thing(&state)?;          // expected to succeed
    Ok(())
}

#[test]
fn test_the_thing() -> () {
    let rv = test_the_thing_inner();
    assert_eq!(rv.report(), 0);
}

fn bench_the_thing_inner(b: &mut Bencher) -> Result<(), io::Error> {
    let state = setup_the_thing()?;
    b.iter(|| {
        let rv = do_the_thing(&state);
        assert!(rv.is_ok(), "do_the_thing returned {:?}", rv);
    });
    Ok(())
}

#[bench]
fn bench_the_thing(b: &mut Bencher) -> () {
    let rv = bench_the_thing_inner(b);
    assert_eq!(rv.report(), 0);
}

and similarly for StaticMetricFn (no example shown because I cannot find any actual uses of MetricMap anywhere in the stdlib, so I don’t know what a use looks like).

We cannot synthesize wrapper functions like this for dynamic tests. We could use trait objects to allow the harness to call Termination::report anyway: for example, assuming that runtest::run returns a Termination type, we would have something like

pub fn make_test_closure(config: &Config, testpaths: &TestPaths)
        -> test::TestFn {
    let config = config.clone();
    let testpaths = testpaths.clone();
    test::DynTestFn(Box::new(move |()| -> Box<Termination> {
       Box::new(runtest::run(config, &testpaths))
    }))
}

But this is not that much of an improvement on just checking the result inside the closure:

pub fn make_test_closure(config: &Config, testpaths: &TestPaths)
        -> test::TestFn {
    let config = config.clone();
    let testpaths = testpaths.clone();
    test::DynTestFn(Box::new(move |()| {
       let rv = runtest::run(config, &testpaths);
       assert_eq!(rv.report(), 0);
    }))
}

Considering also that dynamic tests are not documented and rarely used (the only cases I can find in the stdlib are as an adapter mechanism within libtest itself, and the compiletest harness) I think it makes most sense to not support rich return values from dynamic tests for now.

main in no-std environments

Some no-std environments do have a notion of processes that run and then exit, but do not have a notion of “exit status”. In this case, process::exit probably already ignores its argument, so main and the start lang item do not need to change. Similarly, in an environment where there is no such thing as an “error message”, io::stderr() probably already points to the bit bucket, so report functions can go ahead and use eprintln! anyway.

There are also environments where returning from main constitutes a bug. If you are implementing an operating system kernel, for instance, there may be nothing to return to. Then you want it to be a compile-time error for main to return anything other than !. If everything is implemented correctly, such environments should be able to get that effect by omitting all stock impls of Termination other than for !. Perhaps there should also be a compiler hook that allows such environments to refuse to let you impl Termination yourself.

The values used for EXIT_SUCCESS and EXIT_FAILURE by standard impls of Termination

The C standard only specifies 0, EXIT_SUCCESS and EXIT_FAILURE as arguments to the exit primitive. It does not require EXIT_SUCCESS to be zero, but it does require exit(0) to have the same effect as exit(EXIT_SUCCESS). POSIX does require EXIT_SUCCESS to be zero, and the only historical C implementation I am aware of where EXIT_SUCCESS was not zero was for VAX/VMS, which is probably not a relevant portability target for Rust. EXIT_FAILURE is only required (implicitly in C, explicitly in POSIX) to be nonzero. It is usually 1; I have not done a thorough survey to find out if it is ever anything else.

Within both the Unix and Windows ecosystems, there are several different semi-conflicting conventions that assign meanings to specific nonzero exit codes. It might make sense to include some support for these conventions in the stdlib (e.g. with a module that provides the same constants as sysexits.h), but that is beyond the scope of this RFC. What is important, in the context of this RFC, is for the standard impls of Termination to not get in the way of any program that wants to use one of those conventions. Therefore I am proposing that all the standard impls’ report functions should use 0 for success and 2 for failure. (It is important not to use 1, even though EXIT_FAILURE is usually 1, because some existing programs (notably grep) give 1 a specific meaning; as far as I know, 2 has no specific meaning anywhere.)
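Expressed as code, the constants used for exposition in the standard impls above would therefore be:

// Proposed values for the standard impls: 0 for success, 2 for
// failure. 1 is deliberately avoided because programs such as grep
// give it a specific meaning.
const EXIT_SUCCESS: i32 = 0;
const EXIT_FAILURE: i32 = 2;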

Deployment Plan

This is a complicated feature; it needs two mostly-orthogonal feature gates and a multi-phase deployment sequence.

The first feature gate is #![feature(rich_main_return)], which must be enabled to write a main function, test function, or doctest that returns something other than (). This is not a normal unstable-feature annotation; it has more in common with a lint check and may need to be implemented as such. It will probably be possible to stabilize this feature quickly—one or two releases after it is initially implemented.

The second feature gate is #![feature(termination_trait)], which must be enabled to make explicit use of the Termination trait, either by writing new impls of it, or by calling report directly. However, it is not necessary to enable this feature gate to merely return rich values from main, test functions, etc. (because in that case the call to report is in stdlib code). I think this is the semantics of an ordinary unstable-feature annotation on Termination, with appropriate use-this annotations within the stdlib. This feature should not be stabilized for at least a full release after the stabilization of the rich_main_return feature, because it has more complicated backward-compatibility implications, and because it’s not going to be used very often, so it will take longer to gain experience with it.

In addition to these feature gates, rustdoc will initially not change its template for main. In order to use ? in a doctest, at first it will be necessary for the doctest to specify its own function head. For instance, the UdpSocket example from the “motivation” section will initially need to be written

/// # #![feature(rich_main_return)]
/// # fn main() -> Result<(), io::Error> {
/// use std::net::UdpSocket;
/// let port = 12345;
/// let mut udp_s = UdpSocket::bind(("127.0.0.1", port))?;
/// udp_s.send_to(&[7], ("127.0.0.1", 23451))?;
/// # Ok(())
/// # }

After enough doctests have been updated, we can survey them to learn what the most appropriate default function signature for doctest main is, and only then should rustdoc’s template be changed. (Ideally, this would happen at the same time that rich_main_return is stabilized, but it might need to wait longer, depending on how enthusiastic people are about changing their doctests.)

How We Teach This

This should be taught alongside the ? operator and error handling in general. The stock Termination impls in libstd mean that simple programs that can fail don’t need to do anything special.

use std::io::{self, BufRead, Write};

fn main() -> Result<(), io::Error> {
    let stdin = io::stdin();
    let raw_stdout = io::stdout();
    let mut stdout = raw_stdout.lock();
    for line in stdin.lock().lines() {
        stdout.write_all(line?.trim().as_bytes())?;
        stdout.write_all(b"\n")?;
    }
    stdout.flush()
}

Programs that care about the exact structure of their error messages will still need to use main primarily for error reporting. Returning to the CSV-parsing example, a “professional” version of the program might look something like this (assume all of the boilerplate involved in the definition of AppError is just off the top of your screen; also assume that impl Termination for bool is available):

struct Args {
    progname: String,
    data_path: PathBuf,
    city: String
}

fn parse_args() -> Result<Args, AppError> {
    let mut argv = env::args_os();
    let progname = argv.next().ok_or(UsageError("missing program name"))?.into_string()?;
    let data_path = PathBuf::from(argv.next().ok_or(UsageError("missing data path"))?);
    let city = argv.next().ok_or(UsageError("missing city"))?.into_string()?;
    if argv.next().is_some() {
        return Err(UsageError("too many arguments"));
    }
    Ok(Args { progname, data_path, city })
}

fn process(city: &str, data_path: &Path) -> Result<(), AppError> {
    let file = File::open(data_path)?;
    let mut rdr = csv::Reader::from_reader(file);

    for row in rdr.decode::<Row>() {
        let row = row?;

        if row.city == city {
            println!("{}, {}: {:?}",
                row.city, row.country, row.population?);
        }
    }
    Ok(())
}

fn main() -> bool {
    match parse_args() {
        Err(err) => {
            eprintln!("{}", err);
            false
        },
        Ok(args) => {
            match process(&args.city, &args.data_path) {
                Err(err) => {
                    eprintln!("{}: {}: {}",
                              args.progname, args.data_path.display(), err);
                    false
                },
                Ok(_) => true
            }
        }
    }
}

and a detailed error-handling tutorial could build that up from the quick-and-dirty version. Notice that this is not using ? in main, but it is using the generalized main return value. The catch-block feature (part of RFC #243 along with ?; issue #39849) may well enable shortening this main and/or putting parse_args and process back inline.

Tutorial examples should still begin with fn main() -> () until the tutorial gets to the point where it starts explaining why panic! and unwrap are not for “normal errors”. The Termination trait should also be explained at that point, to illuminate how Results returned from main turn into error messages and exit statuses, but as a thing that most programs will not need to deal with directly.

Once the doctest default template is changed, doctest examples can freely use ? with no extra boilerplate, but #[test] examples involving ? will need their boilerplate adjusted.

Drawbacks

Generalizing the return type of main complicates libstd and/or the compiler. It also adds an additional thing to remember when complete newbies to the language get to error handling. On the other hand, people coming to Rust from other languages may find this less surprising than the status quo.

Alternatives

Do nothing; continue to live with the trip hazard, the extra boilerplate required to comply with platform conventions, and people using panic! to report ordinary errors because it’s less hassle. “Template projects” (e.g. quickstart) mean that one need not write out all the boilerplate by hand, but it’s still there.

Unresolved Questions

We need to decide what to call the new trait. The names proposed in the pre-RFC thread were Terminate, which I like OK but have changed to Termination because value traits should be nouns, and Fallible, which feels much too general, but could be OK if there were other uses for it? Relatedly, it is conceivable that there are other uses for Termination in the existing standard library, but I can’t think of any right now. (Thread join was mentioned in the pre-RFC, but that can already relay anything that’s Send, so I don’t see that it adds value there.)

We may discover during the deployment process that we want more impls for Termination. The question of what type rustdoc should use for its default main template is explicitly deferred till during deployment.

Some of the components of this proposal may belong in libcore, but note that the start lang item is not in libcore. It should not be a problem to move pieces from libstd to libcore later.

It would be nice if we could figure out a way to enable use of ? in dynamic test-harness tests, but I do not think this is an urgent problem.

All of the code samples in this RFC need to be reviewed for correctness and proper use of idiom.

Related Proposals

This proposal formerly included changes to process::ExitStatus intended to make it usable as a main return type. That has now been spun off as its own pre-RFC, so that we can take our time to work through the portability issues involved with going beyond C’s simple success/failure dichotomy without holding up this project.

There is an outstanding proposal to generalize ? (see also RFC issues #1718 and #1859); I think it is mostly orthogonal to this proposal, but we should make sure it doesn’t conflict and we should also figure out whether we would need more impls of Termination to make them play well together.

There is also an outstanding proposal to improve the ergonomics of ?-using functions by autowrapping fall-off-the-end return values in Ok; it would play well with this proposal, but is not necessary nor does it conflict.

Summary

Support the #[must_use] attribute on arbitrary functions, to make the compiler lint when a call to such a function is ignored. Mark PartialEq::{eq, ne} #[must_use] as well as PartialOrd::{lt, gt, le, ge}.

Motivation

The #[must_use] lint is extremely useful for ensuring that values that are likely to be important are handled, even if by just explicitly ignoring them with, e.g., let _ = ...;. This expresses the programmer’s intention clearly, so that there is less confusion about whether, for example, ignoring the possible error from a write call is intentional or just an accidental oversight.

Rust has got a lot of mileage out of connecting the #[must_use] lint to specific types: types like Result, MutexGuard (any guard, in general) and the lazy iterator adapters have narrow enough use cases that the programmer usually wants to do something with them. These types are marked #[must_use] and the compiler will emit a warning if a semicolon ever throws away a value of that type:

fn returns_result() -> Result<(), ()> {
    Ok(())
}

fn ignore_it() {
    returns_result();
}
test.rs:6:5: 6:11 warning: unused result which must be used, #[warn(unused_must_use)] on by default
test.rs:6     returns_result();
              ^~~~~~~~~~~~~~~~~

One of the most important use-cases for this would be annotating PartialEq::{eq, ne} with #[must_use].

There’s a bug in Android where instead of modem_reset_flag = 0; the affected file has modem_reset_flag == 0;. Rust currently does no better in this case: if you wrote modem_reset_flag == false; the compiler would be perfectly happy and wouldn’t warn you. By marking PartialEq #[must_use] the compiler would complain about things like:

    modem_reset_flag == false; //warning
    modem_reset_flag = false; //ok

See further discussion in #1812.

Detailed design

If a semicolon discards the result of a function or method tagged with #[must_use], the compiler will emit a lint message (under the same lint as #[must_use] on types). An optional message #[must_use = "..."] will be printed, to provide the user with more guidance.

#[must_use]
fn foo() -> u8 { 0 }


struct Bar;

impl Bar {
     #[must_use = "maybe you meant something else"]
     fn baz(&self) -> Option<String> { None }
}

fn qux() {
    foo(); // warning: unused result that must be used
    Bar.baz(); // warning: unused result that must be used: maybe you meant something else
}

The primary motivation is to mark PartialEq functions as #[must_use]:

#[must_use = "the result of testing for equality should not be discarded"]
fn eq(&self, other: &Rhs) -> bool;

The same goes for ne, and also for lt, gt, ge, and le in PartialOrd. There is no reason to discard the results of those operations. Because the attribute is attached to the trait declaration, the impls of these functions are not changed: the lint fires even for a custom impl.

Drawbacks

This adds a little more complexity to the #[must_use] system, and may be misused by library authors (but then, many features may be misused).

The rule as stated doesn’t cover every instance where a #[must_use] function is ignored: e.g. (foo()); and { ...; foo() }; will not be picked up, even though they merely pass the result through a piece of no-op syntax. This could be tweaked. Notably, the type-based rule doesn’t have this problem, since that sort of “passing-through” causes the outer piece of syntax to be of the #[must_use] type, and so is considered for the lint itself.
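To make the gap concrete, here is a sketch (reusing the hypothetical foo from the earlier example) of forms the stated rule would and would not catch:

#[must_use]
fn foo() -> u8 { 0 }

fn qux() {
    foo();     // linted: the semicolon discards the result directly
    (foo());   // not linted: the result passes through parentheses
    { foo() }; // not linted: the result passes through a block
}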

Marking functions #[must_use] is a breaking change in certain cases, e.g. if someone is ignoring their result and has the relevant lint (or warnings in general) set to be an error. This is a general problem of improving/expanding lints.

Alternatives

  • Adjust the rule to propagate #[must_use]-ness through parentheses and blocks, so that (foo());, { foo() }; and even if cond { foo() } else { 0 }; are linted.

  • Should we let particular impls of a function have this attribute? Current design allows you to attach it inside the declaration of the trait.

Unresolved questions

  • Should this be feature gated?

Summary

Add a notation for creating relative links in documentation comments (based on Rust item paths) and extend Rustdoc to automatically turn these into working links.

Motivation

It is good practice in the Rust community to add documentation to all public items of a crate, as the API documentation as rendered by Rustdoc is the main documentation of most libraries. Documentation comments at the module (or crate) level are used to give an overview of the module (or crate) and describe how the items of a crate can be used together. To make navigating the documentation easy, crate authors make these items link to their individual entries in the API docs.

Currently, these links are plain Markdown links, and the URLs are the (relative) paths of the items’ pages in the rendered Rustdoc output. This is sadly very fragile in several ways:

  1. As the same doc comment can be rendered on several Rustdoc pages and thus on separate directory levels (e.g., the summary page of a module, and a struct’s own page), it is not possible to confidently use relative paths. For example, adding a link to ../foo/struct.Bar.html to the first paragraph of the doc comment of the module lorem will work on the rendered /lorem/index.html page, but not on the crate’s summary page /index.html.
  2. Using absolute paths in links (like /crate-name/foo/struct.Bar.html) to circumvent the previous issue might work for the author’s own hosted version, but will break when looking at the documentation using cargo doc --open (which uses file:/// URLs) or when using docs.rs.
  3. Should Rustdoc’s file name scheme ever change (it has changed before, cf. Rust issue #35236), all manually created links need to be updated.

To solve this dilemma, we propose extending Rustdoc to be able to generate relative links that work in all contexts.

Detailed Design

Markdown/CommonMark allow writing links in several forms (the names are from the CommonMark spec in version 0.27):

  1. [link text](URL) (inline link)
  2. [link text][link label] (reference link, link label can also be omitted, cf. shortcut reference links) and somewhere else in the document: [link label]: URL (this part is called link reference definition)
  3. <URL> which will be turned into the equivalent of [URL](URL) (autolink, required to start with a scheme)

We propose that in each occurrence of URL of inline links and link reference definitions, it should also be possible to write a Rust path (as defined in the reference). Additionally, automatic link reference definitions should be generated to allow easy linking to obvious targets.

Additions To The Documentation Syntax

Rust paths as URLs in inline and reference links:

  1. [Iterator](std::iter::Iterator)
  2. [Iterator][iter], and somewhere else in the document: [iter]: std::iter::Iterator
  3. [Iterator], and somewhere else in the document: [Iterator]: std::iter::Iterator

The third syntax example above shows a shortcut reference link, which is a reference link whose link text and link label are the same, and there exists a link reference definition for that label. For example: [HashMap] will be rendered as a link given a link reference definition like [HashMap]: std::collections::HashMap.
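A short sketch of how this reads in a doc comment (the Cache struct here is hypothetical):

/// A cache keyed by strings, backed by a [HashMap].
///
/// [HashMap]: std::collections::HashMap
pub struct Cache;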

To make linking to items easier, we introduce “implied link reference definitions”:

  1. [std::iter::Iterator], without having a link reference definition for Iterator anywhere else in the document
  2. [`std::iter::Iterator`], without having a link reference definition for Iterator anywhere else in the document (same as previous style but with back ticks to format link as inline code)

If Rustdoc finds a shortcut reference link

  1. without a matching link reference definition
  2. whose link label, after stripping leading and trailing back ticks, is a valid Rust path

it will add a link reference definition for this link label pointing to the Rust path.

Collapsed reference links ([link label][]) are handled analogously.

(This was one of the first ideas suggested by CommonMark forum members as well as by Guillaume Gomez.)

Standard-conforming Markdown

These additions are valid Markdown, as defined by the original Markdown syntax definition as well as the CommonMark project. In particular, Rust paths are valid CommonMark link destinations, even with the suffixes described below.

The following:

The foos module offers several ways to fooify [Bars](bars::Bar).

should be rendered as:

The foos module offers several ways to fooify <a href="bars/struct.Bar.html">Bars</a>.

when on the crates index page (index.html), and as this when on the page for the foos module (foos/index.html):

The foos module offers several ways to fooify <a href="../bars/struct.Bar.html">Bars</a>.

When using the autolink syntax (<URL>), the URL has to be an absolute URI, i.e., it has to start with a URI scheme. Thus, it will not be possible to write <Foo> to link to a Rust item called Foo that is in scope (this also conflicts with Markdown’s ability to contain arbitrary HTML elements). And while <std::iter::Iterator> is a valid URI (treating std: as the scheme), to avoid confusion, this RFC does not propose adding any support for autolinks.

This means that this will not render a valid link:

Does not work: <bars::Bar> :(

It will just output what any CommonMark compliant renderer would generate:

Does not work: <a href="bars::Bar">bars::Bar</a> :(

We suggest to use Implied Shortcut Reference Links instead:

Does work: [`bars::Bar`] :)

which will be rendered as

Does work: <a href="../bars/struct.Bar.html"><code>bars::Bar</code></a> :)

Resolving Paths

The Rust paths used in links are resolved relative to the item in whose documentation they appear. Specifically, when using inner doc comments (//!, /*!), the paths are resolved from the inside of the item, while regular doc comments (///, /**) start from the parent scope.

Here’s an example:

/// Container for a [Dolor](ipsum::Dolor).
struct Lorem(ipsum::Dolor);

/// Contains various things, mostly [Dolor](ipsum::Dolor) and a helper function,
/// [sit](ipsum::sit).
mod ipsum {
    pub struct Dolor;

    /// Takes a [Dolor] and does things.
    pub fn sit(d: Dolor) {}
}

mod amet {
  //! Helper types, can be used with the [ipsum](super::ipsum) module.
}

And here’s an edge case:

use foo::Iterator;

/// Uses `[Iterator]`. <- This resolves to `foo::Iterator` because it starts
/// at the same scope as `foo1`.
fn foo1() { }

fn foo2() {
    //! Uses `[Iterator]`. <- This resolves to `bar::Iterator` because it starts
    //! with the inner scope of `foo2`'s body.

    use bar::Iterator;
}

Cross-crate re-exports

If an item is re-exported from an inner crate to an outer crate, its documentation will be resolved the same in both crates, as if it were in the original scope. For example, this function will link to f in both crates, even though f is not in scope in the outer crate:

// inner-crate

pub fn f() {}
/// This links to [f].
pub fn g() {}
// outer-crate
pub use inner_crate::g;

If a public item links to a private one, and --document-private-items is not passed, rustdoc should give a warning. If a private item links to another private item, no warning should be emitted. If a public item links to another private item and --document-private-items is passed, rustdoc should emit the link, but it is up to the implementation whether to give a warning.

Path Ambiguities

Rust has three different namespaces that items can be in: types, values, and macros. That means that in a given source file, three items with the same name can be used, as long as they are in different namespaces.

To illustrate, in the following example we introduce an item called FOO in each namespace:

pub trait FOO {}

pub const FOO: i32 = 42;

macro_rules! FOO { () => () }

To be able to link to each item, we’ll need a way to disambiguate the namespaces. Our proposal is this:

  • In unambiguous cases paths can be written as described earlier, with no pre- or suffix, e.g., Look at the [FOO] trait. This also applies to modules and tuple structs, which exist in both namespaces. Rustdoc will issue a warning if you use a non-disambiguated path when there is an item of that name in both the type and value namespace.
  • Links to types can be disambiguated by prefixing them with the concrete item type:
    • Links to any type-namespace object can be prefixed with type@, e.g., See [type@foo]. This will work for structs, enums, mods, traits, and unions.
    • Links to structs can be prefixed with struct@, e.g., See [struct@Foo].
    • Links to enums can be prefixed with enum@, e.g., See [enum@foo].
    • Links to modules can be prefixed with mod@, e.g., See [mod@foo].
    • Links to traits can be prefixed with trait@, e.g., See [trait@foo].
    • Links to unions can be prefixed with union@, e.g., See [union@foo].
    • It is possible that disambiguators for one kind of type-namespace object will work for the other (i.e. you can use struct@ to refer to an enum), but do not rely on this.
  • Modules exist in both the type and value namespace and can be disambiguated with a mod@ or module@ prefix, e.g. [module@foo]
  • In links to macros, the link label can end with a !, e.g., Look at the [FOO!] macro. You can alternatively use a macro@ prefix, e.g. [macro@foo]
  • For disambiguating links to values, we differentiate three cases:
    • Links to any kind of value (function, const, static) can be prefixed with value@, e.g., See [value@foo].
    • Links to functions and methods can be written with a () suffix, e.g., Also see the [foo()] function. You can also use function@, fn@, or method@.
    • Links to constants are prefixed with const@, e.g., As defined in [const@FOO].
    • Links to statics are prefixed with static@, e.g., See [static@FOO].
    • It is possible that disambiguators for one kind of value-namespace object will work for the other (i.e. you can use static@ to refer to a const), but do not rely on this.

If a disambiguator for a type does not match, rustdoc should issue a warning. For example, given struct@Foo, attempting to link to it using [enum@Foo] should not be allowed.
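Putting several of these disambiguators together with the three FOO items defined earlier, a doc comment might read as follows (a sketch assuming the proposed syntax; UsesFoo is hypothetical):

/// Implements the [trait@FOO] trait, compares against the [const@FOO]
/// constant, and is generated by the [FOO!] macro (equivalently,
/// [macro@FOO]).
pub struct UsesFoo;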

Errors

Ideally, Rustdoc would be able to recognize Rust path syntax, and if the path cannot be resolved, print a warning (or an error). These diagnostic messages should highlight the specific link that Rustdoc was not able to resolve, using the original Markdown source from the comment and correct line numbers.

Complex Example

(Excerpt from Diesel’s expression module.)

// diesel/src/expression/mod.rs

//! AST types representing various typed SQL expressions. Almost all types
//! implement either [`Expression`] or [`AsExpression`].

/// Represents a typed fragment of SQL. Apps should not need to implement this
/// type directly, but it may be common to use this as type boundaries.
/// Libraries should consider using [`infix_predicate!`] or
/// [`postfix_predicate!`] instead of implementing this directly.
pub trait Expression {
    type SqlType;
}

/// Describes how a type can be represented as an expression for a given type.
/// These types couldn't just implement [`Expression`] directly, as many things
/// can be used as an expression of multiple types. ([`String`] for example, can
/// be used as either [`VarChar`] or [`Text`]).
///
/// [`VarChar`]: diesel::types::VarChar
/// [`Text`]: diesel::types::Text
pub trait AsExpression<T> {
    type Expression: Expression<SqlType=T>;
    fn as_expression(self) -> Self::Expression;
}

Please note:

  • This uses implied shortcut reference links most often. Since the original documentation put the type/trait names in back ticks to render them as code, we preserved this style. (We don’t propose this as a general convention, though.)
  • Even though implied shortcut reference links could be used throughout, they are not used for the last two links (to VarChar and Text), which are not in scope and need to be linked to by their absolute Rust path. To make reading easier and less noisy, reference links are used to rename the links. (An assumption is that most readers will recognize these names and know they are part of diesel::types.)

How We Teach This

  • Extend the documentation chapter of the book with a subchapter on How to Link to Items.
  • Reference the chapter on the module system, to let readers familiarize themselves with Rust paths.
  • Maybe present an example use case of a module whose documentation links to several related items.

Drawbacks

  • Rustdoc gets more complex.
  • These links won’t work when the doc comments are rendered with a default Markdown renderer.
  • The Rust paths might conflict with other valid links, though we could not think of any.

Possible Extensions

Linking to Fields

To link to the fields of a struct we propose to write the path to the struct, followed by a dot, followed by the field name.

For example:

This is stored in the [`size`](storage::Filesystem.size) field.

Linking to Enum Variants

To link to the variants of an enum, we propose to write the path to the enum, followed by two colons, followed by the field name, just like use Foo::Bar can be used to import the Bar variant of an enum Foo.

For example:

For custom settings, supply the [`Custom`](storage::Engine::Custom) variant.

Linking to associated Items

To link to associated items, i.e., the associated functions, types, and constants of a trait, we propose to write the path to the trait, followed by two colons, followed by the associated item’s name. It may be necessary to use fully-qualified paths (cf. the reference’s section on disambiguating function calls), like See the [<Foo as Bar>::bar()] method. We have yet to analyze in which cases this is necessary, and what syntax should be used.

Traits in scope

If linking to an associated item that comes from a trait, the link should only be resolved if the trait is in scope. This prevents ambiguities if multiple traits are available with the associated item. For example, this should issue a warning:

#[derive(Debug)]
/// Link to [S::fmt]
struct S;

but this should link to the implementation of Debug::fmt for S:

use std::fmt::Debug;

#[derive(Debug)]
/// Link to [S::fmt]
struct S;

Linking to External Documentation

Currently, Rustdoc is able to link to external crates, and renders documentation for all dependencies by default. Referencing the standard library (or core) generates links with a well-known base path, e.g. https://doc.rust-lang.org/nightly/. Referencing other external crates links to the pages Rustdoc has already rendered (or will render) for them. Special flags (e.g. cargo doc --no-deps) will not change this behavior.

We propose to generalize this approach by adding parameters to rustdoc that allow overwriting the base URLs it used for external crate links. (These parameters will at first be supplied as CLI flags but could also be given via a config file, environment variables, or other means in the future.)

We suggest the following syntax:

rustdoc --extern-base-url="regex=https://docs.rs/regex/0.2.2/regex/" [...]

By default, the core/std libraries should have a default base URL set to the latest known Rust release when the version of rustdoc was built.

In addition to that, cargo doc may be extended with CLI flags to allow shortcuts to some common usages. E.g., a --external-docs flag may add base URLs using docs.rs for all crates that are from the crates.io repository (docs.rs automatically renders documentation for crates published to crates.io).

Known Issues

Automatically linking to external docs has the following known tradeoffs:

  • The generated URLs may not exist, or may no longer exist:
    • Not all crate documentation can be rendered without a known local setup, e.g., for crates that use procedural macros/build scripts to generate code based on the local environment.
    • Not all crate documentation can be rendered without having 3rd-party tools installed.
  • The generated URLs may not have the expected content, because:
    • The exact Cargo features used to build a crate locally were not used when building the docs available at the given URL.
    • The crate has platform-specific items, and the local platform and the platform used to render the docs available at the given URL differ (note that docs.rs renders docs for multiple platforms, though).

Alternatives

  • Prefix Rust paths with a URI scheme, e.g. rust: (cf. path ambiguities).

  • Prefix Rust paths with a URI scheme for the item type, e.g. struct:, enum:, trait:, or fn:.

  • javadoc and jsdoc use {@link java.awt.Panel} or [link text]{@link namepathOrURL}

Unresolved Questions

  • Is it possible for Rustdoc to resolve paths? Is it easy to implement this?
  • There is talk about switching Rustdoc to a different markdown renderer (pulldown-cmark). Does it support this? Does the current renderer?

Summary

This RFC proposes several steps forward for impl Trait:

  • Settling on a particular syntax design, resolving questions around the some/any proposal and others.

  • Resolving questions around which type and lifetime parameters are considered in scope for an impl Trait.

  • Adding impl Trait to argument position.

The first two proposals, in particular, put us into a position to stabilize the current version of the feature in the near future.

Motivation

To recap, the current impl Trait feature allows functions to write a return type like impl Iterator<Item = u64> or impl Fn(u64) -> bool, which says that the function’s return type satisfies the given trait bounds, but nothing more about it can be assumed. It’s useful to impose an abstraction barrier and to avoid writing down complex (or un-nameable) types. The current feature was designed very conservatively, and only allows impl Trait to be used in function return position on inherent or free functions.

The core motivation for this RFC is to pave the way toward stabilization of impl Trait; from that perspective, it inherits the motivation of the previous RFC. Making progress on this front falls clearly under the rubric of the productivity and learnability goals for the 2017 roadmap.

Stabilization is currently blocked on three inter-related questions:

  • Will impl Trait ever be usable in argument position? With what semantics?

  • Will we want to distinguish between some and any, that is, between existential types (where the callee chooses the type) and universal types (where the caller chooses)? Or is it enough to deduce the desired meaning from context?

  • When you use impl Trait, what lifetime and type parameters are in scope for the hidden, concrete type that will be returned? Can you customize this set?

This RFC is aimed squarely at resolving these questions. However, by resolving some of them, it also unlocks the door to an expansion of the feature to new locations (arguments, traits, trait impls), as we’ll see.

Motivation for expanding to argument position

This RFC proposes to allow impl Trait to be used in argument position, with “universal” (aka generics-style) semantics. There are three lines of argument in favor of doing so, given here along with rebuttals from the lang team.

Argument from learnability

There’s been a lot of discussion around universals vs. existentials (in today’s Rust, generics vs impl Trait). The RFC makes a few assumptions:

  • Most programmers won’t come to Rust with a crisp understanding of the distinction.
  • Even when people learn the distinction, it’s often confusing and hard to remember with precision.
  • But, on the other hand, programmers have a very deep intuition around the difference between arguments and return values, and “who” provides which (amongst caller and callee).

Now, consider a new Rust programmer, who has learned about generics:

fn take_iter<T: Iterator>(t: T)

What happens when they want to return an unstated iterator instead? It’s pretty natural to reach for:

fn give_iter<T: Iterator>() -> T

if you don’t have a crisp understanding of the universal/existential distinction. If we only allowed impl Trait in return position, we’d have to say: when returning an unknown type, please use a completely different mechanism.

By contrast, a programmer who first learned:

fn take_iter(t: impl Iterator)

and then tried:

fn give_iter() -> impl Iterator

would be successful, without any rigorous understanding that they just transitioned from a universal to an existential.

What’s at play here is the question of who gets to pick a type. And as above, programmers have a strong intuition about callers providing arguments, and callees providing return values. The proposed impl Trait extension to arguments aligns with this intuition (and with what is most definitely the common case in practice), so that:

  • If you pick the value, you also pick the type

Thus in fn f(x: impl Foo) -> impl Bar, the caller picks the value of x and so picks the type for impl Foo, but the function picks the return value, so it picks the type for impl Bar.

This intuitive basis lets you get a lot of work done without learning the deeper distinction; you can fake it ’til you make it. If we, in addition, have an explicit syntax, you can eventually come to a fully rigorous understanding in terms of that syntax. And then you can go back to mostly operating intuitively with impl Trait, reaching for the fine distinctions only when you need them (the “post-rigorous” stage of learning).

@solson did a great job of laying this kind of argument out.

Argument from ergonomics

Ergonomics is rarely about raw character count, and the argument here isn’t about shaving off a few characters. Rather, it’s about how much you have to hold in your head.

Generic syntax requires you to introduce a name for an argument’s type, and to separate information about that type from the argument itself:

fn map<U, F: FnOnce(T) -> U>(self, f: F) -> Option<U>

To read this signature, you have to first parse the type parameters and bounds, then remember which ones applied to F, and then see where F shows up in the argument.

By contrast:

fn map<U>(self, f: impl FnOnce(T) -> U) -> Option<U>

Here, there are no additional names or indirections to hold in your head, and the relevant information about the argument type is located right next to the argument’s name. Even better:

fn map<U>(self, f: FnOnce(T) -> U) -> Option<U>

Also, when programming at speed, the fact that you can use the same impl Trait syntax in argument and return position – and it almost always has the meaning you want – means less pausing to think “hm, am I dealing with an existential here?”

Argument from familiarity

Finally, there’s an argument from familiarity, which was given eloquently by @withoutboats:

The proposal is (syntactically) more like Java. In Java, non-static methods aren’t parametric; generics are used at the type level, and you just use interfaces at the method level.

We’d end up with APIs that look very similar to Java or C#:

impl<T> Option<T> {
    fn map<U>(self, f: impl FnOnce(T) -> U) -> Option<U> { ... }
}

I think this is a good thing from the pre-rigorous/rigourous/post-rigourous sense: you have this incremental onboarding experience in which at first blush it is quite similar to what you’re used to. What I like even more, though, is that under the hood its all parametric polymorphism. In Java you actually have inheritance, and interfaces, and generics, and they all interact but not in a very unified way. In Rust, this is just a syntactic easement into a unitary polymorphism system which is fundamentally one idea: parametric polymorphism with trait constraints.

Critique from the lang team

@nrc argued that there’s also a learnability downside, because Rust programmers now have one additional syntax for generic arguments to learn.

Rebuttal: I agree that there’s an additional syntax to learn, but a key here is that there’s no genuine complexity addition: it’s pure sugar. In other words, it’s not a new concept, and learning that there’s an alternative, more verbose and expressive syntax tends to be a relatively easy step to take in practice. In addition, treating it as “anonymous generics” (for arguments) makes it pretty easy to understand the relationship.


@nrc argued that there would also be stylistic overhead: when to use impl Trait vs generics vs where clauses? And won’t you often end up having to use where clauses anyway, when things get longer?

Rebuttal: @withoutboats pointed out that impl Trait can actually help ease such style questions:

fn foo<
    T: Whatever + SomethingElse,
    U: Whatever,
>(
    t: T,
    u: U,
)

// vs

fn foo<T, U>(t: T, u: U) where
    T: Whatever + SomethingElse,
    U: Whatever,

// vs

fn foo(
    t: impl Whatever + SomethingElse,
    u: impl Whatever,
)

It seems plausible that impl Trait syntax should simply always be used whenever it can be, since expanding out an argument list into multiple lines tends to be preferable to expanding out a where clause to multiple lines (and even more so, expanding out a generics list).


@joshtriplett raised concerns about the purported learnability benefits absent having an explicit syntax for the “rigorous” stage.

Rebuttal: the RFC takes as a basic assumption that we will eventually have such a syntax. But I think it’s worth diving into greater detail on the learnability tradeoffs here. I think that if we offered an explicit syntax that was similar to today’s generic syntax, it could help tell a coherent, intuitive story.


@nrc raised a point about auto traits.

Rebuttal: the auto trait story here is essentially the same as with generics:

fn foo(t: impl Trait) -> impl Trait { t }
fn foo<T: Trait>(t: T) -> T { t }

In both of these functions, if you pass in an argument that is Send, you will be able to rely on Send in the return value.
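A sketch of what this leakage means in practice (Debug stands in for Trait, and requires_send is a hypothetical helper):

use std::fmt::Debug;

fn foo(t: impl Debug) -> impl Debug { t }

fn requires_send<T: Send>(_: T) {}

fn main() {
    // The hidden return type is `i32`, which is `Send`; that
    // auto-trait information leaks through the `impl Debug`.
    requires_send(foo(0i32));
}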

Detailed design

The proposal in a nutshell

  • Expand impl Trait to allow use in arguments, where it behaves like an anonymous generic parameter. This will be separately feature-gated.

  • Stick with the impl Trait syntax, rather than introducing a some/any distinction.

  • Treat all type parameters as in scope for the concrete “witness” type underlying a use of impl Trait.

  • Treat any explicit lifetime bounds (as in impl Trait + 'a) as bringing those lifetimes into scope, and no other lifetime parameters are explicitly in scope. However, type parameters may mention lifetimes which are hence indirectly in scope.
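As a sketch of the lifetime rule in the last point (the function and its body are illustrative, not from the proposal): the hidden type below captures the borrow of v, and the explicit + 'a bound is what brings 'a into scope for it.

fn evens<'a>(v: &'a [u64]) -> impl Iterator<Item = u64> + 'a {
    v.iter().cloned().filter(|&n| n % 2 == 0)
}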

Background

Before diving more deeply into the design, let’s recap some of the background that’s emerged over time for this corner of the language.

Universals (any) versus existentials (some)

There are basically two ways to talk about an “unknown type” in something like a function signature:

  • Universal quantification, i.e. “for any type T”, i.e. “caller chooses”. This is how generics work today. When you write fn foo<T>(t: T), you’re saying that the function will work for any choice of T, and leaving it to your caller to choose the T.

  • Existential quantification, i.e. “for some type T”, i.e. “callee chooses”. This is how impl Trait works today (which is in return position only). When you write fn foo() -> impl Iterator, you’re saying that the function will produce some type T that implements Iterator, but the caller is not allowed to assume anything else about that type.

When it comes to functions, we usually want any T for arguments, and some T for return values. However, consider the following function:

fn thin_air<T: Default>() -> T {
    T::default()
}

The thin_air function says it can produce a value of type T for any T the caller chooses—so long as T: Default. The collect function works similarly. But this pattern is relatively uncommon.
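To see the “caller chooses” direction in action, here’s a brief, runnable illustration using thin_air as defined above, plus collect:

fn thin_air<T: Default>() -> T {
    T::default()
}

fn main() {
    // In both calls, the *caller* picks the output type:
    let n: u64 = thin_air();
    let v: Vec<u32> = (0..3).collect();
    assert_eq!(n, 0);
    assert_eq!(v, [0, 1, 2]);
}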

As we’ll see later, there are also considerations for higher-order functions, i.e. when you take another function as an argument.

In any case, one longstanding proposal for impl Trait is to split it into two distinct features: some Trait and any Trait. Then you’d have:

// These two are equivalent
fn foo<T: MyTrait>(t: T)
fn foo(t: any MyTrait)

// These two are equivalent
fn foo() -> impl Iterator
fn foo() -> some Iterator

// These two are equivalent
fn foo<T: Default>() -> T
fn foo() -> any Default

Scoping for lifetime and type parameters

There’s a subtle issue for the semantics of impl Trait: what lifetime and type parameters are considered “in scope” for the underlying concrete type that implements Trait?

Type parameters and type equalities

It’s easiest to understand this issue through examples where it matters. Suppose we have the following function:

fn foo<T>(t: T) -> impl MyTrait { .. }

Here we’re saying that the function will yield some type back, whose identity we don’t know, but which implements MyTrait. But, in addition, we have the type parameter T. The question is: can the return type of the function depend on T?

Concretely, we expect at least the following to work:

vec![
    foo(0u8),
    foo(1u8),
]

because we expect both expressions to have the same type, and hence be eligible to be placed into a single vector. That’s because, although we don’t know the identity of the return type, everything it could depend on is the same in both cases: T is instantiated with u8. (Note: there are “generative” variants of existentials for which this is not the case; see Unresolved questions.)

But what about the following:

vec![
    foo(0u8),
    foo(0u16),
]

Here, we’re making different choices of T in the two expressions; can that affect what return type we get? The impl Trait semantics needs to give an answer to that question.

Clearly there are cases where the return type very much depends on type parameters, for example the following:

fn buffer<T: Write>(t: T) -> impl Write {
    BufWriter::new(t)
}

But there are also cases where there isn’t a dependency, and tracking that information may be important for type equalities like the vectors above. And this applies equally to lifetime parameters as well.

Lifetime parameters

It’s vital to know what lifetime parameters might be used in the concrete type underlying an impl Trait, because that information will affect lifetime inference.

For concrete types, we’re pretty used to thinking about this. Let’s take slices:

impl<T> [T] {
    fn len(&self) -> usize { ... }
    fn first(&self) -> Option<&T> { ... }
}

A seasoned Rustacean can read the ownership story directly from these two signatures. In the case of len, the fact that the return type does not involve any borrowed data means that the borrow of self is only used within len, and doesn’t need to persist afterwards. For first, by contrast, the return value contains &T, which will extend the borrow of self for at least as long as that return value is kept around by the caller.

As a caller, this difference is quite apparent:

{
    let len = my_slice.len(); // the borrow of `my_slice` lasts only for this line
    my_slice[0] = 1;          // ... so this mutable borrow is allowed
}

{
    let first = my_slice.first(); // the borrow of `my_slice` lasts for the rest of this scope
    my_slice[0] = 1;              // ... so this mutable borrow is *NOT* allowed
}

Now, the issue is that for impl Trait, we’re not writing down the concrete return type, so it’s not obvious what borrows might be active within it. In other words, if we write:

impl<T> [T] {
    fn bork(&self) -> impl SomeTrait { ... }
}

it’s not clear whether the function is more like len or more like first.

This is again a question of what lifetime parameters are in scope for the actual return type. It’s a question that needs a clear answer (and some flexibility) for the impl Trait design.

Core assumptions

The design in this RFC is guided by several assumptions which are worth laying out explicitly.

Assumption 1: we will eventually have a fully expressive and explicit syntax for existentials

The impl Trait syntax can be considered an “implicit” or “sugary” syntax in that it (1) does not introduce a name for the existential type and (2) does not allow you to control the scope in which the underlying concrete type is known.

Moreover, some versions of the design (including in this RFC) impose further limitations on the power of the feature for the sake of simplicity.

This is done under the assumption that we will eventually introduce a fully expressive, explicit syntax for existentials. Such a syntax is sketched in an appendix to this RFC.

Assumption 2: treating all type variables as in scope for impl Trait suffices for the vast majority of cases

The background section discussed scoping issues for impl Trait, and the main implication for type parameters (as opposed to lifetimes) is what type equalities you get for an impl Trait return type. We’re making two assumptions about that:

  • In practice, you usually need to close over most or all of the type parameters.
  • In practice, you usually don’t care much about type equalities with impl Trait.

This latter point means, for example, that it’s relatively unusual to do things like construct the vectors described in the background section.

Assumption 3: there should be an explicit marker when a lifetime could be embedded in a return type

As mentioned in a recent blog post, one regret we have around lifetime elision is the fact that it applies when leaving off a lifetime for a non-& type constructor that expects one. For example, consider:

impl<T> SomeType<T> {
    fn bork(&self) -> Ref<T> { ... }
}

To know whether the borrow of self persists in the return value, you have to know that Ref takes a lifetime parameter that’s being left out here. This is a tad too implicit for something as central as ownership.

Now, we also don’t want to force you to write an explicit lifetime. We’d instead prefer a notation that says “there is a lifetime here; it’s the usual one from elision”. As a purely strawman syntax (an actual RFC on the topic is upcoming), we might write:

impl<T> SomeType<T> {
    fn bork(&self) -> Ref<'_, T> { ... }
}

In any case, to avoid compounding the mistake around elision, there should be some marker when using impl Trait that a lifetime is being captured.

Assumption 4: existentials are vastly more common in return position, and universals in argument position

As discussed in the background section, it’s possible to make sense of some Trait and any Trait in arbitrary positions in a function signature. But experience with the language strongly suggests that some Trait semantics is virtually never wanted in argument position, and any Trait semantics is rarely used in return position.

Assumption 5: we may be interested in eventually pursuing a bare fn foo() -> Trait syntax rather than fn foo() -> impl Trait

Today, traits can be used directly as (unsized) types, so that you can write things like Box<MyTrait> to designate a trait object. However, with the advent of impl Trait, there’s been a desire to repurpose that syntax, and instead write Box<dyn Trait> or some such to designate trait objects.

That would, in particular, allow syntax like the following when taking a closure:

fn map<U>(self, f: FnOnce(T) -> U) -> Option<U>

The pros, cons, and logistics of such a change are out of scope for this RFC. However, it’s taken as an assumption that we want to keep the door open to such a syntax, and so shouldn’t stabilize any variant of impl Trait that lacks a good story for evolving into a bare Trait syntax later on.

Sticking with the impl Trait syntax

This RFC proposes to stabilize the impl Trait feature with its current syntax, while also expanding it to encompass argument position. That means, in particular, not introducing an explicit some/any distinction.

This choice is based partly on the core assumptions:

  • Assumption 1, we’ll have a fully expressive syntax later.
  • Assumption 4, we can use the some semantics in return position and any in argument position, and almost always be right.
  • Assumption 5, we may want bare Trait syntax, which would not give “syntactic space” for a some/any distinction.

One important question is: will people find it easier to understand and use impl Trait, or something like some Trait and any Trait? Having an explicit split may make it easier to understand what’s going on. But on the other hand, it’s a somewhat complicated distinction to make, and while you usually know intuitively what you want, being forced to spell it out by making the correct choice of some or any seems like an unnecessary burden, especially if the choice is almost always dictated by the position.

Pedagogically, if we have an explicit syntax, we retain the option of explaining what’s going on with impl Trait by “desugaring” it into that syntax. From that standpoint, impl Trait is meant purely for ergonomics, which means not just what you type, but also what you have to remember. Having impl Trait “just do the right thing” seems pretty clearly to be the right choice ergonomically.

Expansion to arguments

This RFC proposes to allow impl Trait in function arguments, in addition to return position, with the any Trait semantics (as per assumption 4). In other words:

// These two are equivalent
fn map<U>(self, f: impl FnOnce(T) -> U) -> Option<U>
fn map<U, F>(self, f: F) -> Option<U> where F: FnOnce(T) -> U

However, this RFC also proposes to disallow use of impl Trait within Fn trait sugar or higher-ranked bounds, i.e. to disallow examples like the following:

fn foo(f: impl Fn(impl SomeTrait) -> impl OtherTrait)
fn bar() -> (impl Fn(impl SomeTrait) -> impl OtherTrait)

While we will eventually want to allow such uses, it’s likely that we’ll want to introduce nested universal quantifications (i.e., higher-ranked bounds) in at least some cases; we don’t yet have the ability to do so. We can revisit this question later on, once higher-ranked bounds have gained full expressiveness.

Explicit instantiation

This RFC does not propose any means of explicitly instantiating an impl Trait in argument position. In other words:

fn foo<T: Trait>(t: T)
fn bar(t: impl Trait)

foo::<u32>(0) // this is allowed
bar::<u32>(0) // this is not

Thus, while impl Trait in argument position in some sense “desugars” to a generic parameter, the parameter is treated fully anonymously.

Scoping for type and lifetime parameters

In argument position, the type fulfilling an impl Trait is free to reference any types or lifetimes whatsoever. So in a signature like:

fn foo(iter: impl Iterator<Item = u32>);

the actual argument type may contain arbitrary lifetimes and mention arbitrary types. This follows from the desugaring to “anonymous” generic parameters.
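For instance, the following compiles under this rule (a small sketch; total is an illustrative name):

fn total(iter: impl Iterator<Item = u32>) -> u32 {
    iter.sum()
}

fn main() {
    let v = vec![1, 2, 3];
    // The concrete argument type, Cloned<Iter<'_, u32>>, mentions a
    // lifetime local to `main`; argument position allows that freely.
    assert_eq!(total(v.iter().cloned()), 6);
}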

For return position, things are more nuanced.

This RFC proposes that all type parameters are considered in scope for impl Trait in return position, as per Assumption 2 (which claims that this suffices for most use-cases) and Assumption 1 (which claims that we’ll eventually provide an explicit syntax with finer-grained control).

The lifetimes in scope include only those mentioned “explicitly” in a bound on the impl Trait. That is:

  • For impl SomeTrait + 'a, the 'a is in scope for the concrete witness type.
  • For impl SomeTrait + '_, the lifetime that elision would imply is in scope (this is again using the strawman shorthand syntax for an elided lifetime).

Note, however, that the witness type can freely mention type parameters, which may themselves involve embedded lifetimes. Consider, for example:

fn transform(iter: impl Iterator<Item = u32>) -> impl Iterator<Item = u32>

Here, if the actual argument type was SomeIter<'a>, the return type can mention SomeIter<'a>, and therefore can indirectly mention 'a.

In terms of Assumption 3 – the constraint that lifetime embedding must be explicitly marked – we clearly get that for the explicitly in-scope lifetimes. For lifetimes mentioned indirectly through type parameters, the marking follows from whatever is provided for those parameters, much like the following:

fn foo<T>(v: Vec<T>) -> vec::IntoIter<T>

In this example, the return type can of course reference any lifetimes that T does, but this is apparent from the signature. Likewise with impl Trait, where you should assume that all type parameters could appear in the return type.

Relationship to trait objects

It’s worth noting that this treatment of lifetimes is related but not identical to the way they’re handled for trait objects.

In particular, Box<SomeTrait> imposes a 'static requirement on the underlying object, while Box<SomeTrait + 'a> only imposes an 'a constraint. The key difference is that, for impl Trait, in-scope type parameters can appear in the witness type, and they may indirectly mention additional lifetimes; so impl SomeTrait imposes 'static only if those type parameters do:

// In these cases, we know that the concrete return type is 'static
fn foo() -> impl SomeTrait;
fn foo(x: u32) -> impl SomeTrait;
fn foo<T: 'static>(t: T) -> impl SomeTrait;

// In the following case, the concrete return type may embed lifetimes that appear in T:
fn foo<T>(t: T) -> impl SomeTrait;

// ... whereas with Box, the 'static constraint is imposed
fn foo<T>(t: T) -> Box<SomeTrait>;

This difference is a natural one when you consider the difference between generics and trait objects in general – which is precisely that with generics, the actual types are not erased, and hence auto traits like Send work transparently, as do lifetime constraints.

How We Teach This

Generics and traits are a fundamental aspect of Rust, so the pedagogical approach here is really important. We’ll outline the basic contours below, but in practice it’s going to take some trial and error to find the best approach.

One of the hopes for impl Trait, as extended by this RFC, is that it aids learnability along several dimensions:

  • It makes it possible to meaningfully work with traits without visibly using generics, which can provide a gentler learning curve. In particular, signatures involving closures are much easier to understand. This effect would be further heightened if we eventually dropped the need for impl, so that you could write fn map<U>(self, f: FnOnce(T) -> U) -> Option<U>.

  • It provides a greater degree of analogy between static and dynamic dispatch when working with traits. Introducing trait objects is easier when they can be understood as a variant of impl Trait, rather than a completely different approach. This effect would be further heightened if we moved to dyn Trait syntax for trait objects.

  • It provides a more intuitive way of working with traits and static dispatch in an “object” style, smoothing the transition to Rust’s take on the topic.

  • It provides a more uniform story for static dispatch, allowing it to work in both argument and return position.

There are two ways of teaching impl Trait:

  • Introduce it prior to bounded generics, as the first way you learn to “consume” traits. That works particularly well with teaching Iterator as one of the first real traits you see, since impl Trait is a strong match for working with iterators. As mentioned above, this approach also provides a more intuitive stepping stone for those coming from OO-ish languages. Later, bounded generics can be introduced as a more powerful, explicit syntax, which can also reveal a bit more about the underlying semantic model of impl Trait. In this approach, the existential use case doesn’t need a great deal of ceremony—it just follows naturally from the basic feature.

  • Alternatively, introduce it after bounded generics, as (1) a sugar for generics and (2) a separate mechanism for existentials. This is, of course, the way all existing Rust users will come to learn impl Trait. And it’s ultimately important to understand the mechanism in this way. But it’s likely not the ideal way to introduce it at first.

In either case, people should learn impl Trait early (since it will appear often) and in particular prior to learning trait objects. As mentioned above, trait objects can then be taught using intuitions from impl Trait.

There are also some ways in which impl Trait can introduce confusion, which we’ll cover in the drawbacks section below.

Drawbacks

It’s widely recognized that we need some form of static existentials for return position, both to be able to return closures (which have un-nameable types) and to ergonomically return things like iterator chains.

However, there are two broad classes of drawbacks to the approach taken in this RFC.

Relatively inexpressive sugary syntax

This RFC is built on the idea that we’ll eventually have a fully expressive explicit syntax, and so we should tailor the “sugary” impl Trait syntax to the most common use cases and intuitions.

That means, however, that we give up an opportunity to provide more expressive but still sugary syntax like some Trait and any Trait—we certainly don’t want all three.

That syntax is further discussed in Alternatives below.

Potential for confusion

There are two main avenues for confusion around impl Trait:

  • Because it’s written where a type would normally go, one might expect it to be usable everywhere a type is accepted (e.g., within struct definitions and impl headers). While it’s feasible to allow the feature to be used in more locations, the semantics is tricky, and in any case it doesn’t behave like a normal type, since it’s introducing an existential. The approach in this RFC is to have a very clear line: impl Trait is a notation for function signatures only, and there’s a separate explicit notation (TBD) that can be used to provide more general existentials (which can then be used as if they were normal types).

  • You can use impl Trait in both argument and return position, but the meaning is different in the two cases. On the one hand, the meaning is generally the intuitive one—it behaves as one would likely expect. But it blurs the line a bit between the some and any meanings, which could lead to people trying to use generics for existentials. We may be able to provide some help through errors, or eventually provide a syntax like <out T> for named existentials.

There’s also the fact that impl Trait introduces “yet another” way to take a bounded generic argument (in addition to <T: Trait> and <T> where T: Trait). However, these ways of writing a signature are not semantically distinct; they’re just stylistically different. It’s feasible that rustfmt could even make the choice automatically.

Alternatives

There’s been a lot of discussion about the impl Trait feature and various alternatives. Let’s look at some of the most prominent of them.

  • Limiting to return position forever. A particularly conservative approach would be to treat impl Trait as used purely for existentials and limit its use to return position in functions (and perhaps some other places where we want to allow for existentials). Limiting the feature in this way, however, loses out on some significant ergonomic and pedagogical wins (previously discussed in the RFC), and risks confusion around the “special case” treatment of return types.

  • Finer grained sugary syntax. There are a couple options for making the sugary syntax more powerful:

    • some/any notation, which allows selecting between universals and existentials at will. The RFC has already made some argument for why it does not seem so important to permit this distinction for impl Trait. And doing so has some significant downsides: it demands a more sophisticated understanding of the underlying type theory, which precludes using impl Trait as an early teaching tool; it seems easy to get confused and choose the wrong variant; and we’d almost certainly need different keywords (that don’t mirror the existing Some and Any names), but it’s not clear that there are good choices.

    • impl<...> Trait syntax, as a way of giving more precise control over which type and lifetime parameters are in scope. The idea is that the parameters listed in the <...> are in scope, and nothing else is. This syntax, however, is not forward-compatible with a bare Trait syntax. It’s also not clear how to get the right defaults without introducing some inconsistency; if you leave off the <> altogether, we’d presumably like something like the defaults proposed in this RFC (otherwise, the feature would be very unergonomic). But that would mean that, when transitioning from no <> to including a <> section, you go from including all type parameters to including only the listed set, which is a bit counterintuitive.

Unresolved questions

Full evidence for core assumptions. The assumptions in this RFC are stated with anecdotal and intuitive evidence, but the argument would be stronger with more empirical evidence. It’s not entirely clear how best to gather that, though many of the assumptions could be validated by using an unstable version of the proposed feature.

The precedence rules around impl Trait + 'a need to be nailed down.

The RFC assumes that we only want “applicative” existentials, which always resolve to the same type when in-scope parameters are the same:

fn foo() -> impl SomeTrait { ... }

fn bar() {
    // valid, because we know the underlying return type will be the same in both cases:
    let v = vec![foo(), foo()];
}

However, it’s also possible to provide “generative” existentials, which give you a fresh type whenever they are unpacked, even when their arguments are the same—which would rule out the example above. That’s a powerful feature, because it means in effect that you can generate a fresh type for every dynamic invocation of a function, thereby giving you a way to hoist dynamic information into the type system.

As one example, generative existentials can be used to “bless” integers as being in bounds for a particular slice, so that bounds checks can be safely elided. This is currently possible to encode in Rust by using callbacks with fresh lifetimes (see Section 6.3 of @Gankro’s thesis), but generative existentials would provide a much more natural mechanism.
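To make the callback encoding concrete, here is a minimal sketch of that “branded index” pattern; all names are illustrative rather than taken from the thesis, and several details are compressed:

use std::marker::PhantomData;

// An invariant lifetime 'id "brands" a slice and the indices checked
// against it, so an index can never be reused with a different slice.
#[derive(Clone, Copy)]
pub struct Index<'id>(usize, PhantomData<fn(&'id ()) -> &'id ()>);

pub struct Checked<'id, 'a, T>(&'a [T], PhantomData<fn(&'id ()) -> &'id ()>);

impl<'id, 'a, T> Checked<'id, 'a, T> {
    pub fn check(&self, i: usize) -> Option<Index<'id>> {
        if i < self.0.len() { Some(Index(i, PhantomData)) } else { None }
    }

    pub fn get(&self, i: Index<'id>) -> &'a T {
        // The bounds check can be soundly elided: an `Index<'id>` can
        // only have come from `check` on a slice with this same brand.
        unsafe { self.0.get_unchecked(i.0) }
    }
}

// The closure must accept *any* brand, so each call gets a fresh one.
pub fn with_checked<'a, T, R>(
    slice: &'a [T],
    f: impl for<'id> FnOnce(Checked<'id, 'a, T>) -> R,
) -> R {
    f(Checked(slice, PhantomData))
}

fn main() {
    let data = [10, 20, 30];
    let x = with_checked(&data, |s| {
        let i = s.check(2).unwrap();
        *s.get(i)
    });
    assert_eq!(x, 30);
}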

We may want to consider adding some form of generative existentials in the future, but would almost certainly want to do so via the fully expressive/explicit syntax, rather than through impl Trait.

Appendix: a sketch of a fully-explicit syntax

This section contains a brief sketch of a fully-explicit syntax for existentials. It’s a strawman proposal based on many previously-discussed ideas, and should not be bikeshedded as part of this RFC. The goal is just to give a flavor of how the full system could eventually fit together.

The basic idea is to introduce an abstype item for declaring abstract types:

abstype MyType: SomeTrait;

This construct would be usable anywhere items currently are. It would declare an existential type whose concrete implementation is known within the item scope in which it is declared, and that concrete type would be determined by inference based on the same scope. Outside of that scope, the type would be opaque in the same way as impl Trait.

So, for example:

mod example {
    static NEXT_TOKEN: Cell<u64> = Cell::new(0);

    pub abstype Token: Eq;
    pub fn fresh() -> Token {
        let r = NEXT_TOKEN.get();
        NEXT_TOKEN.set(r + 1);
        r
    }
}

fn main() {
    assert!(example::fresh() != example::fresh());

    // fails to compile, because in this scope we don't know that `Token` is `u64`
    let _ = example::fresh() + 1;
}

Of course, in this particular example we could just as well have used fn fresh() -> impl Eq, but abstype allows us to use the same existential type in multiple locations in an API:

mod example {
    pub abstype Secret: SomeTrait;

    pub fn foo() -> Secret { ... }
    pub fn bar(s: Secret) -> Secret { ... }

    pub struct Baz {
        quux: Secret,
        // ...
    }
}

Already abstype gives greater expressiveness than impl Trait in several respects:

  • It allows existentials to be named, so that the same one can be referred to multiple times within an API.

  • It allows existentials to appear within structs.

  • It allows existentials to appear within function arguments.

  • It gives tight control over the “scope” of the existential—what portion of the code is allowed to know what the concrete witness type for the existential is. For impl Trait, it’s always just a single function.

But we also wanted more control over scoping of type and lifetime parameters. For this, we can introduce existential type constructors:

abstype MyIter<'a>: Iterator<Item = u32>;

impl<T> SomeType<T> {
    // we know that 'a is in scope for the return type, but *not* `T`
    fn iter<'a>(&'a self) -> MyIter<'a> { ... }
}

(These type constructors raise various issues around inference, which I believe are tractable, but they are out of scope for this sketch.)

It’s worth noting that there’s some relationship between abstype and the “newtype deriving” concept: from an external perspective, abstype introduces a new type but automatically delegates any of the listed trait bounds to the underlying witness type.

Finally, a word on syntax:

  • Why abstype Foo: Trait; rather than type Foo = impl Trait;?

    • Two reasons. First, to avoid confusion about impl Trait seeming to be like a type, when it is actually an existential. Second, for forward compatibility with bare Trait syntax.
  • Why not type Foo: Trait?

    • That may be a fine syntax, but for clarity in presenting the idea I preferred to introduce a new keyword.

There are many detailed questions that would need to be resolved to fully specify this more expressive syntax, but the hope here is to (1) show that there’s a plausible direction to take and (2) give a sense for how impl Trait and a more expressive form could fit together.

Summary

Add functions to the language which take a value and an inclusive range, and will “clamp” the input to the range, i.e.:

if input > max {
    return max;
}
else if input < min {
    return min;
} else {
    return input;
}

These would be on the Ord trait, and have a special version implemented for f32 and f64.

Motivation

Clamp is a very common pattern in Rust libraries downstream. Some observed implementations of this include:

http://nalgebra.org/rustdoc/nalgebra/fn.clamp.html

http://rust-num.github.io/num/num/fn.clamp.html

Many libraries don’t expose or consume a clamp function but will instead use patterns like this:

if input > max {
    max
}
else if input < min {
    min
} else {
    input
}

and

input.max(min).min(max);

and even

match input {
    c if c > max => max,
    c if c < min => min,
    c => c,
}

Typically these patterns exist where there is a need to interface with APIs that take normalized values or when sending output to hardware that expects values to be in a certain range, such as audio samples or painting to pixels on a display.

While this is pretty trivial to implement downstream there are quite a few ways to do it and just writing the clamp inline usually results in rather a lot of control flow structure to describe a fairly simple and common concept.

Detailed design

Add the following to std::cmp::Ord

/// Returns max if self is greater than max, and min if self is less than min.  
/// Otherwise this will return self.  Panics if min > max.
#[inline]
pub fn clamp(self, min: Self, max: Self) -> Self {
    assert!(min <= max);
    if self < min {
        min
    }
    else if self > max {
        max
    } else {
        self
    }
}

And the following to libstd/f32.rs, and a similar version for f64

/// Returns max if self is greater than max, and min if self is less than min.
/// Otherwise this returns self.  Panics if min > max, or if min or max is NaN.
///
/// # Examples
///
/// ```
/// assert!((-3.0f32).clamp(-2.0f32, 1.0f32) == -2.0f32);
/// assert!((0.0f32).clamp(-2.0f32, 1.0f32) == 0.0f32);
/// assert!((2.0f32).clamp(-2.0f32, 1.0f32) == 1.0f32);
/// ```
pub fn clamp(self, min: f32, max: f32) -> f32 {
    assert!(min <= max);
    let mut x = self;
    if x < min { x = min; }
    if x > max { x = max; }
    x
}

This NaN handling behavior was chosen because a range with NaN on either side isn’t really a range at all and the function can’t be guaranteed to behave correctly if that is the case.
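As an illustration of the intended behavior (a sketch based on the implementations above, not part of the proposed API surface itself):

fn main() {
    assert_eq!(5i32.clamp(1, 3), 3);            // Ord-based clamp
    assert_eq!((-0.5f32).clamp(0.0, 1.0), 0.0); // float clamp
    // A NaN *input* falls through both comparisons and is returned as-is:
    assert!(f32::NAN.clamp(0.0, 1.0).is_nan());
    // A NaN *bound* would fail the `assert!(min <= max)` and panic:
    // (0.5f32).clamp(f32::NAN, 1.0);
}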

How We Teach This

The proposed changes would not mandate modifications to any Rust educational material.

Drawbacks

This is trivial to implement downstream, and several versions of it already exist there.

Alternatives

Alternatives were explored at https://internals.rust-lang.org/t/clamp-function-for-primitive-types/4999

Additionally there is the option of placing clamp in std::cmp as a free function in order to avoid backwards compatibility problems. This is, however, stylistically undesirable, as 1.clamp(2, 3) is more readable than clamp(1, 2, 3).

Unresolved questions

None

Summary

Copy most of the static ptr:: functions to methods on unsafe pointers themselves. Also add a few conveniences for ptr.offset with unsigned integers.

// So this:
ptr::read(self.ptr.offset(idx as isize))

// Becomes this:
self.ptr.add(idx).read()

More conveniences should probably be added to unsafe pointers, but this proposal is basically the “minimally controversial” conveniences.

Motivation

Swift lets you do this:

let val = ptr.advanced(by: idx).move()

And we want to be cool like Swift, right?

Static Functions

ptr::foo(ptr) is an odd interface. Rust developers generally favour the type-directed dispatch provided by methods: ptr.foo(). Generally the only reason we’ve ever shied away from methods is when they would be added to a type that implements Deref generically, as the . operator will follow Deref impls to try to find a matching function. This can lead to really confusing compiler errors, or code “spuriously compiling” but doing something unexpected because there was an unexpected match somewhere in the Deref chain. This is why many of Rc’s operations are static functions that need to be called as Rc::foo(&the_rc).
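For instance (a small illustration using today’s std):

use std::rc::Rc;

fn main() {
    let the_rc = Rc::new(String::from("hi"));
    // `strong_count` is an associated function, not a method: written as
    // `the_rc.strong_count()`, the dot operator could also find a
    // same-named method on the pointee (here `String`) through Deref.
    assert_eq!(Rc::strong_count(&the_rc), 1);
    // Methods on the pointee still work through Deref:
    assert_eq!(the_rc.len(), 2);
}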

This reasoning doesn’t apply to the raw pointer types, as they don’t provide a Deref impl. Although there are coercions involving the raw pointer types, these coercions aren’t performed by the dot operator. This is why it has long been considered fine for raw pointers to have the is_null and as_ref methods.

In fact, the static functions are sometimes useful precisely because they do perform raw pointer coercions, so it’s possible to do ptr::read(&val), rather than ptr::read(&val as *const _).

However these static functions are fairly cumbersome in the common case, where you already have a raw pointer.

Signed Offset

The cast in ptr.offset(idx as isize) is unnecessarily annoying. Idiomatic Rust code uses unsigned offsets, but low level code is forced to constantly cast those offsets. To understand why this interface is designed as it is, some background is needed.

offset is directly exposing LLVM’s getelementptr instruction, with the inbounds keyword. wrapping_offset removes the inbounds keyword. offset takes a signed integer, because that’s what GEP exposes. It’s understandable that we’ve been conservative here; GEP is so confusing that it has an entire FAQ.

That said, LLVM is pretty candid that it models pointers as two’s complement integers, and a negative integer is just a really big positive integer, right? So can we provide an unsigned version of offset, and just feed it down into GEP?

The relevant FAQ entry is as follows:

What happens if a GEP computation overflows?

If the GEP lacks the inbounds keyword, the value is the result from evaluating the implied two’s complement integer computation. However, since there’s no guarantee of where an object will be allocated in the address space, such values have limited meaning.

If the GEP has the inbounds keyword, the result value is undefined (a “trap value”) if the GEP overflows (i.e. wraps around the end of the address space).

As such, there are some ramifications of this for inbounds GEPs: scales implied by array/vector/pointer indices are always known to be “nsw” since they are signed values that are scaled by the element size. These values are also allowed to be negative (e.g. “gep i32 *%P, i32 -1”) but the pointer itself is logically treated as an unsigned value. This means that GEPs have an asymmetric relation between the pointer base (which is treated as unsigned) and the offset applied to it (which is treated as signed). The result of the additions within the offset calculation cannot have signed overflow, but when applied to the base pointer, there can be signed overflow.

This is written in a bit of a confusing way, so here’s a simplified summary of what we care about:

  • The pointer is treated as an unsigned number, and the offset as signed.
  • While computing the offset in bytes (idx * size_of::<T>()), we aren’t allowed to do signed overflow (nsw).
  • While applying the offset to the pointer (ptr + offset), we aren’t allowed to do unsigned overflow (nuw).

Part of the historical argument for signed offset in Rust has been a warning against these overflow concerns, but upon inspection that doesn’t really make sense.

  • If you offset a *const i16 by isize::MAX / 3 * 2 (which fits into a signed integer), then you’ll still overflow a signed integer in the implicit offset computation.
  • There’s no indication that unsigned overflow should be a concern at all.
  • The location of the offset isn’t even the place to handle this issue. The ultimate consequence of offset being signed is that LLVM can’t support allocations larger than isize::MAX bytes. Therefore this issue should be handled at the level of memory allocation code.
  • The fact that offset is unsafe is already surprising to anyone with the “it’s just addition” mental model, pushing them to read the documentation and learn the actual rules.

In conclusion: as isize doesn’t help developers write better code.

Detailed design

Methodization

Add the following method equivalents for the static ptr functions on *const T and *mut T:

(Note that this proposal doesn’t deprecate the static functions, as they still make some code more ergonomic than methods, and we’d like to avoid regressing the ergonomics of any usecase. More discussion can be found in the alternatives.)

impl<T> *(const|mut) T {
  unsafe fn read(self) -> T;
  unsafe fn read_volatile(self) -> T;
  unsafe fn read_unaligned(self) -> T;

  unsafe fn copy_to(self, dest: *mut T, count: usize);
  unsafe fn copy_to_nonoverlapping(self, dest: *mut T, count: usize);
}

And these only on *mut T:

impl<T> *mut T {
  // note that I've moved these from both to just `*mut T`, to go along with `copy_from`
  unsafe fn copy_from(self, src: *const T, count: usize);
  unsafe fn copy_from_nonoverlapping(self, src: *const T, count: usize);

  unsafe fn drop_in_place(self) where T: ?Sized;
  unsafe fn write(self, val: T);
  unsafe fn write_bytes(self, val: u8, count: usize);
  unsafe fn write_volatile(self, val: T);
  unsafe fn write_unaligned(self, val: T);
  unsafe fn replace(self, val: T) -> T;
  unsafe fn swap(self, with: *mut T);
}

(see the alternatives for why we provide both copy_to and copy_from)

Unsigned Offset

Add the following conveniences to both *const T and *mut T:

impl<T> *(const|mut) T {
  unsafe fn add(self, offset: usize) -> Self;
  unsafe fn sub(self, offset: usize) -> Self;
  fn wrapping_add(self, offset: usize) -> Self;
  fn wrapping_sub(self, offset: usize) -> Self;
}

I expect ptr.add to replace ~95% of all uses of ptr.offset, and ptr.sub to replace ~95% of the remaining 5%. It’s very rare to have an offset that you don’t know the sign of, and also don’t need special handling for.
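As a small before/after illustration (a sketch; sum is a hypothetical helper):

// Safety: `ptr` must be valid for reads of `len` consecutive `u32`s.
unsafe fn sum(ptr: *const u32, len: usize) -> u32 {
    let mut total = 0;
    for i in 0..len {
        // Before: total += ptr::read(ptr.offset(i as isize));
        total += ptr.add(i).read();
    }
    total
}

fn main() {
    let data = [1u32, 2, 3, 4];
    assert_eq!(unsafe { sum(data.as_ptr(), data.len()) }, 10);
}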

How We Teach This

Docs should be updated to use the new methods over the old ones, pretty much unconditionally. Otherwise I don’t think there’s anything to do there.

All the docs for these methods can be basically copy-pasted from the existing functions they’re wrapping, with minor tweaks.

Drawbacks

The only drawback I can think of is that this introduces a “what is idiomatic” schism between the old functions and the new ones.

Alternatives

Overload operators for more ergonomic offsets

Rust doesn’t support “unsafe operators”, and offset is an unsafe function because of the semantics of GetElementPtr. We could make wrapping_add be the implementation of +, but almost no code should actually be using wrapping offsets, so we shouldn’t do anything to make it seem “preferred” over non-wrapping offsets.

Beyond that, (ptr + idx).read_volatile() is a bit wonky to write – methods chain better than operators.

Make offset generic

We could make offset generic so it accepts usize and isize. However we would still want the sub method, and at that point we might as well have add for symmetry. Also add is shorter which is a nice carrot for users to migrate to it.

Only one of copy_to or copy_from

copy is the only mutating ptr operation that doesn’t write to the first argument. In fact, it’s clearly backwards compared to C’s memcpy. Instead it’s ordered in analogy to fs::copy.

Methodization could be an opportunity to “fix” this, and reorder the arguments, providing only copy_from. However there is concern that this will lead to users doing a blind migration without checking argument order.

One possible solution would be deprecating ptr::copy along with this as a “signal” that something strange has happened. But as discussed in the following section, immediately deprecating an API along with the introduction of its replacement tends to cause a mess in the broader ecosystem.

On the other hand, copy_to isn’t as idiomatic (see: clone_from), and there was dissatisfaction with reinforcing this API design quirk.

As a compromise, we opted to provide both, forcing users of copy to decide which they want. Ideally this will be copy_from with reversed arguments, as this is more idiomatic. Long term, we can look at deprecating copy_to and ptr::copy if desirable. Otherwise having these duplicate methods isn’t a big deal (and is technically a bit more convenient for users with a reference and a raw pointer).

Deprecate the Static Functions

To avoid any issues with the methods and static functions coexisting, we could deprecate the static functions. As noted in the motivation, these functions are currently useful for their ability to perform coercions on the first argument. However those who were taking advantage of this property can easily rewrite their code to either of the following:

(ptr as *mut _).foo();
<*mut _>::foo(ptr);

I personally consider this a minor ergonomic and readability regression from ptr::foo(ptr), and so would rather not do this.

More importantly, this would cause needless churn for old code which is still perfectly fine, if a bit less ergonomic than it could be. More ergonomic interfaces should be adopted based on their own merits; not because This Is The New Way, And Everyone Should Do It The New Way.

In fact, even if we decide we should deprecate these functions, we should still stagger the deprecation out several releases to minimize ecosystem churn. When a deprecation occurs, users of the latest compiler will be pressured by diagnostics to update their code to the new APIs. If those APIs were introduced in the same release, then they’ll be making their library only compile on the latest release, effectively breaking the library for anyone who hasn’t had a chance to upgrade yet. If the deprecation were instead done several releases later, then by the time users are pressured to use the new APIs there will be a buffer of several stable releases that can compile code using the new APIs.

Unresolved questions

None.

Summary

This RFC proposes the concept of patching sources for Cargo. Sources can have their existing versions of crates replaced with different copies, and sources can also have “prepublished” crates by adding versions of a crate which do not currently exist in the source. Dependency resolution will work as if these additional or replacement crates actually existed in the original source.

One primary feature enabled by this is the ability to “prepublish” a crate to crates.io. Prepublication makes it possible to perform integration testing within a large crate graph before publishing anything to crates.io, and without requiring dependencies to be switched from the crates.io index to git branches. It can, to a degree, simulate an “atomic” change across a large number of crates and repositories, which can then actually be landed in a piecemeal, non-atomic fashion.

Motivation

Large Rust projects often end up pulling in dozens or hundreds of crates from crates.io, and those crates often depend on each other as well. If the project author wants to contribute a change to one of the crates nestled deep in the graph (say, xml-rs), they face a couple of related challenges:

  • Before submitting a PR upstream to xml-rs, they will usually want to try integrating the change within their project, to make sure it actually meets their needs and doesn’t lead to unexpected problems. That might involve a cascade of changes if several crates in the graph depend on xml-rs. How do they go about this kind of integration work prior to sending a PR?

  • If the change to the upstream xml-rs crate is breaking (would require a new major version), it’s vital to carefully track which other crates in the graph have successfully been updated to this version, and which ones are still at the old version (and can stay there). This issue is related to the notion of private dependencies, which should have a separate RFC in the near future.

  • Once they’re satisfied with the change to xml-rs (and any other intermediate crates), they’ll need to make PRs and request a new publication to crates.io. But they would like to cleanly continue local development in the meantime, with an easy migration as each PR lands and each crate is published.

The Goldilocks problem

It’s likely that a couple of Cargo’s existing features have already come to mind as potential solutions to the challenges above. But the existing features suffer from a Goldilocks problem:

  • You might reach for git (or even path) dependencies. That would mean, for example, switching an xml-rs dependency in your crate graph from crates.io to point at, for example, a forked copy on github. The problem is that this approach does not provide enough dependency unification: if other parts of the crate graph refer to the crates.io version of xml-rs, it is treated as an entirely separate library and thus compiled separately. That in turn means that two crates in the graph using these distinct versions won’t be able to talk to each other about xml-rs types (even when those types are identical).

  • You might think that [replace] was designed precisely for the use case above. But it provides too much dependency unification: it reroutes all uses of a particular existing crate version to new source for the crate, even if there are breaking changes involved. The feature is designed for surgical patching of specific dependency versions.

Prepublication dependencies add another tool to this arsenal, with just the right amount of dependency unification: the precise amount you’d get after publication to crates.io.

Detailed design

The design itself is relatively straightforward. The Cargo.toml file will support a new section for patching a source of crates:

[patch.crates-io]
xml-rs = { path = "path/to/fork" }

The listed dependencies have the same syntax as the normal [dependencies] section, but they must all come from a different source than the source being patched. For example you can’t patch crates.io with other crates from crates.io! Cargo will load the crates and extract the version information for each dependency’s name, supplementing the source specified with the version it finds. If the same name/version pair already exists in the source being patched, then this will act just like [replace], replacing its source with the one specified in the [patch] section.

Like [replace], the [patch] section is only taken into account for the root crate (or workspace root); allowing it to accumulate anywhere in the crate dependency graph creates intractable problems for dependency resolution.

The sub-table of [patch] (where crates-io is used above) is used to specify the source that’s being patched. Cargo will know ahead of time one identifier, literally crates-io, but otherwise this field will currently be interpreted as a URL of a source. The name crates-io will correspond to the crates.io index, and other URLs, such as git repositories, may also be specified for patching. Eventually it’s intended we’ll grow support for multiple registries here with their own identifiers, but for now just literally crates-io and other URLs are allowed.

Examples

It’s easiest to see how the feature works by looking at a few examples.

Let’s imagine that xml-rs is currently at version 0.9.1 on crates.io, and we have the following dependency setup:

  • Crate foo lists dependency xml-rs = "0.9.0"
  • Crate bar lists dependency xml-rs = "0.9.1"
  • Crate baz lists dependency xml-rs = "0.8.0"
  • Crate servo has foo, bar and baz as dependencies.

With this setup, the dependency graph for Servo will contain two versions of xml-rs: 0.9.1 and 0.8.0. That’s because minor versions are coalesced; 0.9.1 is considered a minor release against 0.9.0, while 0.9.0 and 0.8.0 are incompatible.

Scenario: patching with a bugfix

Let’s say that while developing foo we’ve got a lock file pointing at xml-rs 0.9.0, and the 0.9.0 branch of xml-rs hasn’t been touched since that version was published. We then find a bug in the 0.9.0 publication of xml-rs which we’d like to fix.

First we’ll check out foo locally and implement what we believe is a fix for this bug, and next, we change Cargo.toml for foo:

[patch.crates-io]
xml-rs = { path = "../xml-rs" }

When compiling foo, Cargo will resolve the xml-rs dependency to 0.9.0, as it did before, but that version’s been replaced with our local copy. The local path dependency, which has version 0.9.0, takes precedence over the version found in the registry.

Once we’ve confirmed the fix, we continue by running the tests in xml-rs itself, and then send a PR to the main xml-rs repo. This then leads us to the next section, where a new version of xml-rs comes into play!

Scenario: prepublishing a new minor version

Now, suppose that foo needs some changes to xml-rs, but we want to check that all of Servo compiles before pushing the changes through.

First, we change Cargo.toml for foo:

[patch.crates-io]
xml-rs = { git = "https://github.com/aturon/xml-rs", branch = "0.9.2" }

[dependencies]
xml-rs = "0.9.2"

For servo, we also need to record the prepublication, but don’t need to modify or introduce any xml-rs dependencies; it’s enough to be using the fork of foo, which we would be anyway:

[patch.crates-io]
xml-rs = { git = "https://github.com/aturon/xml-rs", branch = "0.9.2" }
foo = { git = "https://github.com/aturon/foo", branch = "fix-xml" }

Note that if Servo depended directly on foo it would also be valid to do:

[patch.crates-io]
xml-rs = { git = "https://github.com/aturon/xml-rs", branch = "0.9.2" }

[dependencies]
foo = { git = "https://github.com/aturon/foo", branch = "fix-xml" }

With this setup:

  • When compiling foo, Cargo will resolve the xml-rs dependency to 0.9.2, and retrieve the source from the specified git branch.

  • When compiling servo, Cargo will again resolve two versions of xml-rs, this time 0.9.2 and 0.8.0, and for the former it will use the source from the git branch.

The Cargo.toml files that needed to be changed here span from the crate that actually cares about the new version (foo) upward to the root of the crate we want to do integration testing for (servo); no sibling crates needed to be changed.

Once xml-rs version 0.9.2 is actually published, we will likely be able to remove the [patch] sections. This is a discrete step that must be taken by crate authors, however (i.e. it doesn’t happen automatically), because the actual published 0.9.2 may not be precisely what we thought it was going to be. For example more changes could have been merged, it may not actually fix the bug, etc.

Scenario: prepublishing a breaking change

What happens if foo instead needs to make a breaking change to xml-rs? The workflow is identical. For foo:

[patch.crates-io]
xml-rs = { git = "https://github.com/aturon/xml-rs", branch = "0.10.0" }

[dependencies]
xml-rs = "0.10.0"

For servo:

[patch.crates-io]
xml-rs = { git = "https://github.com/aturon/xml-rs", branch = "0.10.0" }

[dependencies]
foo = { git = "https://github.com/aturon/foo", branch = "fix-xml" }

However, when we compile, we’ll now get three versions of xml-rs: 0.8.0, 0.9.1 (retained from the previous lockfile), and 0.10.0. Assuming that xml-rs is a public dependency used to communicate between foo and bar this will result in a compilation error, since they are using distinct versions of xml-rs. To fix that, we’ll need to update bar to also use the new, 0.10.0 prepublication version of xml-rs.

(Note that a private dependency distinction would help catch this issue at the Cargo level and give a maximally informative error message).

Impact on Cargo.lock

Usage of [patch] will perform backwards-incompatible modifications to Cargo.lock, meaning that usage of [patch] will prevent previous versions of Cargo from interpreting the lock file. Cargo will unconditionally resolve all entries in the [patch] section to precise dependencies, encoding them all in the lock file whether they’re used or not.

Dependencies formed on crates listed in [patch] will then be listed directly in Cargo.lock, and the original listed crate will not be listed. In our example above we had:

  • Crate foo lists dependency xml-rs = "0.9.0"
  • Crate bar lists dependency xml-rs = "0.9.1"
  • Crate baz lists dependency xml-rs = "0.8.0"

We then update the crate foo to have a dependency of xml-rs = "0.10.0". This causes Cargo to encode in the lock file that foo depends directly on the git repository of xml-rs containing 0.10.0, but it does not mention that foo depends on the crates.io version of xml-rs-0.10.0 (it doesn’t exist!). Note, however, that the lock file will still mention xml-rs-0.8.0 and xml-rs-0.9.1, because bar and baz still depend on those versions.

To help put some TOML where our mouth is let’s say we depend on env_logger but we’re using [patch] to depend on a git version of the log crate, a dependency of env_logger. First we’ll have our Cargo.toml including:

# Cargo.toml
[dependencies]
env_logger = "0.4"

With that we’ll find this in Cargo.lock; notably, everything comes from crates.io:

# Cargo.lock
[[package]]
name = "env_logger"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
 "log 0.3.7 (registry+https://github.com/rust-lang/crates.io-index)",
]

[[package]]
name = "log"
version = "0.3.7"
source = "registry+https://github.com/rust-lang/crates.io-index"

Next up we’ll add our [patch] section for crates.io:

# Cargo.toml
[patch.crates-io]
log = { git = 'https://github.com/rust-lang-nursery/log' }

and that will generate a lock file that looks (roughly) like:

# Cargo.lock
[[package]]
name = "env_logger"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
 "log 0.3.7 (git+https://github.com/rust-lang-nursery/log)",
]

[[package]]
name = "log"
version = "0.3.7"
source = "git+https://github.com/rust-lang-nursery/log#cb9fa28812ac27c9cadc4e7b18c221b561277289"

Notably log from crates.io is not mentioned at all here, and crucially so! Additionally Cargo has the fully resolved version of the log patch available to it, down to the sha of what to check out.

When Cargo rebuilds from this Cargo.lock it will not query the registry for versions of log, instead seeing that there’s an exact dependency on the git repository (from the Cargo.lock) and the repository is listed as a patch, so it’ll follow that pointer.

Impact on [replace]

The [patch] section in the manifest can in many ways be seen as a “replace 2.0”. It is, in fact, strictly more expressive than the current [replace] section! For example these two sections are equivalent:

[replace]
'log:0.3.7' = { git = 'https://github.com/rust-lang-nursery/log' }

# is the same as...

[patch.crates-io]
log = { git = 'https://github.com/rust-lang-nursery/log' }

This is not accidental! The initial development of the [patch] feature was actually focused on prepublishing dependencies and was called [prepublish], but while discussing it a conclusion was reached: [prepublish] already allowed replacing existing versions in a registry, merely issuing a warning when doing so. Once that warning was removed, we had a full-on [replace] replacement!

At this time, though, it is not planned to deprecate the [replace] section, nor remove it. After the [patch] section is implemented, if it ends up working out this may change. If after a few cycles on stable the [patch] section seems to be working well we can issue an official deprecation for [replace], printing a warning if it’s still used.

Documentation, however, will immediately begin to recommend [patch] over [replace].

How We Teach This

Patching is a feature intended for large-scale projects spanning many repos and crates, where you want to make something like an atomic change across the repos. As such, it should likely be explained in a dedicated section for large-scale Cargo usage, which would also include build system integration and other related topics.

The mechanism itself is straightforward enough that a handful of examples (as in this RFC) is generally enough to explain it. In the docs, these examples should be spelled out in greater detail.

Most notably, however, the overriding dependencies section of Cargo’s documentation will be rewritten to primarily mention [patch], but [replace] will be mentioned still with a recommendation to use [patch] instead if possible.

Drawbacks

This feature adds yet another knob around where, exactly, Cargo is getting its source and version information. In particular, it’s basically deprecating [replace] if it works out, and it’s typically a shame to deprecate major stable features.

Fortunately, because these features are intended to be relatively rarely used, checked in even more rarely, are only used for very large projects, and cannot be published to crates.io, the knobs are largely invisible to the vast majority of Cargo users, who are unaffected by them.

Alternatives

The primary alternative for addressing the motivation of this RFC would be to loosen the restrictions around [replace], allowing it to arbitrarily change the version of the crate being replaced.

As explained in the motivation section, however, such an approach does not fully address the desired workflow, for a few reasons:

  • It does not make it possible to track which crates in the dependency graph have successfully upgraded to a new major version of the replaced dependency, which could have the effect of masking important behavioral breaking changes (that still allow the crates to compile).

  • It does not provide an easy-to-understand picture of what the crates will likely look like after the relevant dependencies are published. In particular, you can’t use the usual resolution algorithm to understand what’s going on with version resolution. A good example of this is the “breaking change” example above where we ended up with three versions of xml-rs after our prepublished version. It’s crucial that 0.9.1 was still in the lock file because we hadn’t updated that dependency on 0.9.1 yet, so it wasn’t ready for 0.10.0. With [replace], however, we would only possibly be able to replace all usage of 0.9.1 with 0.10.0, not having an incremental solution.

Unresolved questions

  • It would be extremely helpful to provide a first-class workflow for forking a dependency and making the necessary changes to Cargo.toml for prepublication, and for fixing things up when publication actually occurs. That shouldn’t be hard to do, but is out of scope for this RFC.

Summary

Overhaul the global allocator APIs to put them on a path to stabilization, and switch the default allocator to the system allocator when the feature stabilizes.

This RFC is a refinement of the previous RFC 1183.

Motivation

The current API

The unstable allocator feature allows developers to select the global allocator which will be used in a program. A crate identifies itself as an allocator with the #![allocator] annotation, and declares a number of allocation functions with specific #[no_mangle] names and a C ABI. To override the default global allocator, a crate simply pulls an allocator in via an extern crate.

There are a couple of issues with the current approach:

  • A C-style ABI is error prone - nothing ensures that the signatures are correct, and if a function is omitted that error will be caught by the linker rather than the compiler.

  • Allocators have some state, and with the current API, that state is forced to be truly global since bare functions can’t carry state.

  • Since an allocator is automatically selected when it is pulled into the crate graph, it is painful to compose allocators. For example, one may want to create an allocator which records statistics about active allocations, or adds padding around allocations to attempt to detect buffer overflows in unsafe code. To do this currently, the underlying allocator would need to be split into two crates, one which contains all of the functionality and another which is tagged as an #![allocator].

jemalloc

Rust’s default allocator has historically been jemalloc. While jemalloc does provide significant speedups over certain system allocators for some allocation-heavy workloads, it has been a source of problems. For example, it has deadlock issues on Windows, does not work with Valgrind, adds ~300KB to binaries, and has caused crashes on macOS 10.12. See this comment for more details. As a result, it is already disabled on many targets, including all of Windows. While there are certainly contexts in which jemalloc is a good choice, developers should be making that decision, not the compiler. The system allocator is a more reasonable and unsurprising default choice.

A third party crate allowing users to opt into jemalloc would also open the door to providing access to some of the library’s other features, such as tracing, arena pinning, and diagnostic output dumps, for code that depends on jemalloc directly.

Detailed design

Defining an allocator

Global allocators will use the Allocator trait defined in RFC 1398. However, Allocator’s methods take &mut self, since the trait is designed to be used with individual collections. Because a global allocator is shared across threads, we can’t take &mut self references to it. So, instead of implementing Allocator for the allocator type itself, it is implemented for shared references to the allocator. This is a bit strange, but it is similar to File’s Read and Write implementations, for example.

pub struct Jemalloc;

impl<'a> Allocator for &'a Jemalloc {
    // ...
}

Using an allocator

The alloc::heap module will contain several items:

/// Defined in RFC 1398
pub struct Layout { ... }

/// Defined in RFC 1398
pub unsafe trait Allocator { ... }

/// An `Allocator` which uses the system allocator.
///
/// This uses `malloc`/`free` on Unix systems, and `HeapAlloc`/`HeapFree` on
/// Windows, for example.
pub struct System;

unsafe impl Allocator for System { ... }

unsafe impl<'a> Allocator for &'a System { ... }

/// An `Allocator` which uses the configured global allocator.
///
/// The global allocator is selected by defining a static instance of the
/// allocator and annotating it with `#[global_allocator]`. Only one global
/// allocator can be defined in a crate graph.
///
/// # Note
///
/// For technical reasons, only non-generic methods of the `Allocator` trait
/// will be forwarded to the selected global allocator in the current
/// implementation.
pub struct Heap;

unsafe impl Allocator for Heap { ... }

unsafe impl<'a> Allocator for &'a Heap { ... }

This module will be reexported as std::alloc, which will be the location at which it will be stabilized. The alloc crate is not proposed for stabilization at this time.

An example of setting the global allocator:

extern crate my_allocator;

use my_allocator::{MyAllocator, MY_ALLOCATOR_INIT};

#[global_allocator]
static ALLOCATOR: MyAllocator = MY_ALLOCATOR_INIT;

fn main() {
    ...
}

Note that ALLOCATOR is still a normal static value - it can be used like any other static would be.
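
For instance, a program could call inherent methods on it directly (a sketch; stats is a hypothetical inherent method of MyAllocator, not part of this RFC):

fn report() {
    // `ALLOCATOR` can be used like any other `MyAllocator` value;
    // `stats()` here is a hypothetical inherent method.
    println!("allocations so far: {}", ALLOCATOR.stats());
}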

The existing alloc_system and alloc_jemalloc crates will likely be deprecated and eventually removed. The alloc_system crate is replaced with the System structure in the standard library, and the alloc_jemalloc crate will become available on crates.io. The alloc_jemalloc crate will likely look like:

pub struct Jemalloc;

unsafe impl Allocator for Jemalloc {
    // ...
}

unsafe impl<'a> Allocator for &'a Jemalloc {
    // ...
}
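
A binary crate opting back into jemalloc might then look like the following (a sketch; the exact crates.io package name is an assumption):

extern crate jemalloc;

#[global_allocator]
static ALLOCATOR: jemalloc::Jemalloc = jemalloc::Jemalloc;

fn main() {
    // Every heap allocation below now goes through jemalloc.
    let v = vec![1, 2, 3];
    println!("{:?}", v);
}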

It is not proposed in this RFC to switch the per-platform default allocator just yet. Assuming everything goes smoothly, however, the default will likely be defined as System, with platforms transitioning away from jemalloc-by-default once the jemalloc crate on crates.io is stable and usable.

The compiler will also no longer forbid the cyclic dependency between a crate defining an allocator implementation and the alloc crate itself. This restriction is a vestige of the current implementation, needed only to get around linkage errors in which the liballoc rlib references symbols defined in the “allocator crate”. With this RFC the compiler has far more control over the ABI and linkage here, so this restriction is no longer necessary.

How We Teach This

Global allocator selection would be a somewhat advanced topic - the system allocator is sufficient for most use cases. It is a new tool that developers can use to optimize for their program’s specific workload when necessary.

It should be emphasized that in most cases, the “terminal” crate (i.e. the bin, cdylib or staticlib crate) should be the only thing selecting the global allocator. Libraries should be agnostic over the global allocator unless they are specifically designed to augment functionality of a specific allocator.

Defining an allocator is an even more advanced topic that should probably live in the Nomicon.

Drawbacks

Dropping the default of jemalloc will regress performance of some programs until they manually opt back into that allocator, which may produce confusion in the community as to why things suddenly became slower.

Depending on an implementation of a trait for references to a type is unfortunate: it’s pretty strange and unfamiliar to many Rust developers. Many global allocators are zero-sized, as their state lives outside of the Rust structure, but a reference to the allocator will be 4 or 8 bytes. If developers wish to use global allocators as “normal” allocators in individual collections, allocator authors may have to implement Allocator twice - for the type and for references to the type. One can forward to the other, but ideally this extra work would not be necessary.

In theory, there could be a blanket implementation of impl<'a, T> Allocator for T where &'a T: Allocator, but the compiler is unfortunately not able to deal with this currently.

The Allocator trait defines some functions which have generic arguments. They’re purely convenience functions, but if a global allocator overrides them, the custom implementations will not be used when going through the Heap type. This may be confusing.

Alternatives

We could define a separate GlobalAllocator trait with methods taking &self to avoid the strange implementation-for-references requirement. This does require duplicating some or all of the API surface and documentation of Allocator into a second trait differing only in receiver type.
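
A sketch of what that alternative trait might look like (the signatures are illustrative, modeled on the Allocator methods from RFC 1398, and are not part of this proposal):

pub unsafe trait GlobalAllocator {
    unsafe fn alloc(&self, layout: Layout) -> Result<*mut u8, AllocErr>;
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout);
    // ... the rest of the `Allocator` surface, duplicated with `&self`
    // receivers ...
}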

The GlobalAllocator trait could instead be responsible for simply returning a type which implements Allocator. This avoids both the duplication and the strange implementation-for-references issues in the other possibilities, but it can’t be defined in a reasonable way without HKT, and it is a somewhat strange layer of indirection.

Unresolved questions

Are System and Heap the right names for the two Allocator implementations in std::heap?

Should std::heap also have free functions which forward to the global allocator?

Summary

Introduce a public/private distinction to crate dependencies.

Motivation

The crates ecosystem has greatly expanded since Rust 1.0. With that, a few patterns for dependencies have evolved that challenge the currently existing dependency declaration system in Cargo and Rust. The most common problem is that a crate A depends on another crate B but some of the types from crate B are exposed through the API in crate A. This causes problems in practice if that dependency B is also used by the user’s code itself, crate B resolves to different versions for each usage, and the values of types from the two crate B instances need to be used together but don’t match. In this case, the user’s code will refuse to compile because different versions of those libraries are requested, and the compiler messages are less than clear.

The introduction of an explicit distinction between public and private dependencies can solve some of these issues. This distinction should also let us lift some restrictions and make some code compile that previously was prevented from compiling.

Q: What is a public dependency?
A: A dependency is public if some of the types or traits of that dependency are themselves exported through the public API of the main crate. The most common places where this happens are return values and function parameters. The same applies to trait implementations and many other things. Because “public” can be tricky for a user to determine, this RFC proposes to extend the compiler infrastructure to detect the concept of a “public dependency”. This will help users understand the concept so they can avoid making mistakes in their Cargo.toml.

Effectively, the idea is that if you bump a public dependency’s version, it’s a breaking change of your own crate.
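
A minimal sketch of a public dependency: here url is public because url::Url appears in a public signature.

extern crate url;

// Bumping the major version of `url` changes this crate's own public API,
// so `url` would need to be declared `public = true`.
pub fn parse_homepage(input: &str) -> url::Url {
    url::Url::parse(input).unwrap()
}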

Q: What is a private dependency?
A: On the other hand, a private dependency is contained within your crate and effectively invisible for users of your crate. As a result, private dependencies can be freely duplicated in the dependency graph and won’t cause compilation errors. This distinction will also make it possible to relax some restrictions that currently exist in Cargo which sometimes prevent crates from compiling.

Q: Can public become private later?
A: Public dependencies are public within a reachable subgraph but can become private if a crate stops exposing a public dependency. For instance, it is very possible to have a family of crates that all depend on a utility crate that provides common types which is a public dependency for all of them. However, if your own crate ends up being a user of this utility crate but none of its types or traits become part of your own API, then this utility crate dependency is marked private.

Q: Where is public / private defined?
A: Dependencies are private by default and are made public through a public flag on the dependency in the Cargo.toml file. This also means that crates created before the implementation of this RFC will have all their dependencies private.

Q: How is backwards compatibility handled?
A: It will continue to be permissible to “leak” dependencies (and there are even some use cases of this), however, the compiler or Cargo will emit warnings if private dependencies are part of the public API. Later, it might even become invalid to publish new crates without explicitly silencing these warnings or marking the dependencies as public.

Q: Can I export a type from a private dependency as my own?
A: For now, it will not be strictly permissible to privately depend on a crate and export one of its types as your own. The reason is that it is currently not possible to force this type to be distinct: users of your crate might accidentally start depending on that type being compatible with the original if they also start depending on the crate that actually implements it. The limitations from the previous answer apply (e.g.: you can currently overrule the restrictions).

Q: How do semver and dependencies interact?
A: It is already the case that changing your own dependencies would require a semver bump for your own library because your API contract to the outside world changes. This RFC, however, makes it possible to only have this requirement for public dependencies and would permit Cargo to prevent new crate releases with semver violations.

Detailed design

There are a few areas that need to be changed for this RFC:

  • The compiler needs to be extended to understand when crate dependencies are considered a public dependency
  • The Cargo.toml manifest needs to be extended to support declaring public dependencies. This will start as an unstable cargo feature available on nightly and only via opt-in.
  • The public attribute of dependencies needs to appear in the Cargo index in order to be used by Cargo during version resolution
  • Cargo’s version resolution needs to change to reject crate graph resolutions where two versions of a crate are publicly reachable from each other.
  • The cargo publish process needs to be changed to warn (or prevent) the publishing of crates that have undeclared public dependencies
  • cargo publish will resolve dependencies to the lowest possible versions in order to check that the minimal version specified in Cargo.toml is correct.
  • Crates.io should show public dependencies more prominently than private ones.

Compiler Changes

The main change to the compiler will be to accept a new parameter that Cargo supplies which is a list of public dependencies. The flag will be called --extern-public. The compiler then emits warnings if it encounters private dependencies leaking to the public API of a crate. cargo publish might change this warning into an error in its lint step.

Additionally, later on, the warning can turn into a hard error in general.

In some situations, it can be necessary to allow private dependencies to become part of the public API. In that case one can permit this with #[allow(external_private_dependency)]. This is particularly useful when paired with #[doc(hidden)] and other already existing hacks.

This most likely will also be necessary for the more complex relationship of libcore and libstd in Rust itself.
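
A sketch of opting out for a single item (the private_dep crate and the helper function are hypothetical):

#[allow(external_private_dependency)]
#[doc(hidden)]
pub fn __internal_helper(config: private_dep::Config) {
    // `private_dep` remains a private dependency even though `Config`
    // technically appears in a public signature here.
}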

Changes to Cargo.toml

The Cargo.toml file will be amended to support the new public parameter on dependencies. Old Cargo versions will emit a warning when this key is encountered but otherwise continue. Since dependencies are private by default, only public ones - which should be the minority - need to be tagged.

This will start as an unstable Cargo feature available on nightly only that authors will need to opt into via a feature specified in Cargo.toml before Cargo will start using the public attribute to change the way versions are resolved. The Cargo unstable feature will turn on a corresponding rustc unstable feature for the compiler changes noted above.

Example dependency:

[dependencies]
url = { version = "1.4.0", public = true }

Changes to the Cargo Index

The Cargo index used by Cargo when resolving versions will contain the public attribute on dependencies as specified in Cargo.toml. For example, an index line for a crate named example that publicly depends on the url crate would look like (JSON prettified for legibility):

{
    "name":"example",
    "vers":"0.1.0",
    "deps":[
        {
            "name":"url",
            "req":"^1.4.0",
            "public":"true",
            "features":[],
            "optional":false,
            "default_features":true,
            "target":null,
            "kind":"normal"
        }
    ]
}

Changes to Cargo Version Resolution

Cargo will specifically reject graphs that contain two different versions of the same crate being publicly depended upon and reachable from each other. This will prevent the strange errors possible today at version resolution time rather than at compile time.

How this will work:

  • First, a resolution graph has a bunch of nodes. These nodes are “package ids” which are a triple of (name, source, version). Basically this means that different versions of the same crate are different nodes, and different sources of the same name (e.g. git and crates.io) are also different nodes.
  • There are directed edges between nodes. A directed edge represents a dependency. For example if A depends on B then there’s a directed edge from A to B.
  • With public/private dependencies, we can now say that every edge is either tagged with public or private.
  • This means that we can have a collection of subgraphs purely connected by public dependency edges. The directionality of the public dependency edges within the subgraph doesn’t matter. Each of these subgraphs represents an “ecosystem” of crates publicly depending on each other. These subgraphs are “pools of public types” where if you have access to the subgraph, you have access to all types within that pool of types.
  • We can place a constraint that each of these “publicly connected subgraphs” are required to have exactly one version of all crates internally. For example, each subgraph can only have one version of Hyper.
  • Finally, we can consider all pairs of edges coming out of one node in the resolution graph. If the two edges point to two distinct publicly connected subgraphs from above and those subgraphs contain two different versions of the same crate, we consider that an error. This basically means that if you privately depend on Hyper 0.3 and Hyper 0.4, that’s an error.

Changes to Cargo Publish: Warnings

When a new crate version is published, Cargo will warn about types and traits that the compiler determined to be public but did not come from a public dependency. For now, it should be possible to publish anyway, but at some point in the future it will be necessary to explicitly mark all public dependencies as such or explicitly mark them with #[allow(external_private_dependency)].

Changes to Cargo Publish: Lowest Version Resolution

A very common situation today is that people write the initial version of a dependency in their Cargo.toml, but never bother to update it as they take advantage of new features in newer versions. This works out okay because (1) Cargo will generally use the largest version it can find, compatible with constraints, and (2) upper bounds on constraints (at least within a particular minor version) are relatively rare. That means, in particular, that Cargo.toml is not a fully accurate picture of version dependency information; in general it’s a lower bound at best. There can be “invisible” dependencies that don’t cause resolution failures but can create compilation errors as APIs evolve.

Public dependencies exacerbate the above problem, because you can end up relying on features of a “new API” from a crate you didn’t even know you depended on! For example:

  • A depends on:
    • B 1.0 which publicly depends on C ^1.0
    • D 1.0, which has no dependencies
  • When A is initially built, it resolves to B 1.0 and C 1.1.
    • Because C’s APIs are available to A via re-exports in B, A effectively depends on C 1.1 now, even though B only claims to depend on C ^1.0
    • In particular, the code in A might depend on APIs only available in C 1.1
    • However, if A is a library, we don’t check in any lockfile for it, so this information is lost.
  • Now we change A to depend on D 1.1, which depends on C =1.0
    • A fresh copy of A, when built, will now resolve the crate graph to B 1.0, D 1.1, C 1.0
    • But now A may suddenly fail to compile, because it was implicitly depending on C 1.1 features via B.

This example and others like it rely on a common ingredient: a crate somewhere using an API that only is available in a newer version of a crate than the version listed in Cargo.toml.

To attempt to surface this problem earlier, cargo publish will attempt to resolve the graph while picking the smallest versions compatible with constraints. If the crate fails to build with this resolution graph, the publish will fail.

How We Teach This

From the user’s perspective, the initial scope of the RFC will be mostly transparent, but the new restrictions will inevitably raise questions about what they mean. In particular, a common way in which most crates leak types from their APIs is error handling: quite frequently, users wrap errors from other libraries in their own types. It might make sense to identify common cases of where type leakage happens and provide hints in the lint about how to deal with it.

Cases that I anticipate that should be explained separately:

  • Type leakage through errors: This should be easy to spot for a lint because the wrapper type will implement std::error::Error. The recommendation should most likely be to encourage wrapping the internal error (see the sketch after this list).
  • Traits from other crates: In particular, serde and some other common crates will show up frequently. It might make sense to separately explain types and traits.
  • Type leakage through derive: Users might not be aware they have a dependency on a type when they derive a trait (think serde_derive). The lint might want to call this out separately.
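
A sketch of the error-wrapping recommendation from the first bullet (crate and type names are illustrative):

extern crate url; // a private dependency in this sketch

use std::error::Error;
use std::fmt;

#[derive(Debug)]
pub struct MyError; // wraps url::ParseError without exposing it

impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "invalid URL input")
    }
}

impl Error for MyError {
    fn description(&self) -> &str {
        "invalid URL input"
    }
}

impl From<url::ParseError> for MyError {
    fn from(_: url::ParseError) -> MyError {
        MyError
    }
}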

The feature will be called public_private_dependencies and it comes with one lint flag called external_private_dependency. For all intents and purposes, this should be the extent of the new terms introduced in the beginning. This RFC, however, lays the groundwork for later providing aliasing so that a private dependency could be forcefully re-exported as the crate’s own types. As such, it might make sense to consider how to refer to this.

It is assumed that this feature will eventually become quite popular due to patterns that already exist in the crate ecosystem. It’s likely that it will evoke some negative opinions initially. As such, it would be a good idea to make a run with cargobomb/crater to see what the actual impact of the new linter warnings is and how far away we are from making them errors.

Crates.io should be updated to render public and private dependencies separately.

End user experience

Author of a crate with one dependency

Assume today that an author of a library crate onedep has a dependency on the url crate and that the url::Origin type is exposed in onedep’s public API.

onedep’s Cargo.toml:

[package]
name = "onedep"
version = "0.1.0"

[dependencies]
url = "1.0.0"

onedep’s src/lib.rs:

extern crate url;
use url::Origin;

use std::collections::HashMap;

// `Debug` lets downstream users (like `twodep` below) print the tracker.
#[derive(Debug, Default)]
pub struct OriginTracker {
    origin_counts: HashMap<Origin, usize>,
}

impl OriginTracker {
    pub fn log_origin(&mut self, origin: Origin) {
        let counter = self.origin_counts.entry(origin).or_insert(0);
        *counter += 1;
    }
}

When the author of onedep upgrades Rust/Cargo to a version where this RFC is completely implemented, the author will notice two changes:

  1. When they run cargo build, the build will succeed but they will get a warning that a private dependency (the url crate specifically) is used in their public API (the url::Origin type in the pub fn log_origin function specifically) and that they should consider adding public = true to their Cargo.toml. Ideally the warning would say something like:

        consider changing dependency:
    
        ```
        url = "1.0.0"
        ```
    
        to:
    
        ```
        url = { version = "1.0.0", public = true }
        ```
    

The warning could also encourage the author to then bump their crate’s major version since adding public dependencies is a breaking change.

  2. When they run cargo publish, the build check that happens after packaging will fail and the publish will fail. This is because deriving Hash on url::Origin wasn’t added until v1.5.1 of the url crate. The author of onedep has been running cargo update periodically, and their Cargo.lock has url 1.5.1, but they never updated Cargo.toml to indicate that they have a new lower bound. Since cargo publish will try to resolve dependencies to the lowest possible versions, it will choose version 1.0.0 of the url crate, which doesn’t implement Hash on Origin.

There should be a clear error message for this case that indicates Cargo has resolved crates to their lowest possible versions, that this might be the cause of the compilation failure, and that the author should investigate the versions of their dependencies in Cargo.toml to see if they should be updated. The failed publish should also change the Cargo.lock so that running cargo build will reproduce the error for the author to fix.

Author of a crate with multiple dependencies

twodep’s Cargo.toml:

[package]
name = "twodep"
version = "0.1.0"

[dependencies]
# this is the version of onedep above using a public dep on url 1.5.1
onedep = "1.0.0"
url = "1.0.0"

twodep’s src/main.rs:

extern crate url;
use url::Url;

extern crate onedep;

use std::io::{self, BufRead};

fn main() {
    let mut origin_tracker = onedep::OriginTracker::default();

    let stdin = io::stdin();
    println!("Please enter URLs, one per line!");
    for line in stdin.lock().lines() {
        // Parse each line as a URL and log its origin.
        let url = Url::parse(line.unwrap().trim()).unwrap();
        origin_tracker.log_origin(url.origin());
        // other stuff
    }
    println!("Here are all the origins you mentioned: {:#?}", origin_tracker);
}

Before upgrading Rust/Cargo to a version where this RFC has been implemented, this code might have produced a compilation error if Cargo had resolved the direct dependency on the url crate to a different version than the one onedep resolved to. Or it might have resolved and compiled fine if the versions happened to be the same.

After upgrading Rust/Cargo, if this code previously had a compilation error, it would now have a version resolution problem that Cargo would either resolve automatically or report, prompting the user to change version constraints or run cargo update. If the code was compiling before, the previous resolution graph must have been good, so nothing changes on upgrading.

This crate is a binary and doesn’t have a public API, so it won’t get any warnings about crates not being marked public.

If the author publishes to crates.io after upgrading Rust/Cargo, since onedep’s public dependency on url now has a lower bound of 1.5.1, the only valid graphs that Cargo will generate will be with url 1.5.1 or greater, which is also compatible with the url 1.0.0 direct dependency. Publish will work without any errors or further changes.

Drawbacks

I believe that there are no drawbacks if implemented well (this assumes good linters and error messages).

Alternatives

For me, the biggest alternative to this RFC would be a variation of it where type and trait aliasing immediately becomes part of it. This would mean that a crate can have a private dependency and re-export one of its types as its own, hiding where it came from originally. This would most likely be easier to teach users and would eliminate a few “cul-de-sac” situations where a user’s only way out today is to introduce a public dependency. The assumption is that if trait and type aliasing were available, the external_private_dependency lint would not need to exist.

Unresolved questions

There are a few open questions about how to best hook into the compiler and Cargo infrastructure:

  • What is the impact of this change going to be? This most likely can be answered running cargobomb/crater.
  • Since changing public dependency pins/ranges requires a change in semver, it might be worth exploring if Cargo could prevent the user from publishing new crate versions that violate that constraint.
  • If this is implemented before the RFC to deprecate extern crate, how would this work if you’re not using --extern?

Summary

Amend RFC 1242 to require an RFC for deprecation of crates from the rust-lang-nursery.

Motivation

There are currently some ubiquitous crates in the nursery that are being used by lots and lots of people, as evidenced by the crates.io download numbers (for lack of a better popularity metric):

Nursery crate    Downloads
bitflags         3,156k
rand             2,615k
log              2,417k
lazy-static      2,108k
tempdir          934k
uuid             759k
glob             467k
net2             452k
getopts          452k
rustfmt          80k
simd             14k

(numbers as of 2017-04-26)

RFC 1242 currently specifies that

The libs subteam can deprecate nursery crates at any time

The libs team can of course be trusted to be judicious in making such decisions. However, considering that many of the nursery crates are depended on by big fractions of the Rust ecosystem, suddenly deprecating things without public discussion seems contrary to Rust’s philosophy of stability and community participation. Involving the Rust community at large in these decisions offers the benefits of the RFC process such as increased visibility, differing viewpoints, and transparency.

Detailed design

The exact amendment is included as a change to the RFC in this PR. View the amended text.

How We Teach This

N/A

Drawbacks

Requiring an RFC for deprecation might impose an undue burden on the library subteam in terms of crate maintenance. However, as RFC 1242 states, this is not a major commitment.

Acceptance into the nursery could be hindered if it is believed it could be hard to reverse course later due to the required RFC being perceived as an obstacle. On the other hand, RFCs with broad consensus do not generally impose a large procedural burden, and if there is no consensus it might be too early to deprecate a nursery crate anyway.

Alternatives

Don’t change the process and let the library subteam make deprecation decisions for nursery crates.

Unresolved questions

None as of yet.

Summary

Official web content produced by the Rust teams for consumption by Rust users should work in the majority of browsers that Rust users are visiting these sites in. The Rust compiler only supports a finite number of targets, with varying degrees of support, due to the limits on time, expertise, and testing resources. Similarly, we don’t have enough time, expertise and testing resources to be sure that our web content works in every version of every browser. We should have a list of browsers and versions in various tiers of support.

Motivation

This pull request to remove jQuery from rustdoc’s output prompted discussion about what we could and could not do because of browser support - a discussion we haven’t yet had as a community.

Crates.io doesn’t display correctly in browsers without support for flexbox, such as the browser on Windows Phone 8.1, a device that is no longer supported. I made the decision that it wasn’t worth the community’s time to fix this issue; did I make the correct tradeoff for the community?

Supporting all versions of all browsers with the same behavior is impossible with the small number of people who work on Rust’s web content. Crates.io is not currently doing any cross-browser testing; there are some JavaScript tests of the UI that run in PhantomJS, a headless WebKit. Since we’re not testing, we don’t actually know what our current web support even is, except for when we get bug reports from users.

In order to fully test on all browsers to be sure of our support, we would either need to have all the devices, operating systems, browsers, and versions available, along with people with the time and inclination to do manual testing on all of these, or we would need to be running automated tests on something like BrowserStack. BrowserStack does appear to have a free plan for open source projects, but it’s unclear how many parallel tests the open source plan would give us, and we’d at least be spending time waiting for test results on the various stacks. BrowserStack also doesn’t support every platform; desktop Linux, a notable section of our userbase, is missing from their platforms.

Detailed design

Rust web content

Officially produced web content includes:

  • rust-lang.org
  • blog.rust-lang.org
  • play.rust-lang.org
  • crates.io
  • Rustdoc output
  • thanks.rust-lang.org

Explicitly not included:

Things that are not really under our control but are used for official or almost-official Rust web content:

  • GitHub
  • docs.rs
  • Discourse (used for urlo and irlo)
  • mdBook output (used for the books and other documentation)

Proposed browser support tiers

Based on actual usage metrics and with a goal of spending our time in an effective way, the browser support tiers would be defined as:

Browsers are listed in browserslist format.

Supported browsers

Goal: Ensure functionality of our web content for 80% of users.

Browsers:

last 2 Chrome versions
last 1 Firefox version
Firefox ESR
last 1 Safari version
last 1 iOS version
last 1 Edge version
last 1 UCAndroid version

On browserl.ist

Support:

  • We add automated testing of functionality in a variety of browsers through a service such as BrowserStack for each of these as much as possible (and work on adding this type of automated testing to those web contents that aren’t currently tested, such as rustdoc output).
  • Bugs affecting the functionality of the sites in these browsers are prioritized highly.

Unsupported browsers

Goal: Avoid spending large amounts of time and code complexity debugging and hacking around quirks in older or more obscure browsers.

Browsers:

  • Any not mentioned above

Support:

  • No automated testing for these.
  • Bug reports for these browsers are closed as WONTFIX.
  • Pull requests to fix functionality for these browsers are accepted only if they’re deemed to not add an inordinate amount of complexity or maintenance burden (subjective, reviewers’ judgment).

The following principles are assumptions I’m making that we currently follow and that we should continue to strive for, no matter what browser support policy we end up with:

  • Follow best practices for accessibility, fix bug reports from blind users, reach out to blind users in the community about how the accessibility of the web content could be improved.
    • This would include supporting lynx/links as these are sometimes used with screen readers.
  • Follow best practices for colorblindness, such as have information conveyed through color also conveyed through an icon or text.
  • Follow best practices for making content usable from mobile devices with a variety of screen sizes.
  • Render content without requiring JavaScript (especially on crates.io). Additional functionality beyond reading (ex: search, follow/unfollow crate) may require JavaScript, but we will attempt to use links and forms for progressive enhancement as much as possible.

Please comment if you think any of these should not be assumed, but rest assured it is not the intent of this RFC to get rid of these kinds of support.

Relevant data

CanIUse.com has some statistics on global usage of browsers and versions, but our audience (developers) isn’t the same as the general public.

Google analytics browser usage stats

We have Google Analytics on crates.io and on rust-lang.org. The entire data set of the usage stats by browser, browser version, and OS are available in this Google sheet for the visits to crates.io in the last month. I chose to use just crates.io because on initial analysis, the top 90% of visits to rust-lang.org were less varied than the top 90% of visits to crates.io.

This data does not include those users who block Google Analytics.

This is the top 80% aggregated by browser and major browser version:

Browser            Browser Version    Sessions   % of sessions   Cumulative %
Chrome             57                 18,040     34.85           34.85
Firefox            52                 8,136      15.72           50.56
Chrome             56                 7,302      14.11           64.67
Safari             10.1 (macOS)       1,592      3.08            67.74
Safari             10 (iOS)           1,437      2.78            70.52
Safari             10.0.3 (macOS)     851        1.64            72.16
Firefox            53                 767        1.48            73.65
Chrome             55                 717        1.39            75.03
Firefox            45                 693        1.34            76.37
UC Browser         11                 530        1.02            77.40
Chrome             58                 520        1.00            78.40
Safari (in-app)    (not set) (iOS)    500        0.97            79.37
Firefox            54                 472        0.91            80.28

Interesting to note: Firefox 45 is the latest ESR (Firefox 52 will also be an ESR but it was just released). Firefox 52 was the current major version for most of this past month; I’m guessing the early adopters of 53 and 54 are likely Mozilla employees.

What do other sites in our niche support?

  • GitHub - Current versions of Chrome, Firefox, Safari, Edge and IE 11. Best effort for Firefox ESR.
  • Discourse - Chrome 32+, Firefox 27+, Safari 6.1+, IE 11+, iPad 3+, iOS 8+, Android 4.3+ (doesn’t specify which browser on the devices, doesn’t look like they’ve updated these numbers in a while)

How We Teach This

We should call this “Rust Browser Support”, and we should have the tiers listed on the Rust Forge in a similar way to the tiers of Rust platforms supported.

We should link to the tiered browser support page from places where Rust web content is developed and on the Rust FAQ.

Drawbacks

We exclude some people who are unwilling or unable to use a modern browser.

Alternatives

We could adopt the tiers proposed above but with different browser versions.

We could adopt the browsers proposed above but with different levels of support.

Other alternatives:

Not have official browser support tiers (status quo)

By not creating official levels of browser support, we will continue to have the situation we have today: discussions and decisions are happening that affect the level of support that Rust web content has in various browsers, but we don’t have any agreed-upon guidelines to guide these discussions and decisions.

We continue to not test in multiple browsers, instead relying on bug reports from users. The people doing the work continue to decide on an ad-hoc basis whether a fix is worth making or not.

Support all browsers in all configurations

We could choose to attempt to support any version of any browser on any device, testing with as much as we can. We would still have to rely on bug reports and help from the community to test with some configurations, but we wouldn’t close any bug report or pull request due to the browser or version required to reproduce it.

Unresolved questions

  • Am I missing any official web content that this policy should apply to?
  • Is it possible to add browser tests to rustdoc or would that just make the current situation of long, flaky rustc builds worse?

Summary

Documentation is an important part of any project; it allows developers to explain how to use items within a library and to communicate the intended usage through examples. Rust has long championed this through documentation comments and rustdoc, which generates beautiful, easy-to-navigate documentation. However, there is currently no way to import documentation into the code from an external file. This RFC proposes a way to extend Rust with this ability.

Motivation

  1. Many smaller crates are able to do all of the documentation that’s needed in a README file within their repo. Being able to include this as a crate- or module-level doc comment would mean not having to duplicate documentation and would be easier to maintain. It would mean that one could run cargo doc with the small crate as a dependency and be able to access the contents of the README without needing to go online to the repo to read it. This would also help with this issue on crates.io by making it easy to have the README in the crate and the crate root at the same time.
  2. The feature would make code easier to read for library maintainers. Doc comments are sometimes quite long in terms of line count (items in libstd are a good example of this). Doc comments document the behavior of functions, structs, and types for the end user; they do not explain to a coder working on the library how things work internally. When actually writing code for a library, long doc comments end up cluttering the source code, making it harder to find the relevant lines to change or to skim through and read what is going on.
  3. Localization is something else that would further open up access to the community. By providing docs in different languages we could significantly expand our reach as a community and be more inclusive of those where English is not their first language. This would be made possible with a config flag choosing what file to import as a doc comment.
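
As a sketch of how point 3 might eventually look, a crate could select the file at compile time (the cfg_attr-based selection and the feature name are assumptions, not part of this RFC):

#![cfg_attr(feature = "docs-fr", doc(include = "../docs/fr/lib.md"))]
#![cfg_attr(not(feature = "docs-fr"), doc(include = "../docs/en/lib.md"))]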

These are just a few of the reasons to do this; the outcome of this feature is expected to be positive, with little to no downside for users.

Detailed Design

All files included through the attribute are specified by paths relative to the crate’s src directory. Given a file like this stored in docs/example.md:

# I'm an example
This is a markdown file that gets imported to Rust as a doc comment.

where the src directory is a sibling of docs. Given code like this:

#[doc(include = "../docs/example.md")]
fn my_func() {
  // Hidden implementation
}

It should expand to this at compile time:

#[doc("# I'm an example\nThis is a markdown file that gets imported to Rust as a doc comment.")]
fn my_func() {
  // Hidden implementation
}

Rustdoc should then be able to pick this up and use it for documentation.

If the code is written like this:

#![doc(include = "../docs/example.md")]
fn my_func() {
  // Hidden implementation
}

It should expand out to this at compile time:

#![doc("# I'm an example\nThis is a markdown file that gets imported to Rust as a doc comment.")]
fn my_func() {
  // Hidden implementation
}

In the case of this code:

mod example {
    #![doc(include = "../docs/example.md")]
    fn my_func() {
      // Hidden implementation
    }
}

It should expand out to:

mod example {
    #![doc("# I'm an example\nThis is a markdown file that gets imported to Rust as a doc comment.")]
    fn my_func() {
      // Hidden implementation
    }
}

Acceptable Paths

As you may have noticed, the given path ../docs/example.md is relative to src. This was decided upon as a good first implementation, and further RFCs could expand what syntax is acceptable for paths - for instance, paths not relative to src.

Missing Files or Incorrect Paths

If a file given to include is missing, this should trigger a compilation error: the given file was supposed to be included in the code, but for some reason or other it is not there.

Line Numbers When Errors Occur

As with all macros being expanded, this brings up the question of line numbers, especially for documentation tests. To keep things simple for the user, the documentation should be treated separately from the code. Since the attribute only needs to be expanded by rustdoc or cargo test, it should be ignored by the compiler except for keeping the proper lines for error messages.

For example if we have this:

#[doc(include = "../docs/example.md")] // Line 1
f my_func() {                          // Line 2
  // Hidden implementation             // Line 3
}                                      // Line 4

Then we would have a syntax error on line 2, even though the doc comment comes before it. In this case the compiler would ignore the attribute for expansion but still count its line, reporting the error on line 2 rather than on line 1 (which is where it would land if the attribute line were skipped entirely). This makes it easy for the user to spot their error. The same behavior should be observed in the case of inline tests and those in the tests directory.

If a documentation test fails, the reported line number should refer to the external doc file and the line where the test fails, rather than a line number from the code base itself. Reporting the line numbers the docs happen to occupy after insertion into the code would cause confusion and obfuscate where errors occur, making things harder rather than easier for end users; ergonomic overhead like this would render the feature useless.

How We Teach This

#[doc(include = "file_path")] is an extension of the current #[doc = "doc"] attribute by allowing documentation to exist outside of the source code. This isn’t entirely hard to grasp if one is familiar with attributes but if not then this syntax vs a /// or //! type of comment could cause confusion. By labeling the attribute as external_doc, having a clear path and type (either line or mod) then should, at the very least, provide context as to what’s going on and where to find this file for inclusion.

The acceptance of this proposal would minimally impact all levels of Rust users, as it provides convenience but is not necessary to learn in order to use Rust. It should be taught to existing users by updating documentation to show it in use, and to new users by covering it in The Rust Programming Language book. The newest version of the book has a section on doc comments that will need to be expanded to show how users can include docs from external sources. The comments section of The Rust Reference would need to be updated to include the new syntax as well.

Drawbacks

  • This might confuse or frustrate people reading the code directly who prefer doc comments to be inline with the code rather than in a separate file. It imposes an ergonomic burden: the reader must keep the context of the code in mind while reading its documentation separately from it.

Alternatives

There already exists a compiler plugin that could be used as a reference and that has shown there is interest in this feature. Its limitations are that it does not support module-level docs and that it leaves doc test failures unclear as to where they happened - both problems that could be solved with better support and intrinsics from the compiler.

This same idea could be implemented as a crate with procedural macros (which are on nightly now) so that others can opt in to this rather than having it be part of the language itself. Docs would remain as they always have and continue to work as-is if this alternative is chosen, though it would limit what rustc/rustdoc can achieve here when it comes to docs.

Unresolved questions

  • What would be best practices for adding docs to crates?

Summary

Allow types to be generic over constant values; among other things this will allow users to write impls which are abstract over all array types.

Motivation

Rust currently has one type which is parametric over constants: the built-in array type [T; LEN]. However, because const generics are not a first class feature, users cannot define their own types which are generic over constant values, and cannot implement traits for all arrays.

As a result of this limitation, the standard library only contains trait implementations for arrays up to a length of 32, and arrays are often treated as a second-class language feature. Even when the length of an array is statically known, it is more common to heap allocate it using a vector than to use an array type, despite the performance trade-offs.

Const parameters can also be used to allow users to more naturally specify variants of a generic type which are more accurately reflected as values, rather than types. For example, if a type takes a name as a parameter for configuration or other reasons, it may make more sense to take a &'static str than take a unit type which provides the name (through an associated const or function). This can simplify APIs.
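
A sketch of that contrast (whether &'static str is an allowed parameter type depends on the restrictions laid out later in this RFC):

// With const generics, the name is a value parameter of the type:
struct NamedQueue<const NAME: &'static str>;

// Today's workaround: a unit type carries the name via an associated const.
trait Named {
    const NAME: &'static str;
}

struct Production;

impl Named for Production {
    const NAME: &'static str = "production";
}

struct NamedQueueToday<N: Named> {
    _marker: std::marker::PhantomData<N>,
}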

Lastly, consts can be used as parameters to make certain values determined at typecheck time. By limiting which values a trait is implemented over, the orphan rules can enable a crate to ensure that only some safe values are used, with the check performed at compile time (this is especially relevant to cryptographic libraries for example).

Detailed design

Today, types in Rust can be parameterized by two kinds of parameters: types and lifetimes. We will additionally allow types to be parameterized by values, so long as those values can be computed at compile time. A const parameter must be of a single, particular type, and can be validly substituted with any value of that type which can be computed at compile time, so long as the type meets the equality requirements laid out later in this RFC.

(Exactly which expressions are evaluable at compile time is orthogonal to this RFC. For our purposes we assume that integers and their basic arithmetic operations can be computed at compile time, and we will use them in all examples.)

Glossary

  • Const (constant, const value): A Rust value which is guaranteed to be fully evaluated at compile time. Unlike statics, consts will be inlined at their use sites rather than existing in the data section of the compiled binary.

  • Const parameter (generic const): A const which a type or function is abstract over; this const is input to the concrete type of the item, such as the length parameter of a static array.

  • Associated const: A const associated with a trait, similar to an associated type. Unlike a const parameter, an associated const is determined by a type.

  • Const variable: Either a const parameter or an associated const, contrast with concrete const; a const which is undetermined in this context (prior to monomorphization).

  • Concrete const: In contrast to a const variable, a const which has a known and singular value in this context.

  • Const expression: An expression which evaluates to a const. This may be an identity expression or a more complex expression, so long as it can be evaluated by Rust’s const system.

  • Abstract const expression: A const expression which involves a const variable (and therefore the value that it evaluates to cannot be determined until after monomorphization).

  • Const projection: The value of an abstract const expression (which cannot be determined in a generic context because it is dependent on a const variable).

  • Identity expression: An expression which cannot be evaluated further except by substituting it with names in scope. This includes all literals as well as all idents - e.g. 3, "Hello, world", foo_bar.

Declaring a const parameter

In any sequence of type parameter declarations (such as in the definition of a type or in the impl header of an impl block), const parameters can also be declared. Const parameter declarations take the form const $ident: $ty:

struct RectangularArray<T, const WIDTH: usize, const HEIGHT: usize> {
    array: [[T; WIDTH]; HEIGHT],
}

The idents declared are the names used for these const parameters (interchangeably called “const variables” in this RFC text), and all values must be of the type ascribed to it. Which types can be ascribed to const parameters is restricted later in this RFC.

The const parameter is in scope for the entire body of the item (type, impl, function, method, etc) in which it is declared.

Applying a const as a parameter

Any const expression of the type ascribed to a const parameter can be applied as that parameter. When the applied expression is not an identity expression, it must be contained within a block (except in array types). This syntactic restriction is necessary to avoid requiring infinite lookahead when parsing an expression inside of a type.

const X: usize = 7;

let x: RectangularArray<i32, 2, 4>;
let y: RectangularArray<i32, X, {2 * 2}>;

Arrays

Arrays have a special construction syntax: [T; CONST]. In array syntax, braces are not needed around any const expressions; [i32; N * 2] is a syntactically valid type.

When a const variable can be used

A const variable can be used as a const in any of these contexts:

  1. As an applied const to any type which forms a part of the signature of the item in question: fn foo<const N: usize>(arr: [i32; N]).
  2. As part of a const expression used to define an associated const, or as a parameter to an associated type.
  3. As a value in any runtime expression in the body of any functions in the item.
  4. As a parameter to any type used in the body of any functions in the item, as in let x: [i32; N] or <[i32; N] as Foo>::bar().
  5. As a part of the type of any fields in the item (as in struct Foo<const N: usize>([i32; N]);).
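
A sketch combining several of these uses (written against the syntax proposed in this RFC, which is not yet implemented):

fn first_half<const N: usize>(arr: [i32; N]) -> Vec<i32> { // use 1: `N` in the signature
    let mid = N / 2;          // use 3: `N` as a runtime value
    let copy: [i32; N] = arr; // use 4: `N` as a type parameter in the body
    copy[..mid].to_vec()
}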

In general, a const variable can be used where a const can. There is one significant exception: const variables cannot be used in the construction of consts, statics, functions, or types inside a function body. That is, these are invalid:

fn foo<const X: usize>() {
    const Y: usize = X * 2;
    static Z: (usize, usize) = (X, X);

    struct Foo([i32; X]);
}

This restriction can be analogized to the restriction on using type variables in types constructed in the body of functions - all of these declarations, though private to this item, must be independent of it, and do not have any of its parameters in scope.

Theory of equality for type equality of two consts

During unification and the overlap check, it is essential to determine when two types are equivalent or not. Because types can now be dependent on consts, we must define how we will compare the equality of two constant expressions.

For most cases, the equality of two consts follows the same reasoning you would expect - two constant values are equal if they are equal to one another. But there are some particular caveats.

Structural equality

Const equality is determined according to the definition of structural equality defined in RFC 1445. Only types which have the “structural match” property can be used as const parameters. This would exclude floats, for example.

The structural match property is intended as a stopgap until a final solution for matching against consts has been arrived at. It is important for the purposes of type equality that whatever solution const parameters use will guarantee that the equality is reflexive, so that a type is always the same type as itself. (The standard definition of equality for floating point numbers is not reflexive.)

This may diverge someday from the definition used by match; it is not necessary that matching and const parameters use the same definition of equality, but the definition of equality used by match today is good enough for our purposes.

Because consts must have the structural match property, and this property cannot be enforced for a type variable, it is not possible to introduce a const parameter which is ascribed to a type variable (Foo<T, const N: T> is not valid).

Equality of two abstract const expressions

When comparing the equality of two abstract const expressions (that is, those that depend on a variable) we cannot compare the equality of their values because their values are determined by a const variable, the value of which is unknown prior to monomorphization.

For this reason we will (initially, at least) treat the return value of const expressions as projections - values determined by the input, but which are not themselves known. This is similar to how we treat associated types today. When comparing the evaluation of an abstract const expression - which we’ll call a const projection - to another const of the same type, its equality is always unknown.

Each const expression generates a new projection, which is inherently anonymous. It is not possible to unify two anonymous projections (imagine two associated types on a generic - T::Assoc and T::Item: you can’t prove or disprove that they are the same type). For this reason, const expressions do not unify with one another unless they are literally references to the same AST node. That means that one instance of N + 1 does not unify with another instance of N + 1 in a type.

To be clearer, this does not typecheck, because N + 1 appears in two different types:

fn foo<const N: usize>() -> [i32; N + 1] {
    let x: [i32; N + 1] = [0; N + 1];
    x
}

But this does, because it appears only once:

type Foo<const N: usize> = [i32; N + 1];

fn foo<const N: usize>() -> Foo<N> {
    let x: Foo<N> = Default::default();
    x
}

Future extensions

Someday we could introduce knowledge of the basic properties of some operations - such as the commutativity of addition and multiplication - to begin making smarter judgments about the equality of const projections. However, this RFC does not propose building any knowledge of that sort into the language; doing so would require a future RFC.

Specialization on const parameters

It is also necessary for specialization that const parameters have a defined ordering of specificity. For this purpose, literals are defined as more specific than other expressions, otherwise expressions have an indeterminate ordering.

Just as we could some day support more advanced notions of equality between const projections, we could some day support more advanced definitions of specificity. For example, given the type (i32, i32), we could determine that (0, PARAM2) is more specific than (PARAM1, PARAM2) - roughly the analog of understanding that (i32, U) is more specific than the type (T, U). We could also someday support intersectional and other more advanced definitions of specialization on constants.
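
A sketch of what this ordering enables (this assumes the unstable specialization feature in addition to const generics):

trait Describe {
    fn describe() -> &'static str;
}

impl<const N: usize> Describe for [i32; N] {
    // `default` marks this impl as specializable.
    default fn describe() -> &'static str { "an array of some length" }
}

// The literal `0` is more specific than the const variable `N`, so this
// impl specializes the one above.
impl Describe for [i32; 0] {
    fn describe() -> &'static str { "an empty array" }
}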

How We Teach This

Const generics is a large feature, and will require significant educational materials - it will need to be documented in both the book and the reference, and will probably need its own section in the book. Documenting const generics will be a big project in itself.

However, const generics should be treated as an advanced feature, and it should not be something we expose to new users early in their use of Rust.

Drawbacks

This feature adds a significant amount of complexity to the type system, allowing types to be determined by constants. It requires determining the rules around abstract const equality, which result in surprising edge cases. It adds a lot of syntax to the language. The language would certainly be simpler if we did not adopt this feature.

However, we have already introduced a type which is determined by a constant - the array type. Generalizing this feature seems natural and even inevitable given that early decision.
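
For example (a small sketch written with the proposed syntax; Grid is an invented name):

struct Grid<const W: usize, const H: usize>([[i32; W]; H]);

fn main() {
    // The array type is already a type determined by a constant:
    let xs: [i32; 4] = [0; 4];
    // Const generics let user-defined types do the same:
    let g: Grid<2, 3> = Grid([[0; 2]; 3]);
    let _ = (xs, g);
}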

Alternatives

There are not really alternatives other than not doing this, or staging it differently.

We could limit const generics to the type usize, but this would not make the implementation simpler.

We could move more quickly to more complex notions of equality between consts, but this would make the implementation more complex up front.

We could choose a slightly different syntax, such as separating consts from types with a semicolon.

Unresolved questions

  • Unification of abstract const expressions: This RFC performs the most minimal unification of abstract const expressions possible - it essentially doesn’t unify them. Possibly this will be an unacceptable UX for stabilization and we will want to perform some more advanced unification before we stabilize this feature.
  • Well formedness of const expressions: Types should be considered well formed only if they will not panic during monomorphization. This is tricky for overflow and out-of-bounds array access. However, we can only actually provide well-formedness constraints for expressions in the signature of functions; what to do about abstract const expressions appearing in function bodies with regard to well-formedness is currently unclear and is deferred to the implementation.
  • Ordering and default parameters: Do all const parameters come last, or can they be mixed with types? Do all parameters with defaults have to come after parameters without defaults? We delay this decision to implementation of the grammar.

Summary

Better ergonomics for pattern-matching on references.

Currently, matching on references requires a bit of a dance using ref and & patterns:

let x: &Option<_> = &Some(0);

match x {
    &Some(ref y) => { ... },
    &None => { ... },
}

// or using `*`:

match *x {
    Some(ref x) => { ... },
    None => { ... },
}

After this RFC, the above form still works, but now we also allow a simpler form:

let x: &Option<_> = &Some(0);

match x {
    Some(y) => { ... }, // `y` is a reference to `0`
    None => { ... },
}

This is accomplished through automatic dereferencing and the introduction of default binding modes.

Motivation

Rust is usually strict when distinguishing between value and reference types, in particular between borrowed and owned data. However, there is often a trade-off between explicitness and ergonomics, and Rust errs on the side of ergonomics in some carefully selected places - notably when using the dot operator to call methods and access fields, and when declaring closures.

The match expression is an extremely common expression and arguably the most important control flow mechanism in Rust. Borrowed data is probably the most common form in the language. However, using match expressions and borrowed data together can be frustrating: getting the correct combination of *, &, and ref to satisfy the type and borrow checkers is a common problem, and one which is often encountered early by Rust beginners. It is especially frustrating since it seems that the compiler can guess what is needed but gives you error messages instead of helping.

For example, consider the following program:

enum E { Foo(...), Bar }

fn f(e: &E) {
    match e { ... }
}

It is clear what we want to do here - we want to check which variant e is a reference to. Annoyingly, we have two valid choices:

match e {
    &E::Foo(...) => { ... }
    &E::Bar => { ... }
}

and

match *e {
    E::Foo(...) => { ... }
    E::Bar => { ... }
}

The former is more obvious, but requires noisier syntax (an & on every arm). The latter can appear a bit magical to newcomers - the type checker treats *e as a value, but the borrow checker treats the data as borrowed for the duration of the match. It also does not work with nested types: match (*e,) { ... }, for example, is not allowed.

In either case if we further bind variables, we must ensure that we do not attempt to move data, e.g.,

match *e {
    E::Foo(x) => { ... }
    E::Bar => { ... }
}

If the type of x does not have the Copy bound, then this will give a borrow check error. We must use the ref keyword to take a reference: E::Foo(ref x) (or &E::Foo(ref x) if we match e rather than *e).

The ref keyword is a pain for Rust beginners, and a bit of a wart for everyone else. It violates the rule of patterns matching declarations, it is not found anywhere outside of patterns, and it is often confused with &. (See for example, https://github.com/rust-lang/rust-by-example/issues/390).

Match expressions are an area where programmers often end up playing ‘type Tetris’: adding operators until the compiler stops complaining, without understanding the underlying issues. This serves little benefit - we can make match expressions much more ergonomic without sacrificing safety or readability.

Match ergonomics has been highlighted as an area for improvement in 2017: internals thread and Rustconf keynote.

Detailed design

This RFC is a refinement of the match ergonomics RFC. Rather than using auto-deref and auto-referencing, this RFC introduces default binding modes used when a reference value is matched by a non-reference pattern.

In other words, we allow auto-dereferencing values during pattern-matching. When an auto-dereference occurs, the compiler will automatically treat the inner bindings as ref or ref mut bindings.

Example:

let x = Some(3);
let y: &Option<i32> = &x;
match y {
  Some(a) => {
    // `y` is dereferenced, and `a` is bound like `ref a`.
  }
  None => {}
}

Note that this RFC applies to all instances of pattern-matching, not just match expressions:

struct Foo(i32);

let foo = Foo(6);
let foo_ref = &foo;
// `foo_ref` is dereferenced, and `x` is bound like `ref x`.
let Foo(x) = foo_ref;

Definitions

A reference pattern is any pattern which can match a reference without coercion. Reference patterns include bindings, wildcards (_), consts of reference types, and patterns beginning with & or &mut. All other patterns are non-reference patterns.

Default binding mode: this mode, either move, ref, or ref mut, is used to determine how to bind new pattern variables. When the compiler sees a variable binding not explicitly marked ref, ref mut, or mut, it uses the default binding mode to determine how the variable should be bound. Currently, the default binding mode is always move. Under this RFC, matching a reference with a non-reference pattern would shift the default binding mode to ref or ref mut.

Binding mode rules

The default binding mode starts out as move. When matching a pattern, the compiler starts from the outside of the pattern and works inwards. Each time a reference is matched using a non-reference pattern, it will automatically dereference the value and update the default binding mode:

  1. If the reference encountered is &val, set the default binding mode to ref.
  2. If the reference encountered is &mut val: if the current default binding mode is ref, it should remain ref. Otherwise, set the current binding mode to ref mut.

If the automatically dereferenced value is still a reference, it is dereferenced and this process repeats.

                        Start                                
                          |                                  
                          v                                  
                +-----------------------+                     
                | Default Binding Mode: |                     
                |        move           |                     
                +-----------------------+                     
               /                        \                     
Encountered   /                          \  Encountered       
  &mut val   /                            \     &val
            v                              v                  
+-----------------------+        +-----------------------+    
| Default Binding Mode: |        | Default Binding Mode: |    
|        ref mut        |        |        ref            |    
+-----------------------+        +-----------------------+    
                          ----->                              
                        Encountered                           
                            &val

Note that there is no exit from the ref binding mode. This is because an &mut inside of a & is still a shared reference, and thus cannot be used to mutate the underlying value.
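
For example (a small sketch of this rule):

let mut inner = Some(5);
let x = &&mut inner;
match x {
    // Two dereferences occur; the outer `&` sets the mode to `ref`, and
    // the inner `&mut` leaves it there, so `y: &i32` and cannot mutate.
    Some(y) => println!("{}", y),
    None => {}
}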

Also note that no transitions are taken when using an explicit ref or ref mut binding. The default binding mode only changes when matching a reference with a non-reference pattern.
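
Sketched concretely:

let x = &Some(5);
match x {
    // `Some(...)` shifts the mode to `ref`, and the explicit `ref` is then
    // used exactly as written: `y: &i32`, with no further transitions.
    Some(ref y) => println!("{}", y),
    // A bare binding is a reference pattern: no dereference happens, so
    // `z` is bound by `move` with type `&Option<i32>`.
    z => println!("{:?}", z),
}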

The above rules and the examples that follow are drawn from @nikomatsakis’s comment proposing this design.

Examples

No new behavior:

match &Some(3) {
    p => {
        // `p` is a variable binding. Hence, this is **not** a ref-defaulting
        // match, and `p` is bound with `move` semantics
        // (and has type `&Option<i32>`).
    },
}

One match arm with new behavior:

match &Some(3) {
    Some(p) => {
        // This pattern is not a `const` reference, `_`, or `&`-pattern,
        // so this is a "non-reference pattern."
        // We dereference the `&` and shift the
        // default binding mode to `ref`. `p` is read as `ref p` and given
        // type `&i32`.
    },
    x => {
        // In this arm, we are still in `move`-mode by default, so `x` has type
        // `&Option<i32>`
    },
}

// Desugared:
match &Some(3) {
  &Some(ref p) => {
    ...
  },
  x => {
    ...
  },
}

match with “or” (|) patterns:

let x = &Some((3, 3));
match x {
  // Here, each of the patterns are treated independently
  Some((x, 3)) | &Some((ref x, 5)) => { ... }
  _ => { ... }
}

// Desugared:
let x = &Some((3, 3));
match x {
  &Some((ref x, 3)) | &Some((ref x, 5)) => { ... }
  _ => { ... }
}

Multiple nested patterns with new and old behavior, respectively:

match (&Some(5), &Some(6)) {
    (Some(a), &Some(mut b)) => {
        // Here, the `a` will be `&i32`, because in the first half of the tuple
        // we hit a non-reference pattern and shift into `ref` mode.
        //
        // In the second half of the tuple there's no non-reference pattern,
        // so `b` will be `i32` (bound with `move` mode). Moreover, `b` is
        // mutable.
    },
    _ => { ... }
}

// Desugared:
match (&Some(5), &Some(6)) {
  (&Some(ref a), &Some(mut b)) => {
    ...
  },
  _  => { ... },
}

Example with multiple dereferences:

let x = (1, &Some(5));
let y = &Some(x);
match y {
  Some((a, Some(b))) => { ... }
  _ => { ... }
}

// Desugared:
let x = (1, &Some(5));
let y = &Some(x);
match y {
  &Some((ref a, &Some(ref b))) => { ... }
  _ => { ... }
}

Example with nested references:

let x = &Some(5);
let y = &x;
match y {
    Some(z) => { ... }
    _ => { ... }
}

// Desugared:
let x = &Some(5);
let y = &x;
match y {
    &&Some(ref z) => { ... }
    _ => { ... }
}

Example of new mutable reference behavior:

let mut x = Some(5);
match &mut x {
    Some(y) => {
        // `y` is an `&mut` reference here, equivalent to `ref mut` before
    },
    None => { ... },
}

// Desugared:
match &mut x {
  &mut Some(ref mut y) => {
    ...
  },
  &mut None => { ... },
}

Example using let:

struct Foo(i32);

// Note that these rules apply to any pattern matching
// whether it be in a `match` or a `let`.
// For example, `x` here is a `ref` binding:
let Foo(x) = &Foo(3);

// Desugared:
let &Foo(ref x) = &Foo(3);

Backwards compatibility

In order to guarantee backwards-compatibility, this proposal only modifies pattern-matching when a reference is matched with a non-reference pattern, which is an error today.

This reasoning requires that the compiler knows if the type being matched is a reference, which isn’t always true for inference variables. If the type being matched may or may not be a reference and it is being matched by a non-reference pattern, then the compiler will default to assuming that it is not a reference, in which case the binding mode will default to move and it will behave exactly as it does today.

Example:

let x = vec![];

match x[0] { // This will panic, but that doesn't matter for this example

    // When matching here, we don't know whether `x[0]` is `Option<_>` or
    // `&Option<_>`. `Some(y)` is a non-reference pattern, so we assume that
    // `x[0]` is not a reference
    Some(y) => {

        // Since we know `Vec::contains` takes `&T`, `x` must be of type
        // `Vec<Option<usize>>`. However, we couldn't have known that before
        // analyzing the match body.
        if x.contains(&Some(5)) {
            ...
        }
    }
    None => {}
}

How We Teach This

This RFC makes matching on references easier and less error-prone. The documentation for matching references should be updated to use the style outlined in this RFC. Eventually, documentation and error messages should be updated to phase-out ref and ref mut in favor of the new, simpler syntax.

Drawbacks

The major downside of this proposal is that it complicates the pattern-matching logic. However, doing so allows common cases to “just work”, making the beginner experience more straightforward and requiring fewer manual reference gymnastics.

Future Extensions

In the future, this RFC could be extended to add support for autodereferencing custom smart-pointer types using the Deref and DerefMut traits.

let x: Box<Option<i32>> = Box::new(Some(0));
match &x {
    Some(y) => { ... }, // y: &i32
    None => { ... },
}

This feature has been omitted from this RFC. A few of the details of this feature are unclear, especially when considering interactions with a future DerefMove trait or similar.

Nevertheless, a followup RFC should be able to backwards-compatibly add support for custom autodereferencable types.

Alternatives

  1. We could only infer ref, leaving users to manually specify the mut in ref mut bindings. This has the advantage of keeping mutability explicit. Unfortunately, it also has some unintuitive results: ref mut doesn't actually produce mutable bindings - it produces immutably-bound mutable references.
// Today's behavior:
let mut x = Some(5);
let mut z = 6;
if let Some(ref mut y) = *(&mut x) {
    // `y` here is actually an immutable binding.
    // `y` can be used to mutate the value of `x`, but `y` can't be rebound to
    // a new reference.
    y = &mut z; //~ ERROR: re-assignment of immutable variable `y`
}

// With this RFC's behavior:
let mut x = Some(5);
let mut z = 6;
if let Some(y) = &mut x {
    // The error is the same as above - `y` is an immutable binding.
    y = &mut z; //~ ERROR: re-assignment of immutable variable `y`
}

// If we modified this RFC to require explicit `mut` annotations:
let mut x = Some(5);
let mut z = 6;
if let Some(mut y) = &mut x {
    // The error is the same, but is now horribly confusing.
    // `y` is clearly labeled `mut`, but it can't be modified.
    y = &mut z; //~ ERROR: re-assignment of immutable variable `y`
}

Additionally, we don’t require mut when declaring immutable reference bindings today:

// Today's behavior:
let mut x = Some(5);
// `y` here isn't declared as `mut`, even though it can be used to mutate `x`.
let y = &mut x;
*y = None;

Forcing users to manually specify mut in reference bindings would be inconsistent with Rust’s current semantics, and would result in confusing errors.

  2. We could support auto-ref / deref as suggested in the original match ergonomics RFC. This approach has troublesome interactions with backwards-compatibility, and it becomes more difficult for the user to reason about whether they’ve borrowed or moved a value.
  3. We could allow writing move in patterns. Without this, move, unlike ref and ref mut, would always be implicit, leaving no way to override a default binding mode of ref or ref mut and move the value out from behind a reference. However, moving a value out from behind a shared or mutable reference is only possible for Copy types, so this would not be particularly useful in practice, and would add unnecessary complexity to the language.

Summary

This RFC introduces the #[non_exhaustive] attribute for enums and structs, which indicates that more variants/fields may be added to an enum/struct in the future.

Adding this hint to enums will force downstream crates to add a wildcard arm to match statements, ensuring that adding new variants is not a breaking change.

Adding this hint to structs or enum variants will prevent downstream crates from constructing or exhaustively matching, to ensure that adding new fields is not a breaking change.

This is a post-1.0 version of RFC 757, with some additions.

Motivation

Enums

The most common use for non-exhaustive enums is error types. Because adding features to a crate may result in different possibilities for errors, it makes sense that more types of errors will be added in the future.

For example, the rustdoc for std::io::ErrorKind shows:

pub enum ErrorKind {
    NotFound,
    PermissionDenied,
    ConnectionRefused,
    ConnectionReset,
    ConnectionAborted,
    NotConnected,
    AddrInUse,
    AddrNotAvailable,
    BrokenPipe,
    AlreadyExists,
    WouldBlock,
    InvalidInput,
    InvalidData,
    TimedOut,
    WriteZero,
    Interrupted,
    Other,
    UnexpectedEof,
    // some variants omitted
}

Because the standard library continues to grow, it makes sense to eventually add more error types. However, this can be a breaking change if we’re not careful; let’s say that a user does a match statement like this:

use std::io::ErrorKind::*;

match error_kind {
    NotFound => ...,
    PermissionDenied => ...,
    ConnectionRefused => ...,
    ConnectionReset => ...,
    ConnectionAborted => ...,
    NotConnected => ...,
    AddrInUse => ...,
    AddrNotAvailable => ...,
    BrokenPipe => ...,
    AlreadyExists => ...,
    WouldBlock => ...,
    InvalidInput => ...,
    InvalidData => ...,
    TimedOut => ...,
    WriteZero => ...,
    Interrupted => ...,
    Other => ...,
    UnexpectedEof => ...,
}

If we were to add another variant to this enum, this match would fail, requiring an additional arm to handle the extra case. But if we force users to add an arm like so:

match error_kind {
    // ...
    _ => ...,
}

Then we can add as many variants as we want without breaking any downstream matches.

How we do this today

We force users to add this arm for std::io::ErrorKind by adding a hidden variant:

#[unstable(feature = "io_error_internals",
           reason = "better expressed through extensible enums that this \
                     enum cannot be exhaustively matched against",
           issue = "0")]
#[doc(hidden)]
__Nonexhaustive,

Because this variant doesn’t show up in the docs, and doesn’t work in stable Rust, we can safely assume that users won’t use it.

A lot of crates take advantage of #[doc(hidden)] variants to tell users that they should add a wildcard branch to matches. However, the standard library takes this trick further by making the variant unstable, ensuring that it cannot be used in stable Rust. Outside the standard library, here’s a look at diesel::result::Error:

pub enum Error {
    InvalidCString(NulError),
    DatabaseError(String),
    NotFound,
    QueryBuilderError(Box<StdError+Send+Sync>),
    DeserializationError(Box<StdError+Send+Sync>),
    #[doc(hidden)]
    __Nonexhaustive,
}

Even though the variant is hidden in the rustdoc, there’s nothing actually stopping a user from using the __Nonexhaustive variant. This code works totally fine, for example:

use diesel::Error::*;
match error {
    InvalidCString(..) => ...,
    DatabaseError(..) => ...,
    NotFound => ...,
    QueryBuilderError(..) => ...,
    DeserializationError(..) => ...,
    __Nonexhaustive => ...,
}

This seems unintended, even though this is currently the best way to make non-exhaustive enums outside the standard library. In fact, even the standard library remarks that this is a hack. Recall the hidden variant for std::io::ErrorKind:

#[unstable(feature = "io_error_internals",
           reason = "better expressed through extensible enums that this \
                     enum cannot be exhaustively matched against",
           issue = "0")]
#[doc(hidden)]
__Nonexhaustive,

Using #[doc(hidden)] will forever feel like a hack to fix this problem. Additionally, while plenty of crates could benefit from the idea of non-exhaustiveness, plenty don’t, because this isn’t documented in the Rust book and is only documented elsewhere as a hack, pending a better solution.

Opportunity for optimisation

Currently, the #[doc(hidden)] hack leads to a few missed opportunities for optimisation. For example, take this enum:

pub enum Error {
    Message(String),
    Other,
}

Currently, this enum takes up the same amount of space as String because of the non-zero optimisation. If we add our non-exhaustive variant:

pub enum Error {
    Message(String),
    Other,
    #[doc(hidden)]
    __Nonexhaustive,
}

Then this enum needs an extra bit to distinguish Other and __Nonexhaustive, which is ultimately never used. This will likely add an extra 8 bytes on a 64-bit system to ensure alignment.
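
This can be checked directly with std::mem::size_of (a sketch; exact sizes depend on the compiler's layout decisions and the target, and the enum names are invented):

use std::mem::size_of;

#[allow(dead_code)]
enum Lean {
    Message(String),
    Other,
}

#[allow(dead_code)]
enum Padded {
    Message(String),
    Other,
    __Nonexhaustive,
}

fn main() {
    // `Other` fits into String's non-null niche: typically 24 bytes on a
    // 64-bit target, the same as `String` itself.
    println!("{}", size_of::<Lean>());
    // Two data-less variants no longer fit in that niche, so the enum
    // typically grows (e.g. to 32 bytes) to hold a separate discriminant.
    println!("{}", size_of::<Padded>());
}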

More importantly, take the following code:

use Error::*;
match error {
    Message(ref s) => /* lots of code */,
    Other => /* lots of code */,
    _ => /* lots of code */,
}

As humans, we can determine that the wildcard match is dead code that could be removed from the binary. Unfortunately, Rust can’t make this distinction, because we could still technically use that wildcard branch.

Although these optimisations are unlikely to matter in this example, because error-handling code (hopefully) shouldn’t run very often, they could matter for other use cases.

Structs

The most common use for non-exhaustive structs is config types. It often makes sense to make fields public for ease-of-use, although this can ultimately lead to breaking changes if we’re not careful.

For example, take this config struct:

pub struct Config {
    pub window_width: u16,
    pub window_height: u16,
}

As this configuration struct gets larger, it makes sense that more fields will be added. In the future, the crate may decide to add more public fields, or some private fields. For example, let’s assume we make the following addition:

pub struct Config {
    pub window_width: u16,
    pub window_height: u16,
    pub is_fullscreen: bool,
}

Now, code that constructs the struct, like below, will fail to compile:

let config = Config { window_width: 640, window_height: 480 };

And code that matches the struct, like below, will also fail to compile:

if let Ok(Config { window_width, window_height }) = load_config() {
    // ...
}

Adding this new setting is now a breaking change! To rectify this, we could always add a private field:

pub struct Config {
    pub window_width: u16,
    pub window_height: u16,
    pub is_fullscreen: bool,
    non_exhaustive: (),
}

But this makes it more difficult for the crate itself to construct Config, because you have to add a non_exhaustive: () field every time you make a new value.
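
For instance, every construction site inside the defining crate now has to spell out the dummy field:

let config = Config {
    window_width: 640,
    window_height: 480,
    is_fullscreen: false,
    non_exhaustive: (),
};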

Other kinds of structs

Because enum variants are kind of like a struct, any change we make to structs should apply to them too. Additionally, any change should apply to tuple structs as well.

Detailed design

An attribute #[non_exhaustive] is added to the language, which will (for now) fail to compile if it’s used on anything other than an enum or struct definition, or enum variant.

Enums

Within the crate that defines the enum, this attribute is essentially ignored, so that the current crate can continue to exhaustively match the enum. The justification for this is that any changes to the enum will likely result in more changes to the rest of the crate. Consider this example:

use std::error::Error as StdError;

#[non_exhaustive]
pub enum Error {
    Message(String),
    Other,
}
impl StdError for Error {
    fn description(&self) -> &str {
        match *self {
            Error::Message(ref s) => s,
            Error::Other => "other or unknown error",
        }
    }
}

It seems undesirable for the crate author to use a wildcard arm here: matching exhaustively ensures that an appropriate description is given for every variant. In fact, if they use a wildcard arm in addition to the existing variants, it should be identified as dead code, because it will never be run.

Outside the crate that defines the enum, users should be required to add a wildcard arm to ensure forward-compatibility, like so:

use mycrate::Error::*;

match error {
    Message(ref s) => ...,
    Other => ...,
    _ => ...,
}

And this wildcard arm should not be linted as dead code, even if the compiler can determine that it is currently unreachable and optimise it away.

Note that this can potentially cause breaking changes if a user adds #[deny(dead_code)] to a match statement and the upstream crate removes the #[non_exhaustive] attribute. That said, modifying warn-only lints is generally assumed to not be a breaking change, even though users can make it a breaking change by manually denying lints.

Structs

Like with enums, the attribute is essentially ignored in the crate that defines the struct, so that users can continue to construct values for the struct. However, this will prevent downstream users from constructing or exhaustively matching the struct, because fields may be added to the struct in the future.

Additionally, adding #[non_exhaustive] to an enum variant will operate exactly the same as if the variant were a struct.

Using our Config again:

#[non_exhaustive]
pub struct Config {
    pub window_width: u16,
    pub window_height: u16,
}

We can still construct our config within the defining crate like so:

let config = Config { window_width: 640, window_height: 480 };

And we can even exhaustively match on it, like so:

if let Ok(Config { window_width, window_height }) = load_config() {
    // ...
}

But users outside the crate won’t be able to construct their own values, because otherwise, adding extra fields would be a breaking change.

Users can still match on Configs non-exhaustively, as usual:

let &Config { window_width, window_height, .. } = config;

But without the .., this code will fail to compile.

Although the language should not explicitly forbid marking a struct that already has private fields as non-exhaustive, the compiler should emit a warning telling the user that the attribute has no effect.

Tuple structs

Non-exhaustive tuple structs will operate similarly to structs, however, will disallow matching directly. For example, take this example on stable today:

pub struct Config(pub u16, pub u16, ());

The below code does not work, because you can’t match tuple structs with private fields:

let Config(width, height, ..) = config;

However, this code does work:

let Config { 0: width, 1: height, .. } = config;

So, if we label a struct non-exhaustive:

#[non_exhaustive]
pub struct Config(pub u16, pub u16);

Then the only valid way of matching it will be:

let Config { 0: width, 1: height, .. } = config;

We can think of this as lowering the visibility of the constructor to pub(crate) if it is marked as pub, then applying the standard structure rules.

Unit structs

Unit structs will work very similarly to tuple structs. Consider this struct:

#[non_exhaustive]
pub struct Unit;

We won’t be able to construct any values of this struct, but we will be able to match it like:

let Unit { .. } = unit;

Similarly to tuple structs, this will simply lower the visibility of the constructor to pub(crate) if it were marked as pub.

Functional record updates

Functional record updates will operate very similarly to if the struct had an extra, private field. Take this example:

#[derive(Debug)]
#[non_exhaustive]
pub struct Config {
    pub width: u16,
    pub height: u16,
    pub fullscreen: bool,
}
impl Default for Config {
    fn default() -> Config {
        Config { width: 640, height: 480, fullscreen: false }
    }
}

We’d expect this code to work without the non_exhaustive attribute:

let c = Config { width: 1920, height: 1080, ..Config::default() };
println!("{:?}", c);

Although outside of the defining crate, it will not, because Config could, in the future, contain private fields that the user didn’t account for.

Changes to rustdoc

Right now, the only indicator that rustdoc gives for non-exhaustive enums and structs is a comment saying “some variants/fields omitted.” This shows up whenever variants or fields are marked as #[doc(hidden)], or when fields are private. rustdoc should continue to emit this message in these cases.

However, after this message (if any), it should offer an additional message saying “more variants/fields may be added in the future,” to clarify that the enum/struct is non-exhaustive. It also hints to the user that in the future, they may want to fine-tune any match code for enums to include future variants when they are added.

These two messages should be distinct; the former says “this enum/struct has stuff that you shouldn’t see,” while the latter says “this enum/struct is incomplete and may be extended in the future.”

How We Teach This

Changes to rustdoc should make it easier for users to understand the concept of non-exhaustive enums and structs in the wild.

In the chapter on enums, a section should be added specifically for non-exhaustive enums. Because error types are common in almost all crates, this case is important enough to be taught when a user learns Rust for the first time.

Additionally, non-exhaustive structs should be documented in an early chapter on structs. Public fields should be preferred over getter/setter methods in Rust, although users should be aware that adding extra fields is a potentially breaking change. In this chapter, users should be taught about non-exhaustive enum variants as well.

Drawbacks

  • The #[doc(hidden)] hack in practice is usually good enough.
  • An attribute may be more confusing than a dedicated syntax.
  • non_exhaustive may not be the clearest name.

Alternatives

  • Provide a dedicated syntax instead of an attribute. This would likely be done by adding a ... variant or field, as proposed by the original extensible enums RFC.
  • Allow creating private enum variants and/or private fields for enum variants, giving a less-hacky way to create a hidden variant/field.
  • Document the #[doc(hidden)] hack and make it more well-known.

Unresolved questions

It may make sense to have a “not exhaustive enough” lint for non-exhaustive enums and structs, so that users can be warned when a match omits specific fields or variants and relies on a wildcard arm instead.

Although this is beyond the scope of this particular RFC, it may be good as a clippy lint in the future.

Extending to traits

Tangentially, it may also make sense to have non-exhaustive traits, even though they’d be non-exhaustive in a different way. Take this example from byteorder:

pub trait ByteOrder: Clone + Copy + Debug + Default + Eq + Hash + Ord + PartialEq + PartialOrd {
   // ...
}

The ByteOrder trait requires these traits so that a user can simply write a bound of T: ByteOrder without having to add other useful traits, like Hash or Eq.

This trait is useful, but the crate has no intention of letting other users implement this trait themselves, because then adding an additional trait dependency for ByteOrder could be a breaking change.

The way that this crate solves this problem is by adding a hidden trait dependency:

mod private {
    pub trait Sealed {}
    impl Sealed for super::LittleEndian {}
    impl Sealed for super::BigEndian {}
}

pub trait ByteOrder: /* ... */ + private::Sealed {
    // ...
}

This way, although downstream crates can use this trait, they cannot actually implement things for this trait.

This pattern could again be solved by using #[non_exhaustive]:

#[non_exhaustive]
pub trait ByteOrder: /* ... */ {
    // ...
}

This would indicate to downstream crates that this trait might gain additional requirements (dependent traits or methods to implement), and as such, cannot be implemented downstream.

Summary

Make the assert! macro recognize more expressions (utilizing the power of procedural macros), and extend the readability of debug dumps.

Motivation

While clippy warns about assert! usage that should be replaced by assert_eq!, migrating existing code over is quite annoying.

Unit test frameworks like Catch for C++ already do this kind of message printing by using macros.

Detailed design

We’re going to parse the AST and break expressions up by operators (excluding . - the dot, or member access, operator). Function calls and bracket-surrounded blocks are treated as single blocks and don’t get expanded. The exact expansion rules should be determined during implementation, but examples are provided for reference.

On assertion failure, the expression itself is stringified, and another line with the intermediate values is printed out. The values should be printed with Debug, falling back to plain text when either of the following conditions holds:

  • the type doesn’t implement Debug.
  • the operator is non-comparison (those in std::ops) and the type (may also be a reference) doesn’t implement Copy.

To make sure that there are no side effects involved (e.g. running next() twice on an Iterator), each value should be stored in a temporary and dumped on assertion failure.
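
For reference, here is a hand-written sketch of roughly what an expansion of assert!(a == b) might produce; the identifiers and exact shape are invented, and the real rules are left to the implementation:

fn main() {
    let a = 1;
    let b = 2;
    // Capture each operand once so side effects are not repeated.
    let lhs = &a;
    let rhs = &b;
    if !(lhs == rhs) {
        panic!(
            "assertion failed:\nExpected: a == b\nWith expansion: {:?} == {:?}",
            lhs, rhs
        );
    }
}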

The new assert messages are likely to generate longer code, so the expansion may be simplified for release builds (if benchmarks confirm a slowdown).

Examples

These examples are purely for reference. The implementor is free to change the rules.

let a = 1;
let b = 2;
assert!(a == b);
thread '<main>' panicked at 'assertion failed:
Expected: a == b
With expansion: 1 == 2'

With addition operators:

let a = 1;
let b = 1;
let c = 3;
assert!(a + b == c);
thread '<main>' panicked at 'assertion failed:
Expected: a + b == c
With expansion: 1 + 1 == 3'

Bool only:

let v = vec![0u8;1];
assert!(v.is_empty());
thread '<main>' panicked at 'assertion failed:
Expected: v.is_empty()'

With short-circuit:

assert!(true && false && true);
thread '<main>' panicked at 'assertion failed:
Expected: true && false && true
With expansion: true && false && (not evaluated)'

With bracket blocks:

let a = 1;
let b = 1;
let c = 3;
assert!({a + b} == c);
thread '<main>' panicked at 'assertion failed:
Expected: {a + b} == c
With expansion: 2 == 3'

With fallback:

let a = NonDebug{};
let b = NonDebug{};
assert!(a == b);
thread '<main>' panicked at 'assertion failed:
Expected: a == b
With expansion: (a) == (b)'

How We Teach This

  • Port the documentation (and optionally compiler source) to use assert!.
  • Mark the old macros (assert_{eq,ne}!) as deprecated.

Drawbacks

  • This will generate a wave of deprecation warnings, which will impose some migration cost on users. However, this doesn’t mean that the change is backward-incompatible, as long as the deprecated macros aren’t removed.
  • This has a potential performance degradation on complex expressions, due to creating more temporaries on stack (or register). However, if this had clear impacts confirmed through benchmarks, we should use some kind of alternative implementation for release builds.

Alternatives

  • Defining via macro_rules! was considered, but the recursive macro can often reach the recursion limit.
  • Negating the operator (!= to ==) was considered, but this isn’t suitable for all cases, as not all types have a total ordering.

Unresolved questions

These questions should be settled during the implementation process.

Error messages

  • Should we dump the AST as a formatted one?
  • How are we going to handle multi-line expressions?

Operators

  • Should we handle non-comparison operators?

Summary

Enable “nested method calls” where the outer call is an &mut self borrow, such as vec.push(vec.len()) (where vec: Vec<usize>). This is done by extending MIR with the concept of a two-phase borrow; in this model, select &mut borrows are modified so that they begin with a “reservation” phase and can later be “activated” into a full mutable borrow. During the reservation phase, reads and shared borrows of the borrowed data are permitted (but not mutation), as long as they are confined to the reservation period. Once the mutable borrow is activated, it acts like an ordinary mutable borrow.

Two-phase borrows in this RFC are only used when desugaring method calls; this is intended as a conservative step. In the future, if desired, the scheme could be extended to other syntactic forms, or else subsumed as part of non-lexical lifetimes or some other generalization of the lifetime system.

Motivation

The overriding goal here is that we want to accept nested method calls where the outer call is an &mut self method, like vec.push(vec.len()). This is a common limitation that beginners stumble over and find confusing, and which experienced users face as a persistent annoyance. That makes it a natural target to eliminate as part of the 2017 Roadmap.

This problem has been extensively discussed on the internals discussion board (e.g., 1, 2), and a number of different approaches to solving it have been proposed. This RFC itself is intended to represent a “maximally minimal” approach, in the sense that it tries to avoid making larger changes to the set of Rust code that will be accepted, and instead focuses precisely on the method-call form. It is compatible with the various alternatives, and tries to leave room for future expansion in a variety of directions. See the Alternatives section for more details.

Why do we get an error in the first place?

You may wonder why this code isn’t accepted in the first place. To see why, consider what the (somewhat simplified) resulting MIR looks like:

/* 0 */ tmp0 = &'a mut vec;    // <-- mutable borrow starts here
/* 1 */ tmp1 = &'b vec;        // <-- shared borrow overlaps here
/* 2 */ tmp2 = Vec::len(tmp1);
/* 3 */ EndRegion('b);         // <-- shared borrow ends here
/* 4 */ Vec::push(tmp0, tmp2);
/* 5 */ EndRegion('a);         // <-- mutable borrow ends here

As you can see, we first take a mutable reference to vec for tmp0. This “locks” vec from being accessed in any other way until after the call to Vec::push(), but then we try to access it again when calling vec.len(). Hence the error.

(In this MIR, I’ve included the EndRegion annotations that the current MIR borrowck relies on. In most examples, I will elide them unless they are needed to make a point. Also, in the future, when we move to NLL, those statements will not be present, and regions will be inferred based solely on where the references are used, but the general idea remains the same.)

When you see the code desugared in that way, it should not surprise you that there is in fact a real danger here for code to crash if we just “turned off” this check (if we even could do such a thing). For example, consider this rather artificial Rust program:

let mut v: Vec<String> = vec![format!("Hello, ")];
let s: String = format!("foo");
v[0].push_str({ v.push(s); "World!" });
//              ^^^^^^^^^ sneaky attempt to mutate `v`

This last line, if desugared into MIR, looks something like this:

// First evaluate `v[0]` to get a `&mut String`:
tmp0 = &mut v;
tmp1 = IndexMut::index_mut(tmp0, 0);
tmp2 = tmp1;

// Next, evaluate `{ v.push(s); "World!" }` block:
tmp3 = &mut v;
tmp4 = s;
Vec::push(tmp3, tmp4);
tmp5 = "World!";

// Finally, invoke `push_str`:
String::push_str(tmp2, tmp5);

The danger here lies in the fact that we evaluate v[0] into a reference first, but this reference could well be invalidated by the call to Vec::push() that occurs later on (which may resize the vector and hence change the address of its elements). The Rust type system naturally prevents this, however, because the first line (tmp0 = &mut v) borrows v, and that borrow lasts until the final call to push_str().

In fact, even when the receiver is just a local variable (e.g., vec.push(vec.len())) we have to be wary. We wouldn’t want it to be possible to give ownership of the receiver away in one of the arguments: vec.push({ send_to_another_thread(vec); ... }). That should still be an error of course.

(Naturally, these complex arguments that are blocks look really artificial, but keep in mind that most of the time when this occurs in practice, the argument is a method or fn call, and that could in principle have arbitrary side-effects.)

Introducing reservations

This RFC proposes extending MIR with the concept of a two-phase borrow. These borrows are a variant of mutable borrows where the value starts out as reserved and only becomes mutably borrowed when the resulting reference is first used (which is called activating the borrow). During the reservation phase before a mutable borrow is activated, it acts exactly like a shared borrow – hence the borrowed value can still be read.

As discussed earlier, this RFC itself only introduces these two-phase borrows in a limited way. Specifically, we extend the MIR with a new kind of borrow (written mut2, for two-phase), and we generate those new kinds of borrows when lowering method calls.

To understand how two-phased borrows help, let’s revisit our two examples. We’ll start with the motivating example, vec.push(vec.len()). When this expression is desugared, the resulting reference is stored into a temporary, tmp0. Therefore, until tmp0 is referenced again, vec is only considered reserved:

/* 0 */ tmp0 = &mut2 vec;       // reservation of `vec` starts here
/* 1 */ tmp1 = &vec;
/* 2 */ tmp2 = Vec::len(tmp1);
/* 3 */ Vec::push(tmp0, tmp2); // first use of `tmp0`, upgrade is here

The first use of tmp0 is on line 3, and hence the mutable borrow begins then, and lasts until the end of the borrow region. Crucially, lines 1 and 2 (which did a shared borrow of vec) took place during the reservation period, and hence no error results. This is because a reservation is equivalent to a shared borrow, and multiple shared borrows are allowed.
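
In surface Rust, this is exactly why the motivating example becomes accepted:

let mut vec: Vec<usize> = vec![1, 2, 3];
// `vec` is only reserved while the argument `vec.len()` is evaluated,
// so the shared borrow inside the argument is allowed.
vec.push(vec.len());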

Next, let’s consider the sneaky example, where the argument attempts to mutate the vector that is being used in the receiver:

let mut v: Vec<String> = vec![format!("Hello, ")];
let s: String = format!("foo");
v[0].push_str({ v.push(s); "World!" });
//              ^^^^^^^^^ sneaky attempt to mutate `v`

In this case, if we examine the resulting MIR, we can see that the borrow of v is almost immediately used, as part of the IndexMut operation:

// First evaluate `v[0]` to get a `&mut String`:
tmp0 = &mut2 v;
tmp1 = IndexMut::index_mut(tmp0, 0); // tmp0 used here!
tmp2 = tmp1;

// Next, evaluate `{ v.push(s); "World!" }` block:
tmp3 = &mut2 v; // <-- Error! mutable borrow of `v` is active.
... // see above

This implies that the mutable borrow will be active later on, when v is borrowed again during the arguments, and hence an error is still reported.

Note that this same treatment will also rule out some “harmless” examples, such as this one:

v[0].push_str(&format!("{}", v.len()));

This might seem analogous to example 1, but in this case the mutable borrow of v is “activated” by the indexing, and hence v is considered mutably borrowed when v.len() is called, not reserved, which results in an error.
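
The usual workaround continues to apply: hoist the shared use into a temporary so that it finishes before the mutable borrow is activated. A sketch:

let mut v = vec![String::from("Hello")];

// Rejected: the indexing activates the mutable borrow of `v` before the
// argument expression runs.
// v[0].push_str(&format!("{}", v.len()));

// Accepted: the shared use of `v` completes before the mutable call begins.
let n = v.len();
v[0].push_str(&format!(", world ({})", n));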

Detailed design

New MIR form for two-phase borrows

Currently, the MIR rvalue for borrows has one of three forms (these are internal syntax only, naturally, since MIR doesn’t have a defined written representation)

&'a <lvalue>
&'a mut <lvalue>
&'a unique <lvalue>

In each case, the rvalue returns a reference with lifetime 'a that refers to the address of lvalue (an lvalue is a path that leads to memory). This can be either a shared, mutable, or unique reference (unique references are an internal concept that appears only in MIR; they are used when desugaring closures, but there is no direct equivalent in Rust surface syntax).

This RFC proposes adding a fourth form: &'a mut2 <lvalue>. Like &unique borrows, this would be used by the compiler when desugaring and would not have a direct user representation for the time being. For most purposes, an &mut2 borrow would act precisely the same as an &mut borrow; the borrow checker, however, would treat it differently, as described below.

When are two-phase borrows used

Two-phase borrows would be used in the specific case of desugaring a call to an &mut self method. Currently, in the initially generated MIR, calls to such methods always have an "auto-mut-ref" inserted (this is because vec.push(), where vec: &mut Vec<i32>, is considered a borrow of vec, not a move). This "auto-mut-ref" will be changed from an &mut to an &mut2.

Integrating reserved borrows into the borrow checker

Existing MIR borrowck algorithm

The proposed fix for this problem is described in terms of a MIR-based borrowck (which is coming soon). The basic structure of the existing borrow checker, transposed onto MIR, is as follows:

  • Every borrow in MIR always has the same form:
    • lv1 = &'r lv2 or lv1 = &'r mut lv2, where:
      • lv1 and lv2 are MIR lvalues (path naming a memory location)
      • 'r is the duration of the borrow
  • Let each borrow be named by its position P, which has the form BB/n, where BB is the basic block containing the borrow statement and n is the index within that basic block.
  • The borrow at position P is then considered live for all points reachable from P without passing through the end of the region 'r.
    • The full set of borrows live at a given point can be readily computed using a standard data-flow analysis.
  • For each write to an lvalue lv_w at point P:
    • A write is either a mutable borrow &mut lv_w or an assignment lv_w = ...
    • It is an error if there is any borrow (mutable or shared) of some path lv_b that is live at P where lv_b may overlap lv_w
  • For each read from an lvalue lv_r at point P:
    • A read is any use of lv_r as an operand.
    • It is an error if there is any mutable borrow of some path lv_b that is live at P where lv_b may overlap lv_r

Proposed change

When the borrow checker encounters a mut2 borrow, it will handle it in a slightly different way. Because of the limited places where mut2 borrows are generated, we know that they will only ever be encountered in a statement that assigns them to a MIR temporary:

tmp = &'r mut2 lv

In that case, the path lv would initially be considered reserved. The temporary tmp will only be used once, as an argument to the actual call: at that point, the path lv will be considered mutably borrowed.

In terms of the safety checks, a reservation acts just as a shared borrow does. Therefore, a write to lv at point P is illegal if there is any active borrow or in-scope reservation of lv at P. Similarly, a read from lv at point P is legal when there is a reservation of lv in scope (but not when there is an active mutable borrow).

There is one new check required. At the point Q where a mutable borrow is activated, we must check that there are no active borrows or reservations in scope (other than the reservation being upgraded). Otherwise, a test such as this might pass:

fn foo<'a>(x: &'a Vec<i32>) -> &'a i32 { &x[0] }

let mut v = vec![0, 1, 2];
let p;
v.push({p = foo(&v); 3});
use(*p);

When desugared into MIR, this would look something like:

tmp0 = &'a mut2 v;   // reservation begins
tmp1 = &'b v;       // shared borrow begins; allowed, because `v` is reserved
p = foo(tmp1);
Vec::push(tmp0, 3); // mutable borrow activated
EndRegion('a);      // mutable borrow ends
tmp2 = *p;          // shared borrow still valid!
use(tmp2);
EndRegion('b);

Note that, here, we created a borrow of v[0] before we called Vec::push(), and we continue to use it afterwards. This should not be accepted, but it could be without this additional check at the activation point. In particular, at the time that the shared borrow starts, v is reserved; the mutable borrow of v is activated later, but still within the scope of the shared borrow. (In today’s borrow checker, this cannot happen, so we only check at the start of a borrow whether other borrows are in scope.)

How We Teach This

For the most part, because this change is so targeted, it seems that discussion of how it works is out of scope for introductory texts such as The Rust Programming Language or Rust By Example. In particular, the idea simply makes code that seems intuitively like it should work (e.g., vec.push(vec.len())) work.

However, there are a few related topics which likely might make sense to cover at some point in works like this:

  • People will likely first encounter surprises when they attempt more complicated method calls that are not covered by this proposal, such as the v[0].push_str(&format!("{}", v.len())); example. In that case, a simple desugaring can be used to show why the compiler rejects this code – in particular, a comparison with the erroneous examples may be helpful. A keen observer may note the contrast with vec.push(vec.len()), but such an observer can be referred to the reference. =)

  • One interesting point that came up in discussing this example is that many people expect that vec.push(vec.len()) would be desugared as follows:

    let tmp = vec.len();
    vec.push(tmp)
    

    In particular, note that vec, in this desugaring, is not assigned to a temporary. This is in fact not how the language works (as discussed in more detail under the Alternatives section); instead, vec is treated like any other argument. It is evaluated to a temporary, and autorefs etc are applied. It may be worth covering this sort of example when doing an in-depth explanation of how method desugaring works.

Coverage of these rules seems most appropriate for the Rust reference, as part of detailed general coverage on how MIR desugaring and the borrow checker work. At the moment, no such coverage exists, but this would be a logical part of it. In that context, explaining it in a similar fashion to how the RFC presents the change seems appropriate.

Drawbacks

The obvious downside of this proposal is that it is narrowly targeted at the method call form. This means that “manual desugarings” of method calls will not necessarily work, particularly if the user faithfully follows what the compiler does. There are a number of reasons to think this will not be a very big deal in practice:

  • There is rarely a desire to do manual desugaring of method calls anyway.
  • In practice, when a desugaring is needed, people have a lot of latitude to adjust the ordering of statements and so forth, and hence they can achieve the effect that they need (in fact, every time that you are forced to rewrite an instance of the vec.push(vec.len()) pattern to save vec.len() into a temporary, you are doing a partial desugaring of this kind).
  • Truly faithful desugarings are rare in any case. As discussed in the How We Teach This section, many people overlook the role of autoref and the precise evaluation order. Fewer still will get the precise lifetimes of temporaries or other details correct. This is not a big deal.

Nonetheless, this change slightly widens the gap between the surface language and the underlying “desugared” view that MIR takes, and in general that is to be avoided. The Alternatives section discusses some possible future extensions that could be used to remove that gap.

Alternatives

As discussed earlier, a number of major alternative designs have been put forward to address nested method calls. This proposal is intended to be forwards compatible with all of them, but to adopt none of them in particular. We cover now each alternative and explain why we did not want to adopt it in this RFC.

Modifying the desugaring to evaluate receiver after arguments

One option is to modify the desugaring for method calls. Currently, a call like a.foo(b..z) is always desugared into something like:

  • process a and apply any autoref etc, resulting in tmp0
  • evaluate b..z to a temporary, resulting in tmp1..tmpN
  • invoke foo(tmp0..tmpN)

However, we could say that, under some set of circumstances, we will evaluate a later:

  • evaluate b..z to a temporary, resulting in tmp1..tmpN
  • process a and apply any autoref etc, resulting in tmp0
  • invoke foo(tmp0..tmpN)

Due to backwards compatibility constraints, there are some limits to how often we could do this reordering. For example, we clearly cannot change the desugaring of complex, side-effecting expressions like a().foo(b()). In fact, even simple expressions like a.foo(b) might be a breaking change, if the method is declared as fn(self):

trait Foo {
  fn foo(self, a: ()) -> Self;
}

impl Foo for i32 {
  fn foo(self, a: ()) -> Self {
    self
  }
}

let mut a = 3;
let b = a.foo({ a += 1; () }); // returns 3

In effect, the goal would be to come up with some rules that limit the cases under consideration to cases that would currently result in an error. One proposed set of rules might be:

  • the invoked method foo() is an &mut self method
  • the receiver is simply a reference to a local variable a

This would cause, for example, vec.push(vec.len()) to use the new ordering, and hence to be accepted. However, v[0].push(...) would not use the new ordering.

This option strikes many as being simpler than the one proposed here. It is perhaps simpler to explain, especially, since it doesn’t introduce any new concepts – the borrow checker works as it ever did, and we already have to do desugaring somehow, we’re just doing it differently in this case. And in particular we’re only affecting cases where autoref – a non-trivial desugaring – applies.

However, this option can also result in some surprises of its own. For example, consider a twist on the previous example, where the method foo is declared as &mut self instead:

trait Foo {
  fn foo(&mut self, a: ()) -> Self;
}

impl Foo for i32 {
  fn foo(&mut self, a: ()) -> Self {
    *self
  }
}

let mut a = &mut 3;
let b = a.foo({ a = &mut 4; () }); // returns 4

Currently, this code will not compile. Under the proposal, however, it would compile, because (1) the method is &mut self and (2) the receiver is a simple variable reference a. Interestingly, now that we changed the method to &mut self, we can suddenly see the side-effects of evaluating the argument.

On balance, it seems better to this author to have the borrow checker analysis be more complex than the desugaring and execution order.

Permit more things during the “restricted” period

The current notion of a ‘restricted’ borrow is identical to a shared borrow. However, we could in principle permit more things during the restricted period – basically we could permit anything that does not invalidate the reference we created. In that case, we might fruitfully enable two-phased borrows for shared references as well. In practice, this means that we could permit writes to the borrowed content (which are forbidden by this proposal). An example of code that would work as a result is the following:

// pretend you could define an inherent method on integers
// for a second, just to keep code snippet simple
impl i32 {
    fn increment(&mut self, v: i32) -> i32 {
        *self += v;
        *self // returns new value
    }
}
                                            
fn foo() {
    let mut x = 0;
    let y = x.increment(x.increment(1)); // what result do you expect from this?
    println!("{}", y);
}

The call to x.increment(x.increment(1)) would thus desugar to the following MIR:

tmp0 = &mut2 x;
tmp1 = &mut2 x;
tmp2 = 1;
tmp3 = i32::increment(tmp1, tmp2); // activates tmp1
i32::increment(tmp0, tmp3); // activates tmp0

Under the existing proposal, this is illegal, because x is considered “reserved” when tmp1 is created, and an &mut2 borrow is not permitted when the lvalue being borrowed has been reserved. If we made restrictions more permissive, we might accept this code; it would output 2.

We opted against this variation for several reasons:

  • It makes the borrow checker more complex by introducing not only two-phase borrows, but a new set of restrictions that must be worked out in detail. The current RFC leverages the existing category of shared borrows.
  • The main gain here is the ability to intersperse two mutable calls (as in the example), or to have an outer shared borrow with an inner mutable borrow. In general, this implies that there is some careful ordering of mutation going on here: in particular, the outer method call will observe the state changes made by the inner calls. This feels like a case where it is helpful to have the user pull the two calls apart, so that their relative side-effects are clearly visible.

Of course, it would be possible to loosen the rules in the future.

A broader use of two-phase borrows

The initial proposal for two-phased borrows (made in [this blog post][]) was more expansive. In particular, it aimed to convert all mutable borrows into two-phase borrows at the MIR level. Given the way that MIR is generated, this meant that users would be able to observe these two phases in some cases. For example, the following code would have type-checked, whereas it would not today or under this RFC:

let tmp0 = &mut vec;   // `vec` is reserved
let tmp1 = vec.len();  // shared borrow of vec; ok
Vec::push(tmp0, tmp1); // mutable borrow of `vec` is activated

The aim here was specifically to support the desugared form of a method call.

The current RFC backs down from this more aggressive posture. Treating all mutable borrows as potentially deferred would make them something that everyday users would encounter, and we didn’t feel satisfied with the “mental model” that resulted. In particular, because of how MIR is generated, deferred borrows would be almost immediately activated in most scenarios. They would only work when a borrow was immediately assigned into a variable as part of a let declaration. This means, for example, that these two bits of code would have been treated differently:

let x = &mut vec; // reserved

// versus:

let x;
x = &mut vec; // immediately activated

The reason for this distinction cannot be explained except by examining the desugarings into MIR; if you do so, you will see that the second case introduces an intermediate temporary:

tmp0 = &mut vec; // reservation starts
x = tmp0; // borrow is activated

The root of the problem is that the current RFC is proposing an analysis that is not done on types but rather on MIR variables and points in the control-flow graph. This means that (for example) whether a borrow is activated is affected by “no-ops” like let x = y (which would be considered a use of y).

Therefore, introducing two-phased borrows outside of method-call desugaring form doesn’t feel like the right approach. (But, if they are limited to method-call desugaring, as this RFC proposes, then they are a simple and effective mechanism without broader impact.)

Borrowing for the future

One of the initial proposals for how to think about nested method calls was in terms of “borrowing for the future”. Currently, whenever you have a borrow, the resulting reference is “immediately usable”. That is, the lifetime of the reference must include the point of the borrow. Borrowing for the future proposes to loosen that rule, allowing a borrow to result in a reference that can’t be immediately used, but can only be used at some future point. In the meantime, the path that was borrowed must be considered to be reserved (in roughly the same sense as this RFC uses it), in order to ensure that the reference is not invalidated.

To see how this might work, consider the naively desugared version of vec.push(vec.len()), but with explicit labels for the lifetime of every little part (and also for the lifetime of a borrow):

'call: {
 let v: &'invoke mut Vec<usize>;
 let l: usize;
 'eval_args: {
   'eval_v: { v = &'invoke mut vec; }
   'eval_l: { l = Vec::len(v); }
 }
 'invoke: { Vec::push(v, l); }
}

Here you can see that the borrow v = &'invoke mut vec is borrowing vec for a lifetime ('invoke) that has not yet started – but which will start in the future. This is basically saying, “make a reference that we will give to this function, but we won’t use in the meantime”.

Since the reference v is not in active use yet, we can use looser restrictions. We still need to consider the path vec to be “reserved”, so that v doesn’t get invalidated. The idea is that we are evaluating the path to a pointer right then and there, so we need to be sure that this pointer remains valid. We wouldn’t want people to send vec to another thread or something.

It seems plausible that these rules could be integrated into the notion of non-lexical lifetimes. At present, the non-lexical lifetimes proposal still includes the rule that borrows must be immediately active (in particular, at each point P where a variable is live, all of the regions in its type must include P). But this could be changed to a rule that says that the regions must either include P or be a future region of the kind shown here. Clearly, the details will need to be worked out, but this would then present a more cohesive model that we could teach to users (in short, when you make a reference, the span of the code where the reference is in active use is restricted, and the code leading up to that span treats the value as having been shared).

Ref2

In the internals thread, arielb1 had [an interesting proposal][ref2] that they called “two-phase lifetimes”. The goal was precisely to take the “two-phase” concept but incorporate it into lifetime inference, rather than handling it in borrow checking as I present here. The idea was to define a type RefMut<'r, 'w, T> (originally Ref2<'immut, 'mutbl, T>) which stands in for a kind of “richer” &mut type (originally, &T was unified as well, but that introduces complications because &T types are Copy, so I’m leaving that out). In particular, RefMut has two lifetimes, not just one:

  • 'r is the “read” lifetime. It includes every point where the reference may later be used.
  • 'w is a subset of 'r (that is, 'r: 'w) which indicates the “write” lifetime. This includes those points where the reference is actively being written.

We can then conservatively translate a &'a mut T type into RefMut<'a, 'a, T> – that is, we can use 'a for both of the two lifetimes. This is what we would do for any &mut type that appears in a struct declaration or fn interface. But for &mut T types within a fn body, we can infer the two lifetimes somewhat separately: the 'r lifetime is computed just as I described in my NLL post. But the 'w lifetime only needs to include those points where a write occurs. The borrow check would then guarantee that the 'w regions of every &mut borrow are disjoint from the 'r regions of every other borrow (and from shared borrows).

This proposal has a lot of potential applications, but each of them introduces some complications, and would require significant further thought. Let’s cover them in more detail.

Discontinuous borrows

This proposal accepts more programs than the one I outlined. In particular, it accepts the example with interleaved reads and writes that we saw earlier. Let me give that example again, but annotating the regions more explicitly:

/* 0 */ let mut i = 0;
/* 1 */ let p: RefMut<{2-5}, {3,5}, i32> = &mut i;
//                    ^^^^^  ^^^^^
//                     'r     'w
/* 2 */ let j = i;  // just in 'r
/* 3 */ *p += 1;    // must be in 'w
/* 4 */ let k = i;  // just in 'r
/* 5 */ *p += 1;    // must be in 'w

As you can see here, we would infer the write region to be just the two points 3 and 5. This is precisely those portions of the CFG where writes are happening – and not the gaps in between, where reads are permitted.

As you might have surmised, these sorts of “discontinuous” borrows represent a kind of “step up” in the complexity of the system. If it were vital to accept examples with interleaved writes like the previous one, then this wouldn’t bother me (NLL also represents such a step, for example, but it seems clearly worth it). But given that the example is artificial and not a pattern I have ever seen arise in “real life”, it seems like we should try to avoid growing the underlying complexity of the system if we can.

To see what I mean about a “step up” in complexity, consider how we would integrate this proposal into lifetime inference. The current rules treat all regions equally, but this proposal seems to imply that regions have “roles”. For example, the 'r region captures the “liveness” constraints that I described in the original NLL proposal. Meanwhile the 'w region captures “activity”.

(Since we would always convert a &'a mut T type into RefMut<'a, 'a, T>, all regions in struct parameters would adopt the more conservative “liveness” role to start. This is good because we wouldn’t want to start allowing “holes” in the lifetimes that unsafe code is relying on to prevent access from the outside. It would however be possible for type inference to use a RefMut<'r, 'w, T> type as the value for a type parameter; I don’t yet see a way for that to cause any surprises, but perhaps it can if you consider specialization and other non-parametric features.)

Another example of where this “complexity step” surfaces came from Ralf Jung. As you may know, Ralf is working on a formalization of Rust as part of the RustBelt project (if you’re interested, there is video available of a great introduction to this work which Ralf gave at the Rust Paris meetup). In any case, their model is a kind of generalization of Rust, in that it can accept a lot of programs that standard Rust cannot (it is intended to be used for assigning types to unsafe code as well as safe code). The two-phase borrow proposal that I describe here should be able to fit into that system in a fairly straightforward way. But if we adopted discontinuous regions, that would require making Ralf’s system more expressive. This is not necessarily an argument against doing it, but it does show that it makes the Rust system qualitatively more complex to reason about.

If all this talk of “steps in complexity” seems abstract, I think that the most immediate way it will surface is when we try to teach. Supporting discontinuous borrows just makes it that much harder to craft small examples that show how borrowing works. It will make the system feel more mysterious, since the underlying rules are indeed more complex and thus harder to “intuit” on your own. Getting these details right is a significant design challenge outside the scope of this RFC.

Downgrading mutable to shared

Another goal of the proposal was to (perhaps someday) support the “downgrade-mut-to-shared” pattern, in which a function takes in a mutable reference but returns a shared reference:

fn get_something(&mut self) -> &T {
    self.data = ...;
    &self.data
}    

In the case of this function, we do indeed require a mutable borrow of self to start – since we update self.data – but once get_something() returns, a simple shared borrow would suffice (as is the case for the pseudo-code above). It is conceivable that such a scenario could be handled by giving &mut self a “write” lifetime that is confined to the call itself, but a bigger “read” lifetime.

However, there are other cases (that exist in active use today) of functions that take an &mut self and return an &T where it would not be safe to treat self as shared after the function returns. For example, one could easily wrap the existing Mutex::get_mut function to have a signature like this; get_mut() works by taking an &mut reference and giving access to the interior of the mutex without locking it. This is only possible because get_mut() can assume that self will remain mutably borrowed until you are done using that data. See this post on the internals thread for more details.
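
To make the danger concrete, here is a sketch of such a wrapper (the function name peek is hypothetical, but Mutex::get_mut is the real standard library API):

use std::sync::Mutex;

// Same `&mut self -> &T` shape as `get_something` above. `get_mut`
// hands out the interior of the mutex *without locking*, which is
// only sound because `mutex` stays mutably borrowed for as long as
// the returned reference lives.
fn peek<T>(mutex: &mut Mutex<T>) -> &T {
    mutex.get_mut().unwrap()
}

If the mutable borrow of mutex were downgraded to a shared borrow once peek returned, another alias could lock the mutex and mutate its contents while the returned &T was still live.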

Therefore, it seems that some form of user annotation would be required to enable this pattern. This implies that the two lifetimes of the Ref2 type would have to be exposed to end-users, or other annotations are needed. Just as with discontinuous borrows, designing such a system is a significant design challenge outside the scope of this RFC.

Unresolved questions

None as yet.


    Summary

    Tweak the object safety rules to allow using trait object types for static dispatch, even when the trait would not be safe to instantiate as an object.

    Motivation

    Because Rust features a very expressive type system, users often use the type system to express high level constraints which can be resolved at compile time, even when the types involved are never actually instantiated with values.

    One common example of this is the use of “zero-sized types,” or types which contain no data. By statically dispatching over zero sized types, different kinds of conditional or polymorphic behavior can be implemented purely at compile time.
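
    For example, one might select comparison behavior at compile time by dispatching on zero-sized marker types (the names here are illustrative, not from the RFC):

    use std::cmp::Ordering;

    // Zero-sized marker types carrying no data:
    struct Ascending;
    struct Descending;

    trait Order {
        fn cmp(a: u32, b: u32) -> Ordering;
    }

    impl Order for Ascending {
        fn cmp(a: u32, b: u32) -> Ordering { a.cmp(&b) }
    }

    impl Order for Descending {
        fn cmp(a: u32, b: u32) -> Ordering { b.cmp(&a) }
    }

    // The choice of `O` is resolved entirely at compile time;
    // no value of type `O` is ever constructed.
    fn sort_with<O: Order>(v: &mut [u32]) {
        v.sort_by(|a, b| O::cmp(*a, *b));
    }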

    Another interesting case is the use of implementations on the dynamically dispatched trait object types. Sometimes, it can be sensible to statically dispatch different behaviors based on the name of a trait; this can be done today by implementing traits (with only static methods) on the trait object type:

    trait Foo {
        fn foo() { }
    }
    
    trait Bar { }
    
    // Implemented for the trait object type
    impl Foo for Bar { }
    
    fn main() {
        // Never actually instantiate a trait object:
        Bar::foo()
    }

    However, this can only be implemented if the trait being used as the receiver is object safe. Because this behavior is entirely dispatched statically, and a trait object is never instantiated, this restriction is not necessary. Object safety only matters when you actually create a dynamically dispatched trait object at runtime.

    This RFC proposes to lift that restriction, allowing trait object types to be used for static dispatch even when the trait is not object safe.

    Detailed design

    Today, the rules for object safety work like this:

    • If the trait (e.g. Foo) is object safe:
      • The object type for the trait is a valid type.
      • The object type for the trait implements the trait; Foo: Foo holds.
      • Implementations of the trait can be cast to the object type; T as Foo is valid.
    • If the trait (e.g. Foo) is not object safe:
      • Any attempt to use the object type for the trait is considered invalid.

    After this RFC, we will change the non-object-safe case to directly mirror the object-safe case. The new rules will be:

    • If the trait (e.g. Foo) is not object safe:
      • The object type for the trait does not implement the trait; Foo: Foo does not hold.
      • Implementations of the trait cannot be cast to the object type; T as Foo is not valid.
      • However, the object type is still a valid type. It just does not meet the self-trait bound, and it cannot be instantiated in safe Rust.

    This change to the rules will allow trait object types to be used for static dispatch.
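
    For example, under the new rules code along these lines would be accepted (illustrative, not from the RFC text):

    trait NotObjectSafe {
        // Returning `Self` makes this trait not object safe.
        fn create() -> Self;
    }

    trait Name {
        fn name() -> &'static str;
    }

    // The object type for `NotObjectSafe` is now a valid type, so we
    // can implement another trait for it and dispatch statically:
    impl Name for NotObjectSafe {
        fn name() -> &'static str { "NotObjectSafe" }
    }

    fn main() {
        // No trait object is ever instantiated:
        println!("{}", <NotObjectSafe as Name>::name());
    }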

    How We Teach This

    This is just a slight tweak to how object safety is implemented. We will need to make sure that the official documentation is accurate to the rules, especially the reference.

    However, this does not need to be highlighted to users per se in the explanation of object safety. This tweak will only impact advanced uses of the trait system.

    Drawbacks

    This is a change to an existing system; it’s always possible it could cause regressions, though the RFC authors are unaware of any.

    Arguably, the rules become more nuanced (though they also become a more direct mirror).

    This would allow instantiating object types for non-object safe traits in unsafe code, by transmuting from std::raw::TraitObject. This would be extremely unsafe and users almost certainly should not do this. In the status quo, they just can’t.

    Alternatives

    We could instead make it possible for every trait to be object safe, by allowing where Self: Sized bounds on every single item. For example:

    // Object safe because all of these non-object-safe items are constrained
    // by `Self: Sized`.
    trait Foo {
        const BAR: usize where Self: Sized;
        type Baz where Self: Sized;
        fn quux() where Self: Sized;
        fn spam<T: Eggs>(&self) where Self: Sized;
    }

    However, this puts the burden on users to add all of these additional bounds.

    Possibly we should add bounds like this in addition to this RFC, since they are already valid on functions, just not on associated types and consts.

    Unresolved questions

    How does this impact the implementation in rustc?

    Summary

    This is an experimental RFC for adding a new feature to the language, coroutines (also commonly referred to as generators). This RFC is intended to be relatively lightweight and bikeshed free as it will be followed by a separate RFC in the future for stabilization of this language feature. The intention here is to make sure everyone’s on board with the general idea of coroutines/generators being added to the Rust compiler and available for use on the nightly channel.

    Motivation

    One of Rust’s 2017 roadmap goals is “Rust should be well-equipped for writing robust, high-scale servers”. A recent survey has shown that the biggest blocker to robust, high-scale servers is ergonomic usage of async I/O (futures/Tokio/etc), namely the lack of async/await syntax. Syntax like async/await is essentially the de facto standard nowadays when working with async I/O, especially in languages like C#, JS, and Python. Adding such a feature to Rust would be a huge boon to productivity on the server and make significant progress on the 2017 roadmap goal, as one of the largest pain points – creating and returning futures – should be as natural as writing blocking code.

    With our eyes set on async/await, the next question is how we would actually implement this. There are two main sub-questions we have to answer to make progress here:

    • What’s the actual syntax for async/await? Should we be using new keywords in the language or pursuing syntax extensions instead?

    • How do futures created with async/await support suspension? Essentially while you’re waiting for some sub-future to complete, how does the future created by the async/await syntax return back up the stack and support coming back and continuing to execute?

    The focus of this experimental RFC is predominantly on the second question, but before we dive into more motivation there, it may be worth reviewing the expected syntax for async/await.

    Async/await syntax

    Currently it’s intended that no new keywords are added to Rust yet to support async/await. This is done for a number of reasons, but one of the most important is flexibility. It allows us to stabilize features more quickly and experiment more quickly as well.

    Without keywords the intention is that async/await will be implemented with macros, both procedural and macro_rules! style. We should be able to leverage procedural macros to give a near-native experience. Note that procedural macros are only available on the nightly channel today, so this means that “stable async/await” will have to wait for procedural macros (or at least a small slice) to stabilize.

    With that in mind, the expected syntax for async/await is:

    #[async]
    fn print_lines() -> io::Result<()> {
        let addr = "127.0.0.1:8080".parse().unwrap();
        let tcp = await!(TcpStream::connect(&addr))?;
        let io = BufReader::new(tcp);
    
        #[async]
        for line in io.lines() {
            println!("{}", line);
        }
    
        Ok(())
    }

    The notable pieces here are:

    • #[async] is how you tag a function as “this returns a future”. This is implemented with a proc_macro_attribute directive and allows us to change the function to actually returning a future instead of a Result.

    • await! is usable inside of an #[async] function to block on a future. The TcpStream::connect function here can be thought of as returning a future of a connected TCP stream, and await! will block execution of the print_lines function until it becomes available. Note the trailing ? propagates errors as the ? does today.

    • Finally we can implement more goodies like #[async] for loops which operate over the Stream trait in the futures crate. You could also imagine pieces like async! blocks which are akin to catch for ?.

    The intention with this syntax is to be as familiar as possible to existing Rust programmers and disturb control flow as little as possible. To that end all that’s needed is to tag functions that may block (e.g. return a future) with #[async] and then use await! internally whenever blocking is needed.

    Another critical detail here is that the API exposed by async/await is quite minimal! You’ll note that this RFC is an experimental RFC for coroutines and we haven’t mentioned coroutines at all with the syntax! This is an intentional design decision to keep the implementation of #[async] and await! as flexible as possible.

    Suspending in async/await

    With a rough syntax in mind, the next question was: how do we actually suspend these futures? The function above will desugar to:

    fn print_lines() -> impl Future<Item = (), Error = io::Error> {
        // ...
    }

    and this means that we need to create a Future somehow. If written with combinators today we might desugar this to:

    fn print_lines() -> impl Future<Item = (), Error = io::Error> {
        lazy(|| {
            let addr = "127.0.0.1:8080".parse().unwrap();
            TcpStream::connect(&addr).and_then(|tcp| {
                let io = BufReader::new(tcp);
    
                io.lines().for_each(|line| {
                    println!("{}", line);
                    Ok(())
                })
            })
        })
    }

    Unfortunately, translating to combinators is actually quite a difficult transformation to do, and the result is not quite as optimal as we might like! We can see here, though, some important points about the semantics that we expect:

    • When called, print_lines doesn’t actually do anything. It immediately just returns a future, in this case created via lazy.
    • When Future::poll is first called, it’ll create the addr and then call TcpStream::connect. Further calls to Future::poll will then delegate to the future returned by TcpStream::connect.
    • After we’ve connected (the connect future resolves) we continue our execution with further combinators, blocking on each line being read from the socket.

    A major benefit of the desugaring above is that there are no hidden allocations. Combinators like lazy, and_then, and for_each don’t add that sort of overhead. A problem, however, is that there’s a bunch of nested state machines here (each combinator is its own state machine). This means that our in-memory representation can be a bit larger than it needs to be and take some time to traverse. Finally, this is also very difficult for an #[async] implementation to generate! It’s unclear how, with unusual control flow, you’d implement all the paradigms.

    Before we go on to our final solution below, it’s worth pointing out that a popular solution to this problem of generating a future is to sidestep it completely with the concept of green threads. With a green thread you can suspend a thread by simply context switching away, and there’s no need to generate a state machine, as the allocated stack implicitly holds all this state. While this does indeed solve our problem of “how do we translate #[async] functions”, it unfortunately violates Rust’s general theme of “zero cost abstractions” because the allocated stack on the side can be quite costly.

    At this point we’ve got some decent syntax and a rough (albeit difficult) way we want to translate our #[async] functions into futures. We’ve also ruled out traditional solutions like green threads due to their costs, so we just need a way to easily create the optimal state machine for a future that combinators would otherwise emulate.

    State machines as “stackless coroutines”

    Up to this point we haven’t actually mentioned coroutines all that much, which after all is the purpose of this RFC! The intention of the above motivation, however, is to provide a strong case for the question “why coroutines?”. At this point, though, this RFC will mostly do a lot of hand-waving. It should suffice to say, though, that the feature of “stackless coroutines” in the compiler is precisely targeted at generating the state machine we wanted to write by hand above, solving our problem!

    Coroutines are, however, a little lower level than futures themselves. The stackless coroutine feature can be used not only for futures but also other language primitives like iterators. As a result let’s take a look at what a hypothetical translation of our original #[async] function might look like. Keep in mind that this is not a specification of syntax, it’s just a strawman possibility for how we’d write the above.

    fn print_lines() -> impl Future<Item = (), Error = io::Error> {
        CoroutineToFuture(|| {
            let addr = "127.0.0.1:8080".parse().unwrap();
            let tcp = {
                let mut future = TcpStream::connect(&addr);
                loop {
                    match future.poll() {
                        Ok(Async::Ready(e)) => break Ok(e),
                        Ok(Async::NotReady) => yield,
                        Err(e) => break Err(e),
                    }
                }
            }?;
    
            let io = BufReader::new(tcp);
    
            let mut stream = io.lines();
            loop {
                let line = {
                    match stream.poll()? {
                        Async::Ready(Some(e)) => e,
                        Async::Ready(None) => break,
                        Async::NotReady => {
                            yield;
                            continue
                        }
                    }
                };
                println!("{}", line);
            }
    
            Ok(())
        })
    }

    The most prominent addition here is the usage of yield keywords. These are inserted here to inform the compiler that the coroutine should be suspended for later resumption. Here this happens precisely where futures are themselves NotReady. Note, though, that we’re not working directly with futures (we’re working with coroutines!). That leads us to this funky CoroutineToFuture which might look like so:

    struct CoroutineToFuture<T>(T);
    
    impl<T: Coroutine> Future for CoroutineToFuture<T> {
        type Item = T::Item;
        type Error = T::Error;
    
        fn poll(&mut self) -> Poll<T::Item, T::Error> {
            match Coroutine::resume(&mut self.0) {
                CoroutineStatus::Return(Ok(result)) => Ok(Async::Ready(result)),
                CoroutineStatus::Return(Err(e)) => Err(e),
                CoroutineStatus::Yield => Ok(Async::NotReady),
            }
        }
    }

    Note that some details here are elided, but the basic idea is that we can pretty easily translate all coroutines into futures through a small adapter struct.
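
    For concreteness, here is one strawman way to fill in the elided pieces so that the adapter above typechecks. All names and shapes here are hypothetical, per the open questions later in this RFC:

    // The status returned from resuming a coroutine: either suspended
    // at a `yield`, or finished with a final value.
    enum CoroutineStatus<R> {
        Yield,
        Return(R),
    }

    // A coroutine whose final value is a `Result` and which yields no
    // value while suspended (matching the futures use case above).
    trait Coroutine {
        type Item;
        type Error;
        fn resume(&mut self) -> CoroutineStatus<Result<Self::Item, Self::Error>>;
    }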

    As you may be able to tell by this point, we’ve now solved our problem of code generation! This last transformation of #[async] to coroutines is much more straightforward than the translations above, and has in fact already been implemented.

    To reiterate where we are at this point, here’s some of the highlights:

    • One of Rust’s roadmap goals for 2017 is pushing Rust’s usage on the server.
    • A major part of this goal is going to be implementing async/await syntax for Rust with futures.
    • The async/await syntax has a relatively straightforward syntactic definition (borrowed from other languages) with procedural macros.
    • The procedural macro itself can produce optimal futures through the usage of stackless coroutines.

    Put another way: if the compiler implements stackless coroutines as a feature, we have now achieved async/await syntax!

    Features of stackless coroutines

    At this point we’ll start to tone down the emphasis on servers and async I/O when talking about stackless coroutines. It’s important to keep them in mind, though, as they motivate coroutines and guide the design constraints of coroutines in the compiler.

    At a high-level, though, stackless coroutines in the compiler would be implemented as:

    • No implicit memory allocation
    • Coroutines are translated to state machines internally by the compiler
    • The standard library has the traits/types necessary to support the coroutines language feature.

    Beyond this, though, there aren’t many other constraints at this time. Note that a critical feature of async/await is that the syntax of stackless coroutines isn’t all that important. In other words, the implementation detail of coroutines isn’t actually exposed through the #[async] and await! definitions above. They purely operate with Future and simply work internally with coroutines. This means that if we can all broadly agree on async/await there’s no need to bikeshed and delay coroutines. Any implementation of coroutines should be easily adaptable to async/await syntax.

    Detailed design

    Alright hopefully now we’re all pumped to get coroutines into the compiler so we can start playing around with async/await on the nightly channel. This RFC, however, is explicitly an experimental RFC and is not intended to be a reference for stability. It is not intended that stackless coroutines will ever become a stable feature of Rust without a further RFC. As coroutines are such a large feature, however, testing the feature and gathering usage data needs to happen on the nightly channel, meaning we need to land something in the compiler!

    This RFC is different from the previous RFC 1823 and RFC 1832 in that this detailed design section will be mostly devoid of implementation details for generators. This is done intentionally to avoid bikeshedding about various bits of syntax related to coroutines. While critical to stabilization of coroutines, these details are, as explained earlier, irrelevant to the “apparent stability” of async/await and can be determined at a later date once we have more experience with coroutines.

    In other words, the intention of this RFC is to emphasize the point that we will focus on adding async/await through procedural macros and coroutines. The driving factor for stabilization is the real-world and high-impact use case of async/await, and zero-cost futures will be an overall theme of the continued work here.

    It’s worth briefly mentioning, however, some high-level design goals of the concept of stackless coroutines:

    • Coroutines should be compatible with libcore. That is, they should not require any runtime support along the lines of allocations, intrinsics, etc.
    • As a result, coroutines will roughly compile down to a state machine that’s advanced forward as it’s resumed. Whenever a coroutine yields, it’ll leave itself in a state that can later be resumed from the yield statement (see the sketch after this list).
    • Coroutines should work similarly to closures in that they allow for capturing variables and don’t impose dynamic dispatch costs. Each coroutine will be compiled separately (monomorphized) in the way that closures are today.
    • Coroutines should also support some method of communicating arguments into and out of themselves. For example, when yielding, a coroutine should be able to yield a value. Additionally, when resuming, a coroutine may wish to require that a value is passed in on resumption.
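
    To make the “state machine” bullet concrete, here is a hand-written sketch of what a trivial coroutine (yield once, then return 42) might compile down to, reusing the strawman CoroutineStatus shape sketched earlier; all of this is hypothetical, not a committed design:

    enum YieldOnce {
        Start,
        Suspended,
        Done,
    }

    impl YieldOnce {
        fn resume(&mut self) -> CoroutineStatus<Result<i32, ()>> {
            match *self {
                YieldOnce::Start => {
                    // suspend at the `yield` point
                    *self = YieldOnce::Suspended;
                    CoroutineStatus::Yield
                }
                YieldOnce::Suspended => {
                    // resume after the `yield` and run to completion
                    *self = YieldOnce::Done;
                    CoroutineStatus::Return(Ok(42))
                }
                YieldOnce::Done => panic!("coroutine resumed after completion"),
            }
        }
    }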

    As a reference point, @Zoxc has implemented generators in a fork of rustc, which has been a critical stepping stone in experimenting with the #[async] macro in the motivation section. This implementation may end up being the original implementation of coroutines in the compiler, but if so it may still change over time.

    One important note is that we haven’t had many experimental RFCs yet, so this process is still relatively new to us! We hope that this RFC is lighter weight and can go through the RFC process much more quickly as the ramifications of it landing are much more minimal than a new stable language feature being added.

    Despite this, however, there is also a desire to think early on about corner cases that language features run into, and to plan for a sort of reference test suite to exist ahead of time. Along those lines this RFC proposes a list of tests to accompany any initial implementation of coroutines in the compiler. Finally, this RFC also proposes a list of unanswered questions related to coroutines which will likely need to be considered before stabilization.

    Open Questions - coroutines

    • What is the precise syntax for coroutines?
    • How are coroutines syntactically and functionally constructed?
    • What do the traits related to coroutines look like?
    • Is “coroutine” the best name?
    • Are coroutines sufficient for implementing iterators?
    • How do various traits like “the coroutine trait”, the Future trait, and Iterator all interact? Does coherence require “wrapper struct” instances to exist?

    Open Questions - async/await

    • Is using a syntax extension too much considered to be creating a “sub-language”? Does async/await usage feel natural in Rust?
    • What precisely do you write in a signature of an async function? Do you mention the future aspect?
    • Can Stream implementations be created with similar syntax? Is async/await with coroutines too specific to futures?

    Tests - Basic usage

    • Coroutines which don’t yield at all and immediately return results
    • Coroutines that yield once and then return a result
    • Creating a coroutine which closes over a value, and then returning it
    • Returning a captured value after one yield
    • Destruction of a coroutine drops closed-over variables
    • Create a coroutine, don’t run it, and drop it
    • Coroutines are Send and Sync like closures are wrt captured variables
    • Create a coroutine on one thread, run it on another

    Tests - Basic compile failures

    • Coroutines cannot close over data that is destroyed before the coroutine is itself destroyed.
    • Coroutines closing over non-Send data are not Send

    Tests - Interesting control flow

    • Yield inside of a for loop a set number of times
    • Yield on one branch of an if but not the other (take both branches here)
    • Yield on one branch of an if inside of a for loop
    • Yield inside of the condition expression of an if

    Tests - Panic safety

    • Panicking in a coroutine doesn’t kill everything
    • Resuming a panicked coroutine is memory safe
    • Panicking drops local variables correctly

    Tests - Debuginfo

    • Inspecting variables before/after yield points works
    • Breaking before/after yield points works

    Suggestions for more tests are always welcome!

    How We Teach This

    Coroutines are not, and will not become, a stable language feature as a result of this RFC. They are primarily designed to be used through async/await notation and are otherwise transparent. As a result there are no specific plans at this time for teaching coroutines in Rust. Such plans must be formulated, however, prior to stabilization.

    Nightly-only documentation will be available as part of the unstable book about basic usage of coroutines and their abilities, but it likely won’t be exhaustive or the best learning resource for coroutines yet.

    Drawbacks

    Coroutines are themselves a significant feature for the compiler. This in turn brings with it maintenance burden if the feature doesn’t pan out and can otherwise be difficult to design around. It is thought, though, that coroutines are highly likely to pan out successfully with futures and async/await notation and are likely to be coalesced around as a stable compiler feature.

    Alternatives

    The alternatives to list here, as this is an experimental RFC, are targeted more at the motivation than at the feature itself. Along those lines, you could imagine quite a few alternatives to tackling the 2017 roadmap goal targeted in this RFC. There’s quite a bit of discussion on the original RFC thread, but some notable alternatives are:

    • “Stackful coroutines” aka green threads. This strategy has, however, been thoroughly explored in historical versions of Rust. Rust long ago had green threads and libgreen, and consensus was later reached that it should be removed. There are many tradeoffs with an approach like this, but it’s safe to say that we’ve definitely gained a lot of experimental and anecdotal evidence historically!

    • User-mode-scheduling is another possibility along the line of green threads. Unfortunately this isn’t implemented in all mainstream operating systems (Linux/Mac/Windows) and as a result isn’t a viable alternative at this time.

    • “Resumable expressions” is a proposal in C++ which attempts to deal with some of the “viral” concerns of async/await, but it’s unclear how applicable it is, or how easily it would apply, to Rust.

    Overall while there are a number of alternatives, the most plausible ones have a large amount of experimental and anecdotal evidence already (green threads/stackful coroutines). We do not yet have much experience with the next-most-viable alternative, stackless coroutines. As a result it’s believed that it’s time to explore and experiment with an alternative to M:N threading with stackless coroutines, and continue to push on the 2017 roadmap goal.

    Some more background about this motivation for exploring async/await vs alternatives can also be found in a comment on the RFC thread.

    Unresolved questions

    The precise semantics, timing, and procedure of an experimental RFC are still somewhat up in the air. It may be unclear what questions need to be decided on as part of an experimental RFC vs a “real RFC”. We’re hoping, though, that we can smooth out this process as we go along!

    Summary

    Add an intrinsic (fn align_offset(ptr: *const (), align: usize) -> usize) which returns the number of bytes that need to be skipped in order to correctly align the pointer ptr to align.

    The intrinsic is reexported as a method on *const T and *mut T.

    Also add an unsafe fn align_to<U>(&self) -> (&[T], &[U], &[T]) method to [T]. The method simplifies the common use case, returning the unaligned prefix, the aligned center part and the unaligned trailing elements. The function is unsafe because it produces a &U to the memory location of a T, which might expose padding bytes or violate invariants of T or U.

    Motivation

    The standard library (and most likely many crates) use code like

    let is_aligned = (ptr as usize) & ((1 << (align - 1)) - 1) == 0;
    let is_2_word_aligned = ((ptr as usize + index) & (usize_bytes - 1)) == 0;
    let is_t_aligned = ((ptr as usize) % std::mem::align_of::<T>()) == 0;

    to check whether a pointer is aligned in order to perform optimizations like reading multiple bytes at once. Not only is this code easy to get wrong and hard to read (thus increasing the chance of future breakage), but it also makes it impossible for miri to evaluate such statements. This means that miri cannot do utf8-checking, since that code contains such optimizations. Without utf8-checking, rustc’s future const evaluation would not be able to convert a [u8] into a str.

    Detailed design

    supporting intrinsic

    Add a new intrinsic

    fn align_offset(ptr: *const (), align: usize) -> usize;

    which takes an arbitrary pointer it never reads from and a desired alignment, and returns the number of bytes by which the pointer needs to be offset in order to make it aligned to the desired alignment. It is perfectly valid for an implementation to always yield usize::max_value() to signal that the pointer cannot be aligned. Since the caller needs to check whether the returned offset would be in-bounds of the allocation that the pointer points into, an offset of usize::max_value() will never be in-bounds and therefore the caller cannot act upon the returned offset.

    It might be expected that the maximum offset returned is align - 1, but as the motivation of the RFC states, miri cannot guarantee that a pointer can be aligned irrespective of the operations done on it.

    Most implementations will expand this intrinsic to

    fn align_offset(ptr: *const (), align: usize) -> usize {
        let offset = ptr as usize % align;
        if offset == 0 {
            0
        } else {
            align - offset
        }
    }

    The align parameter must be a power of two and smaller than 2^32. Usually one should pass in the result of an align_of call.
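
    For illustration, a typical use checks the returned offset against the length of the data before acting on it (a minimal sketch using the proposed method):

    use std::mem;

    let bytes: &[u8] = b"some example input";
    let ptr = bytes.as_ptr();
    // number of bytes to skip until `ptr` is aligned for `usize`
    let offset = ptr.align_offset(mem::align_of::<usize>());
    // `usize::max_value()` always fails this in-bounds check
    if bytes.len() >= mem::size_of::<usize>()
        && offset <= bytes.len() - mem::size_of::<usize>()
    {
        // an aligned `usize` read at `ptr.offset(offset as isize)` is in-bounds
    }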

    standard library functions

    Add a new method align_offset to *const T and *mut T, which forwards to the align_offset intrinsic.

    Add two new methods align_to and align_to_mut to the slice type.

    impl<T> [T] {
        /* ... other methods ... */
        unsafe fn align_to<U>(&self) -> (&[T], &[U], &[T]) { /**/ }
        unsafe fn align_to_mut<U>(&mut self) -> (&mut [T], &mut [U], &mut [T]) { /**/ }
    }

    align_to can be implemented as

    unsafe fn align_to<U>(&self) -> (&[T], &[U], &[T]) {
        use core::mem::{size_of, align_of};
        assert!(size_of::<U>() != 0 && size_of::<T>() != 0, "don't use `align_to` with zsts");
        if size_of::<U>() % size_of::<T>() == 0 {
            let align = align_of::<U>();
            let size = size_of::<U>();
            let source_size = size_of::<T>();
            // number of bytes that need to be skipped until the pointer is aligned
            let offset = self.as_ptr().align_offset(align);
            // if `align_of::<U>() <= align_of::<T>()`, or if the pointer is accidentally aligned, then `offset == 0`
            //
            // due to `size_of::<U>() % size_of::<T>() == 0`,
            // the fact that `size_of::<T>() >= align_of::<T>()`,
            // and the fact that `align_of::<U>() > align_of::<T>()` if `offset != 0`, we know
            // that `offset % source_size == 0`
            let head_count = offset / source_size;
            // cap the prefix at the slice length, in case the slice is too
            // short to ever reach an aligned address
            let split_position = core::cmp::min(self.len(), head_count);
            let (head, tail) = self.split_at(split_position);
            // might be zero if not enough elements
            let mid_count = tail.len() * source_size / size;
            let mid = core::slice::from_raw_parts::<U>(tail.as_ptr() as *const _, mid_count);
            // skip the `T`s covered by `mid` (each `U` covers `size / source_size` `T`s)
            let tail = &tail[mid_count * size / source_size..];
            (head, mid, tail)
        } else {
            // can't properly fit a U into a sequence of `T`
            // FIXME: use GCD(size_of::<U>(), size_of::<T>()) as minimum `mid` size
            (self, &[], &[])
        }
    }

    on all current platforms. align_to_mut is expanded accordingly.

    Users of the functions must process all the returned slices and cannot rely on any behaviour except that the &[U]’s elements are correctly aligned and that all bytes of the original slice are present in the resulting three slices.
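
    For example, a checksum over a byte slice must fold in the head and tail slices as well as the aligned middle. A sketch in current Rust, using u64::to_ne_bytes to split each aligned word back into bytes:

    // Sum all bytes, reading the aligned middle a word at a time.
    fn sum_bytes(bytes: &[u8]) -> u64 {
        // Safety: any byte pattern is a valid `u64`, and we process every
        // returned slice, so no bytes are skipped or double-counted.
        let (head, mid, tail) = unsafe { bytes.align_to::<u64>() };
        let mut total: u64 = head.iter().map(|&b| u64::from(b)).sum();
        for &word in mid {
            // fold in the bytes packed into each aligned word
            total += word.to_ne_bytes().iter().map(|&b| u64::from(b)).sum::<u64>();
        }
        total + tail.iter().map(|&b| u64::from(b)).sum::<u64>()
    }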

    How We Teach This

    By example

    On most platforms alignment is a well known concept independent of Rust. Currently unsafe Rust code doing alignment checks needs to reproduce the known patterns from C, which are hard to read and prone to errors when modified later.

    Thus, whenever pointers need to be manually aligned, the developer is given a choice:

    1. In the case where processing the initial unaligned bits might abort the entire process, use align_offset
    2. If it is likely that all bytes are going to get processed, use align_to
      • align_to has a slight overhead for creating the slices in case not all slices are used

    Example 1 (pointers)

    The standard library uses an alignment optimization for quickly skipping over ASCII characters during utf8-checking of a byte slice. The current code looks as follows:

    // Ascii case, try to skip forward quickly.
    // When the pointer is aligned, read 2 words of data per iteration
    // until we find a word containing a non-ascii byte.
    let ptr = v.as_ptr();
    let align = (ptr as usize + index) & (usize_bytes - 1);
    

    With the align_offset method the code can be changed to

    let ptr = v.as_ptr();
    let align = unsafe {
        // the offset is safe, because `index` is guaranteed inbounds
        ptr.offset(index).align_offset(usize_bytes)
    };

    Example 2 (slices)

    The memchr impl in the standard library explicitly uses the three phases of the align_to functions:

    // Split `text` in three parts
    // - unaligned initial part, before the first word aligned address in text
    // - body, scan by 2 words at a time
    // - the last remaining part, < 2 word size
    let len = text.len();
    let ptr = text.as_ptr();
    let usize_bytes = mem::size_of::<usize>();
    
    // search up to an aligned boundary
    let align = (ptr as usize) & (usize_bytes - 1);
    let mut offset;
    if align > 0 {
        offset = cmp::min(usize_bytes - align, len);
        if let Some(index) = text[..offset].iter().position(|elt| *elt == x) {
            return Some(index);
        }
    } else {
        offset = 0;
    }
    
    // search the body of the text
    let repeated_x = repeat_byte(x);
    
    if len >= 2 * usize_bytes {
        while offset <= len - 2 * usize_bytes {
            unsafe {
                let u = *(ptr.offset(offset as isize) as *const usize);
                let v = *(ptr.offset((offset + usize_bytes) as isize) as *const usize);
    
                // break if there is a matching byte
                let zu = contains_zero_byte(u ^ repeated_x);
                let zv = contains_zero_byte(v ^ repeated_x);
                if zu || zv {
                    break;
                }
            }
            offset += usize_bytes * 2;
        }
    }
    
    // find the byte after the point the body loop stopped
    text[offset..].iter().position(|elt| *elt == x).map(|i| offset + i)

    With the align_to function this could be written as

    // Split `text` in three parts
    // - unaligned initial part, before the first word aligned address in text
    // - body, scan by 2 words at a time
    // - the last remaining part, < 2 word size
    let len = text.len();
    let ptr = text.as_ptr();
    
    let (head, mid, tail) = unsafe { text.align_to::<(usize, usize)>() };
    
    // search up to an aligned boundary
    if let Some(index) = head.iter().position(|elt| *elt == x) {
        return Some(index);
    }
    
    // search the body of the text
    let repeated_x = repeat_byte(x);
    
    let position = mid.iter().position(|two| {
        // break if there is a matching byte
        let zu = contains_zero_byte(two.0 ^ repeated_x);
        let zv = contains_zero_byte(two.1 ^ repeated_x);
        zu || zv
    });
    
    if let Some(index) = position {
        let offset = index * mem::size_of::<(usize, usize)>() + head.len();
        return text[offset..].iter().position(|elt| *elt == x).map(|i| offset + i)
    }
    
    // find the byte in the trailing unaligned part
    tail.iter().position(|elt| *elt == x).map(|i| head.len() + mid.len() + i)

    Documentation

    A lint could be added to clippy which detects hand-written alignment checks and suggests using the align_to function instead.

    The std::mem::align_of function’s documentation should point to [T]::align_to in order to increase the visibility of the function. The documentation of std::mem::align_of should note that it is unidiomatic to manually align pointers, since that might not be supported on all platforms and is prone to implementation errors.

    Drawbacks

    None known to the author.

    Alternatives

    Duplicate functions without optimizations for miri

    Miri could intercept calls to functions known to do alignment checks on pointers and roll its own implementation for them. This doesn’t scale well and is prone to errors due to code duplication.

    Unresolved questions

    • produce a lint in case size_of::<U>() % size_of::<T>() != 0 and the expansion is not part of a monomorphisation, since in that case align_to is statically known to never be effective

    Summary

    Introduce a move to dual-MIT/Apache2 licensing terms to the Rust RFCs repo, by requiring them for all new contributions, and asking previous contributors to agree on the new license.

    Disclaimer

    This RFC is not authored by a lawyer, so its reasoning may be wrong.

    Motivation

    Currently, the Rust RFCs repo is in a state where no clear open source license is specified.

    The current legal base of the RFCs repo is the “License Grant to Other Users” from the Github ToS*:

    Any Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and "fork" your repositories (this means that others may make their own copies of your Content in repositories they control).
    
    If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to access your Content through the GitHub Service, and to use, display and perform your Content, and to reproduce your Content solely on GitHub as permitted through GitHub's functionality.
    

    These terms may be sufficient for display of the rfcs repository on Github, but they limit contributions and use, and even pose a risk.

    The Github ToS grant only applies towards reproductions through the Github Service. Hypothetically, if the Github Service ceases at some point in the future, without a legal successor offering a replacement service, the RFCs may not be redistributed any more.

    Second, there are companies which have set up policies that limit their employees to contribute to the RFCs repo in this current state.

    Third, there is the possibility that Rust may undergo standardisation and produce a normative document describing the language. Possibly, the authors of such a document may want to include text from RFCs.

    Fourth, the spirit of the Rust project is to be open source, and the current terms don’t fulfill any popular open source definition.

    *: The Github ToS is licensed under the Creative Commons Attribution license

    Detailed design

    After this RFC has been merged, all new RFCs will be required to be dual-licensed under the MIT/Apache2. This includes RFCs currently being considered for merging.

    README.md should include a note that all contributions to the repo should be licensed under the new terms.

    As the licensing requires consent from the RFC creators, an issue will be created on rust-lang/rfcs with a list of past contributors to the repo, asking every contributor to agree to their contributions to be licensed under those terms.

    Regarding non-RFC files in this repo, the intention is to get them licensed as well, not just the RFCs themselves. Therefore, contributors should be asked to license all their contributions to this repo, not just to the RFC files, and all new contributions to this repo should be required to be licensed under the new terms.

    How We Teach This

    The issue created should @-mention all Github users who have contributed, generating a notification for each past contributor.

    Also, after this RFC has been merged, all RFCs in the queue will get a comment in their Github PR and be asked to include the copyright section at the top of their RFC file.

    The note in README.md should inform new PR authors of the terms they put their contribution under.

    Drawbacks

    This is additional churn and pings a bunch of people, which they may not like.

    Alternatives

    Other licenses more suited for text may have been chosen, like the CC-BY license. However, RFCs regularly include code snippets which may be used in rust-lang/rust, and similarly, RFCs may want to include code snippets from rust-lang/rust. It might be the case that the CC-BY license allows such sharing, but it might also mean complications.

    Also, the swift-evolution repository is put under the Apache license as well.

    Maybe for something like this, no RFC is needed. However, there exists precedent on non-technical RFCs with RFC 1636. Also, this issue has been known for years and no action has been taken on it yet. If this RFC gets closed as too trivial or off-topic, and the issue is being acted upon, its author considers it a successful endeavor.

    Links to previous discussion

    • https://github.com/rust-lang/rfcs/issues/1259
    • https://github.com/rust-lang/rust/issues/25664
    • https://internals.rust-lang.org/t/license-the-rfcs-repo-under-the-cc-by-4-0-license/3870

    Unresolved questions

    Should trivial contributions that don’t fall under copyright be special cased? This is probably best decided on a case by case basis, and only after a contributor has been unresponsive or has disagreed with the new licensing terms.

    Motivation and Summary

    While architectures like x86_64 or ARMv8 define the lowest-common denominator of instructions that all CPUs must support, many CPUs extend these with vector (AVX), bitwise manipulation (BMI) and/or cryptographic (AES) instruction sets. By default, the Rust compiler produces portable binaries that are able to run on all CPUs of a particular architecture. Users who know on which CPUs their binaries are going to run are able to allow the compiler to use these extra instructions by using the compiler flags --target-feature and --target-cpu. Running these binaries on mismatching CPUs is undefined behavior. Currently, these users have no way in stable Rust to:

    • determine which features are available at compile-time, and
    • determine which features are available at run-time, and
    • embed code for different sets of features into the same binary,

    such that programs can use different algorithms depending on the features available, allowing portable Rust binaries to run efficiently on many CPU families of a particular architecture.

    The objective of this RFC is to extend the Rust language to solve these three problems, and it does so by adding the following three language features:

    • compile-time feature detection: using configuration macros cfg!(target_feature = "avx2") to detect whether a feature is enabled or disabled in a context (#![cfg(target_feature = "avx2")], …) – see the sketch after this list,
    • run-time feature detection: using the cfg_feature_enabled!("avx2") API to detect whether the current host supports the feature, and
    • unconditional code generation: using the function attribute #[target_feature(enable = "avx2")] to allow the compiler to generate code under the assumption that this code will only be reached in hosts that support the feature.
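
    As a small taste of the first of these, compile-time detection can select between implementations with cfg attributes (a minimal sketch; the function bodies are placeholders):

    // Compiled only when `avx2` is enabled for the compilation context:
    #[cfg(target_feature = "avx2")]
    fn sum(xs: &[f32]) -> f32 {
        // ... an avx2-friendly implementation ...
        xs.iter().sum()
    }

    // Fallback used when `avx2` is not enabled:
    #[cfg(not(target_feature = "avx2"))]
    fn sum(xs: &[f32]) -> f32 {
        xs.iter().sum()
    }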

    Detailed design

    Target features

    Each rustc target has a default set of target features that can be controlled via the backend compilation options. The target features for each target should be documented by the compiler and the backends (e.g. LLVM).

    This RFC does not add any target features to the language but it specifies the process for adding target features. Each target feature must:

    • Be proposed in its own mini-RFC, RFC, or rustc-issue and follow a FCP period,
    • Be behind its own feature gate macro of the form target_feature_feature_name (where feature_name should be replaced by the name of the feature).
    • When possible, be detectable at run-time via the cfg_feature_enabled!("name") API.
    • Include whether some backend-specific compilation options should enable the feature.

    To use unstable target features on nightly, crates must opt into them as usual by writing, for example, #![feature(target_feature_avx2)]. Since this is currently not required, a grace period of one full release cycle will be given in which this will raise a soft error before turning this requirement into a hard error.

    Backend compilation options

    There are currently two ways of passing target feature information to rustc’s code generation backend on stable Rust.

    • -C --target-feature=+/-backend_target_feature_name: where +/- add/remove features from the default feature set of the platform for the whole crate.

    • -C --target-cpu=backend_cpu_name, which changes the default feature set of the crate to be that of all features enabled for backend_cpu_name.

    These two options are available on stable Rust and have been de facto stabilized. Their semantics are LLVM-specific and depend on what LLVM actually does with the features.

    This RFC proposes to keep these options “as is”, and add one new compiler option, --enable-features="feature0,feature1,...", (the analogous --disable-features is discussed in the “Future Extensions” section) that supports only stabilized target features.

    This allows us to preserve backwards compatibility while choosing different feature names and semantics than the ones provided by the LLVM backend.

    The effect of --enable-features=feature-list is to enable all features implicitly for all functions of a crate. That is, anywhere within the crate the values of the macro cfg!(target_feature = "feature") and cfg_feature_enabled!("feature") are true.

    Whether the backend compilation options -C --target-feature/--target-cpu also enable some stabilized features or not should be resolved by the RFCs suggesting the stabilization of particular target features.

    Unconditional code generation: #[target_feature]

    (note: the function attribute #[target_feature] is similar to clang’s and gcc’s __attribute__ ((__target__ ("feature"))).)

    This RFC introduces a function attribute that only applies to unsafe functions: #[target_feature(enable = "feature_list")] (the analogous #[target_feature(disable = "feature_list")] is discussed in the “Future Extensions” section):

    • This attribute extends the feature set of a function beyond its default feature set, which allows the compiler to generate code under the assumption that the function’s code will only be reached on hardware that supports its feature set.
    • Calling a function on a target that does not support its features is undefined behavior (see the “On the unsafety of #[target_feature]” section).
• The compiler will not inline functions into contexts that do not support all of the function's features.
• In #[target_feature(enable = "feature")] functions, the value of cfg!(target_feature = "feature") and cfg_feature_enabled!("feature") is always true (otherwise undefined behavior has already occurred).

    Note 0: the current RFC does not introduce any ABI issues in stable Rust. ABI issues with some unstable language features are explored in the “Unresolved Questions” section.

    Note 1: a function has the features of the crate where the function is defined +/- #[target_feature] annotations. Iff the function is inlined into a context that extends its feature set, then the compiler is allowed to generate code for the function using this extended feature set (sub-note: inlining is forbidden in the opposite case).

    Example 0 (basics):

    This example covers how to use #[target_feature] with run-time feature detection to dispatch to different function implementations depending on the features supported by the CPU at run-time:

    // This function will be optimized for different targets
    #[inline(always)] fn foo_impl() { ... }
    
    // This generates a stub for CPUs that support SSE4:
    #[target_feature(enable = "sse4")] unsafe fn foo_sse4() {
        // Inlining `foo_impl` here is fine because `foo_sse4`
        // extends `foo_impl` feature set
        foo_impl()
    }
    
    // This generates a stub for CPUs that support AVX:
    #[target_feature(enable = "avx")] unsafe fn foo_avx() { foo_impl() }
    
    // This function returns the best implementation of `foo` depending
    // on which target features the host CPU does support at run-time:
    fn initialize_global_foo_ptr() -> fn () -> () {
        if cfg_feature_enabled!("avx") {
          unsafe { foo_avx }
        } else if cfg_feature_enabled!("sse4") {
          unsafe { foo_sse4 }
        } else {
          foo_impl // use the default version
        }
    }
    
    // During binary initialization we can set a global function pointer
    // to the best implementation of foo depending on the features that
    // the CPU where the binary is running does support:
    lazy_static! {
        static ref GLOBAL_FOO_PTR: fn() -> () = {
initialize_global_foo_ptr()
        };
    }
    // ^^ note: the ABI of this function pointer is independent of the target features
    
    
    fn main() {
      // Finally, we can use the function pointer to dispatch to the best implementation:
(*GLOBAL_FOO_PTR)();
    }

    Example 1 (inlining):

    #[target_feature(enable = "avx")] unsafe fn foo();
    #[target_feature(enable = "avx")] #[inline] unsafe fn baz(); // OK
    #[target_feature(enable = "avx")] #[inline(always)] unsafe fn bar(); // OK
    
    #[target_feature(enable = "sse3")]
    unsafe fn moo() {
      // This function supports SSE3 but not AVX
      if cfg_feature_enabled!("avx") {
          foo(); // OK: foo is not inlined into moo
          baz(); // OK: baz is not inlined into moo
          bar();
          // ^ ERROR: bar cannot be inlined across mismatching features
// did you mean to make bar #[inline] instead of #[inline(always)]?
          // Note: the logic to detect this is the same as for the call
          // to baz, but in this case rustc must emit an error because an
          // #[inline(always)] function cannot be inlined in this call site.
      }
    }

    Conditional compilation: cfg!(target_feature)

    The cfg!(target_feature = "feature_name") macro allows querying at compile-time whether a target feature is enabled in the current context. It returns true if the feature is enabled, and false otherwise.

In a function annotated with #[target_feature(enable = "feature_name")] the macro cfg!(target_feature = "feature_name") expands to true if the generated code for the function uses the feature (note: the current implementation has a bug here).

Note: how accurate cfg!(target_feature) can be made is an “Unresolved Question” (see the section below). Ideally, when cfg!(target_feature) is used in a function that does not support the feature, it should still return true in the cases where the function gets inlined into a context that does support the feature. This can happen often if the function is generic, or an #[inline] function defined in a different crate. This can result in errors at monomorphization time only if #![cfg(target_feature)] is used, but not if cfg!(target_feature) is used, since in that case all branches need to type-check properly.

    Example 3 (conditional compilation):

    fn bzhi_u32(x: u32, bit_position: u32) -> u32 {
        // Conditional compilation: both branches must be syntactically valid,
        // but it suffices that the true branch type-checks:
        #[cfg(target_feature = "bmi2")] {
            // if this code is being compiled with BMI2 support, use a BMI2 instruction:
            unsafe { intrinsic::bmi2::bzhi(x, bit_position) }
        }
        #[cfg(not(target_feature = "bmi2"))] {
            // otherwise, call a portable emulation of the BMI2 instruction
            portable_emulation::bzhi(x, bit_position)
        }
    }
    
    fn bzhi_u64(x: u64, bit_position: u64) -> u64 {
        // Here both branches must type-check and whether the false branch is removed
        // or not is left up to the optimizer.
        if cfg!(target_feature = "bmi2") {  // `cfg!` expands to `true` or `false` at compile-time
            // if target has the BMI2 instruction set, use a BMI2 instruction:
            unsafe { intrinsic::bmi2::bzhi(x, bit_position) }
            // ^^^ NOTE: this function cannot be inlined unless `bzhi_u64` supports
            // the required features
        } else {
            // otherwise call an algorithm that emulates the instruction:
            portable_emulation::bzhi(x, bit_position)
        }
    }

    Example 4 (value of cfg! within #[target_feature]):

#[target_feature(enable = "avx")]
    unsafe fn foo() {
      if cfg!(target_feature = "avx") { /* this branch is always taken */ }
      else { /* this branch is never taken */ }
      #[cfg(not(target_feature = "avx"))] {
        // this is dead code
      }
    }

    Run-time feature detection

    Writing safe wrappers around unsafe functions annotated with #[target_feature] requires run-time feature detection. This RFC adds the following macro to the standard library:

    • cfg_feature_enabled!("feature") -> bool-expr

with the following semantics: “if the host hardware on which the current code is running supports the "feature", the bool-expr that cfg_feature_enabled! expands to has value true; otherwise it has value false.”

    If the result is known at compile-time, the macro approach allows expanding the result without performing any run-time detection at all. This RFC does not guarantee that this is the case, but the current implementation does this.
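For instance, a safe wrapper over the unsafe functions of Example 0 might look like this (a minimal sketch reusing foo_avx and foo_impl from that example):

// Safe wrapper: `foo_avx` is only reached after run-time detection
// confirms that the host supports AVX, so its contract is upheld:
fn foo_safe() {
    if cfg_feature_enabled!("avx") {
        unsafe { foo_avx() }
    } else {
        foo_impl()
    }
}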

Examples of using run-time feature detection have been shown throughout this RFC; there isn’t really much more to it.

If the API of run-time feature detection turns out to be controversial before stabilization, a follow-up RFC focusing on run-time feature detection will need to be merged, blocking the stabilization of this RFC.

    How We Teach This

    There are two parts to this story, the low-level part, and the high-level part.

    Example 5 (high-level usage of target features):

    note: ifunc is not part of this RFC, but just an example of what can be built on top of it.

    In the high-level part we have the ifunc function attribute, implemented as a procedural macro (some of these macros already exist):

    #[ifunc("default", "sse4", "avx", "avx2")]  //< MAGIC
    fn foo() {}
    
    fn main() {
      foo(); // dispatches to the best implementation at run-time
      #[cfg(target_feature = "sse4")] {
        foo(); // dispatches to the sse4 implementation at compile-time
      }
    }

    The following example covers what ifunc might expand to.

    Example 6 (ifunc expansion):

    // Copy-pastes "foo" and generates code for multiple target features:
    unsafe fn foo_default() { ...foo tokens... }
    #[target_feature(enable = "sse4")] unsafe fn foo_sse4() { ...foo tokens... }
    #[target_feature(enable = "avx")]  unsafe fn foo_avx() { ...foo tokens... }
    #[target_feature(enable = "avx2")] unsafe fn foo_avx2() { ...foo tokens... }
    
    // Initializes `foo` on binary initialization
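// (note: a plain `static` cannot run code at initialization in
// today's Rust; this expansion is a sketch of the intended effect,
// e.g. via lazy initialization)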
    static foo_ptr: fn() -> () = initialize_foo();
    
    fn initialize_foo() -> typeof(foo) {
        // run-time feature detection:
        if cfg_feature_enabled!("avx2")  { return unsafe { foo_avx2 } }
        if cfg_feature_enabled!("avx")  { return unsafe { foo_avx } }
        if cfg_feature_enabled!("sse4")  { return unsafe { foo_sse4 } }
        foo_default
    }
    
    // Wrap foo to do compile-time dispatch
    #[inline(always)] fn foo() {
      #[cfg(target_feature = "avx2")]
      { unsafe { foo_avx2() } }
#[cfg(all(target_feature = "avx", not(target_feature = "avx2")))]
{ unsafe { foo_avx() } }
#[cfg(all(target_feature = "sse4", not(target_feature = "avx")))]
{ unsafe { foo_sse4() } }
      #[cfg(not(target_feature = "sse4"))]
      { foo_ptr() }
    }

    Note that there are many solutions to this problem and they have different trade-offs, but these can be explored in procedural macros. When wrapping unsafe intrinsics, conditional compilation can be used to create zero-cost wrappers:

    Example 7 (three-layered approach to target features):

    // Raw unsafe intrinsic: in LLVM, std::intrinsic, etc.
    // Calling this on an unsupported target is undefined behavior.
    extern "C" { fn raw_intrinsic_function(f64, f64) -> f64; }
    
    // Software emulation of the intrinsic,
    // works on all architectures.
fn software_emulation_of_raw_intrinsic_function(a: f64, b: f64) -> f64 { /* ... */ }
    
    // Safe zero-cost wrapper over the intrinsic
    // (i.e. can be inlined)
    fn my_intrinsic(a: f64, b: f64) -> f64 {
      #[cfg(target_feature = "some_feature")] {
        // If "some_feature" is enabled, it is safe to call the
        // raw intrinsic function
        unsafe { raw_intrinsic_function(a, b) }
      }
      #[cfg(not(target_feature = "some_feature"))] {
         // if "some_feature" is disabled calling
         // the raw intrinsic function is undefined behavior (per LLVM),
         // we call the safe software emulation of the intrinsic:
         software_emulation_of_raw_intrinsic_function(a, b)
      }
    }
    
    #[ifunc("default", "avx")]
    fn my_intrinsic_rt(a: f64, b: f64) -> f64 { my_intrinsic(a, b) }

Due to the low-level and high-level nature of these features we will need two kinds of documentation. For the low-level part:

    • document how to do compile-time and run-time feature detection using cfg!(target_feature) and cfg_feature_enabled!,
    • document how to use #[target_feature],
    • document how to use all of these together to solve problems like in the examples of this RFC.

    For the high-level part we should aim to bring third-party crates implementing ifunc! or similar close to 1.0 releases before stabilization.

    Drawbacks

    • Obvious increase in language complexity.

    The main drawback of not solving this issue is that many libraries that require conditional feature-dependent compilation or run-time selection of code for different features (SIMD, BMI, AES, …) cannot be written efficiently in stable Rust.

    Alternatives

    Backend options

    An alternative would be to mix stable, unstable, unknown, and backend-specific features into --target-feature.

    Make #[target_feature] safe

Calling a function annotated with #[target_feature] on a host that does not support the feature invokes undefined behavior in LLVM, the assembler, and possibly the hardware (see this comment).

    That is, calling a function on a target that does not support its feature set is undefined behavior and this RFC cannot specify otherwise. The main reason is that target_feature is a promise from the user to the toolchain and the hardware, that the code will not be reached in a CPU that does not support the feature. LLVM, the assembler, and the hardware all assume that the user will not violate this contract, and there is little that the Rust compiler can do to make this safer:

    • The Rust compiler cannot emit a compile-time diagnostic because it cannot know whether the user is going to run the binary in a CPU that supports the features or not.
• A run-time diagnostic always incurs a run-time cost, and is only possible if the absence of a feature can be detected at run-time (the “Future Extensions” section of this RFC discusses how to implement “Run-time diagnostics” to detect this, when possible).

However, the --target-feature/--target-cpu compiler options allow one to implicitly generate binaries that reliably run into undefined behavior without needing any unsafe annotations at all, so the answer to the question “Should #[target_feature] be safe/unsafe?” is indeed a hard one.

    The main differences between #[target_feature] and --target-feature/--enable-feature are the following:

    • --target-feature/--enable-feature are “backend options” while #[target_feature] is part of the language
    • --target-feature/--enable-feature is specified by whoever compiles the code, while #[target_feature] is specified by whoever writes the code
• compiling safe Rust code for a particular target, and then running the binary on that target, could produce undefined behavior only if #[target_feature] were safe.

    This RFC chooses that the #[target_feature] attribute only applies to unsafe fns, so that if one compiles safe Rust source code for a particular target, and then runs the binary on that particular target, no unsafety can result.

Note that making #[target_feature] safe in the future would be backwards compatible, while the opposite is not true. That is, if somebody figures out a way of making #[target_feature] safe such that the above holds, we can always make that change.

    Guarantee no segfaults from unsafe code

    Calling a #[target_feature]-annotated function on a platform that does not support it invokes undefined behavior. We could guarantee that this does not happen by always doing run-time feature detection, introducing a run-time cost in the process, and by only accepting features for which run-time feature detection can be done.

    This RFC considers that any run-time cost is unacceptable as a default for a combination of language features whose main domain of use is a performance sensitive one.

The “Future Extensions” section discusses how to implement this in an opt-in way, e.g., as a sort of binary instrumentation.

    Make #[target_feature] + #[inline(always)] incompatible

This RFC requires the compiler to error when a function marked with both #[target_feature] and the #[inline(always)] attribute cannot be inlined at a particular call site due to incompatible features. So we might consider simplifying this RFC by just making these attributes incompatible.

    While this is technically correct, the compiler must detect when any function (#[inline(always)], #[inline], generics, …) is inlined into an incompatible context, and prevent this from happening. Erroring if the function is #[inline(always)] does not significantly simplify the RFC nor the compiler implementation.

    Removing run-time feature detection from this RFC

    This RFC adds an API for run-time feature detection to the standard library.

The alternative would be to implement similar functionality as a third-party crate that might eventually be moved into the nursery. Such crates already exist.

    In particular, the API proposed in this RFC is “stringly-typed” (to make it uniform with the other features being proposed), but arguably a third party crate might want to use an enum to allow pattern-matching on features. These APIs have not been sufficiently explored in the ecosystem yet.
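For illustration, such an enum-based API might look like this (a hypothetical sketch; the type and function names are invented, and the body reuses this RFC's cfg_feature_enabled! for detection):

// Hypothetical enum-based alternative to the stringly-typed API,
// allowing exhaustive matching on features:
#[derive(Clone, Copy, PartialEq, Eq)]
pub enum TargetFeature {
    Sse4,
    Avx,
    Avx2,
}

pub fn feature_enabled(feature: TargetFeature) -> bool {
    match feature {
        TargetFeature::Sse4 => cfg_feature_enabled!("sse4"),
        TargetFeature::Avx => cfg_feature_enabled!("avx"),
        TargetFeature::Avx2 => cfg_feature_enabled!("avx2"),
    }
}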

    The main arguments in favor of including run-time feature detection in this RFC are:

    • it is impossible to write safe wrappers around #[target_feature] without it
    • implementing it requires the asm! macro or linking to a C library (or linking to a C wrapper around assembly),
    • run-time detection should be kept in sync with the addition of new target features,
    • the compiler might want to use LLVM’s run-time feature detection which is part of compiler-rt.

The consensus on the internals forum and in previous discussions seems to be that this is worth it.

It might turn out that a better API emerges in the future. In that case we can always deprecate the current API and include the new one in the standard library.

    Adding full cpuid support to the standard library

The cfg_feature_enabled! macro is designed to work specifically with the features that can be used via cfg_target_feature and #[target_feature]. However, in the grand scheme of things, run-time detection of these features is only a small part of the information provided by cpuid-like CPU instructions.

Currently at least two great implementations of cpuid-like functionality exist in Rust for x86: cupid and rust-cpuid. Adding the macro to the standard library does not prevent us from adding more comprehensive functionality in the future, and it does not prevent us from reusing any of these libraries in the internal implementation of the macro.

    Unresolved questions

How accurate should cfg!(target_feature) be?

    What happens if the macro cfg!(target_feature = "feature_name") is used inside a function for which feature_name is not enabled, but that function gets inlined into a context in which the feature is enabled? We want the macro to accurately return true in this case, that is, to be as accurate as possible so that users always get the most efficient algorithms, but whether this is even possible is an unresolved question.

This might result in monomorphization errors if #![cfg(target_feature)] is used, but not if cfg!(target_feature) is used, since in that case all branches need to type-check properly.

    We might want to amend this RFC with more concrete semantics about this as we improve the compiler.

    How do we handle ABI issues with portable vector types?

    The ABI of #[target_feature] functions does not change for all types currently available in stable Rust. However, there are types that we might want to add to the language at some point, like portable vector types, for which this is not the case.

    The behavior of #[target_feature] for those types should be specified in the RFC that proposes to stabilize those types, and this RFC should be amended as necessary.

    The following examples showcase some potential problems when calling functions with mismatching ABIs, or when using function pointers.

    Whether we can warn, or hard error at compile-time in these cases remains to be explored.

    Example 8 (ABI):

    #[target_feature(enable = "sse2")]
    unsafe fn foo_sse2(a: f32x8) -> f32x8 { a } // ABI: 2x 128bit registers
    
    #[target_feature(enable = "avx2")]
    unsafe fn foo_avx2(a: f32x8) -> f32x8 { // ABI: 1x 256bit register
      foo_sse2(a) // ABI mismatch:
      //^ should this perform an implicit conversion, produce a hard error, or just undefined behavior?
    }
    
    #[target_feature(enable = "sse2")]
    unsafe fn bar() {
      type fn_ptr = fn(f32x8) -> f32x8;
      let mut p0: fn_ptr = foo_sse2; // OK
      let p1: fn_ptr = foo_avx2; // ERROR: mismatching ABI
      let p2 = foo_avx2; // OK
      p0 = p2; // ERROR: mismatching ABI
    }

    Future Extensions

    Mutually exclusive features

In some cases, e.g., when enabling AVX but disabling SSE4, the compiler should probably produce an error, but for other features like thumb_mode the behavior is less clear. These issues should be addressed by the RFCs proposing the stabilization of the target features that need them, as future extensions to this RFC.

    Safely inlining #[target_feature] functions on more contexts

    The problem is the following:

    #[target_feature(enable = "sse3")]
    unsafe fn baz() {
        if some_opaque_code() {
            unsafe { foo_avx2(); }
        }
    }

    If foo_avx2 gets inlined into baz, optimizations that reorder its instructions across the if condition might introduce undefined behavior.

Maybe one could make cfg_feature_enabled! a bit magical, so that when it is used in the typical ways the compiler can infer whether inlining is safe, e.g.,

    #[target_feature(enable = "sse3")]
    unsafe fn baz() {
      // -- sse3 boundary start (applies to fn arguments as well)
      // -- sse3 boundary ends
      if cfg_feature_enabled!("avx") {
        // -- avx boundary starts
        unsafe { foo_avx(); }
        //    can be inlined here, but its code cannot be
        //    reordered out of the avx boundary
        // -- avx boundary ends
      }
      // -- sse3 boundary starts
      // -- sse3 boundary ends (applies to drop as well)
    }

    Whether this is worth it or can be done at all is an unresolved question. This RFC does not propose any of this, but leaves the door open for such an extension to be explored and proposed independently in a follow-up RFC.

    Run-time diagnostics

    Calling a #[target_feature]-annotated function on a platform that does not support it invokes undefined behavior. A friendly compiler could use run-time feature detection to check whether calling the function is safe and emit a nice panic! message.

    This can be done, for example, by desugaring this:

    #[target_feature(enable = "avx")] unsafe fn foo();

    into this:

    #[target_feature(enable = "avx")] unsafe fn foo_impl() { ...foo tokens... };
    
    // this function will be called if avx is not available:
    fn foo_fallback() {
        panic!("calling foo() requires a target with avx support")
    }
    
    // run-time feature detection on initialization
    static foo_ptr: fn() -> () = if cfg_feature_enabled!("avx") {
        unsafe { foo_impl }
    } else {
        foo_fallback
    };
    
    // dispatches foo via function pointer to produce nice diagnostic
    unsafe fn foo() { foo_ptr() }

This is not required for safety and can be implemented in the compiler as an opt-in instrumentation pass without going through the RFC process. However, a proposal to enable this by default should go through the RFC process.

    Disabling features

This RFC does not allow disabling target features, but suggests an analogous syntax for doing so (#[target_feature(disable = "feature-list")], --disable-feature=feature-list). Disabling features can result in some nonsensical situations and should be pursued as a future extension of this RFC once we want to stabilize a target feature for which it makes sense.

    Acknowledgements

    @parched @burntsushi @alexcrichton @est31 @pedrocr @chandlerc @RalfJung @matthieu-m

    • #[target_feature] Pull-Request: https://github.com/rust-lang/rust/pull/38079
    • cfg_target_feature tracking issue: https://github.com/rust-lang/rust/issues/29717

    Summary

    Allow a break of labelled blocks with no loop, which can carry a value.

    Motivation

    In its simplest form, this allows you to terminate a block early, the same way that return allows you to terminate a function early.

    'block: {
        do_thing();
        if condition_not_met() {
            break 'block;
        }
        do_next_thing();
        if condition_not_met() {
            break 'block;
        }
        do_last_thing();
    }

    In the same manner as return and the labelled loop breaks in RFC 1624, this break can carry a value:

    let result = 'block: {
        if foo() { break 'block 1; }
        if bar() { break 'block 2; }
        3
    };

RFC 1624 opted not to allow values to be returned from for or while loops, since no good option could be found for the syntax and it was hard to do in a natural way. This proposal gives us a natural way to handle such loops with no changes to their syntax:

    let result = 'block: {
        for &v in container.iter() {
            if v > 0 { break 'block v; }
        }
        0
    };

    This extension handles searches more complex than loops in the same way:

    let result = 'block: {
        for &v in first_container.iter() {
            if v > 0 { break 'block v; }
        }
        for &v in second_container.iter() {
            if v < 0 { break 'block v; }
        }
        0
    };

    Implementing this without a labelled break is much less clear:

    let mut result = None;
    for &v in first_container.iter() {
        if v > 0 {
            result = Some(v);
            break;
        }
    }
    if result.is_none() {
        for &v in second_container.iter() {
            if v < 0 {
                result = Some(v);
                break;
            }
        }
    }
    let result = result.unwrap_or(0);

    Detailed design

    'BLOCK_LABEL: { EXPR }

    would simply be syntactic sugar for

    'BLOCK_LABEL: loop { break { EXPR } }

    except that unlabelled breaks or continues which would bind to the implicit loop are forbidden inside the EXPR.

    This is perhaps not a conceptually simpler thing, but it has the advantage that all of the wrinkles are already well understood as a result of the work that went into RFC 1624. If EXPR contains explicit break statements as well as the implicit one, the compiler must be able to infer a single concrete type from the expressions in all of these break statements, including the whole of EXPR; this concrete type will be the type of the expression that the labelled block represents.
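For instance, under this desugaring (a sketch; early_exit is an arbitrary illustrative predicate):

// This labelled block:
let x = 'a: {
    if early_exit() { break 'a 1; }
    2
};

// ...is equivalent to:
let x = 'a: loop {
    break {
        if early_exit() { break 'a 1; }
        2
    }
};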

    Because the target of the break is ambiguous, code like the following will produce an error at compile time:

    loop {
        'labelled_block: {
            if condition() {
                break;
            }
        }
    }

    If the intended target of the break is the surrounding loop, it may not be clear to the user how to express that. Where there is a surrounding loop, the error message should explicitly suggest labelling the loop so that the break can target it.

    'loop_label: loop {
        'labelled_block: {
            if condition() {
                break 'loop_label;
            }
        }
    }

    How We Teach This

    This can be taught alongside loop-based examples of labelled breaks.

    Drawbacks

    The proposal adds new syntax to blocks, requiring updates to parsers and possibly syntax highlighters.

    Alternatives

Everything that can be done with this feature can be done without it. However, in my own code, I often find myself breaking something out into a function simply in order to return early, and the accompanying verbosity of passing parameters and return values with full type signatures is a real cost.

    Another alternative would be to revisit one of the proposals to add syntax to for and while.

    We have three options for handling an unlabelled break or continue inside a labelled block:

    • compile error on both break and continue
    • bind break to the labelled block, compile error on continue
    • bind break and continue through the labelled block to a containing loop/while/for

This RFC chooses the first option since it’s the most conservative, in that it would be possible to switch to a different behavior later without breaking working programs. The second is the simplest, but makes a large difference between labelled and unlabelled blocks, and means that a program might label a block without ever explicitly referring to that label, just for this change in behavior. The third is consistent with unlabelled blocks and with Java, but seems like a rich potential source of confusion.

    Unresolved questions

    None outstanding that I know about.

    Summary

Rust’s ecosystem, tooling, documentation, and compiler are constantly improving. To make it easier to follow development, and to provide a clear, coherent “rallying point” for this work, this RFC proposes that we declare an edition every two or three years. Editions are designated by the year in which they occur, and represent a release in which several elements come together:

    • A significant, coherent set of new features and APIs have been stabilized since the previous edition.
    • Error messages and other important aspects of the user experience around these features are fully polished.
    • Tooling (IDEs, rustfmt, Clippy, etc) has been updated to work properly with these new features.
    • There is a guide to the new features, explaining why they’re important and how they should influence the way you write Rust code.
    • The book has been updated to cover the new features.
      • Note that this is already required prior to stabilization, but in general these additions are put in an appendix; updating the book itself requires significant work, because new features can change the book in deep and cross-cutting ways. We don’t block stabilization on that.
    • The standard library and other core ecosystem crates have been updated to use the new features as appropriate.
    • A new edition of the Rust Cookbook has been prepared, providing an updated set of guidance for which crates to use for various tasks.

    Sometimes a feature we want to make available in a new edition would require backwards-incompatible changes, like introducing a new keyword. In that case, the feature is only available by explicitly opting in to the new edition. Existing code continues to compile, and crates can freely mix dependencies using different editions.

    Motivation

    The status quo

    Today, Rust evolution happens steadily through a combination of several mechanisms:

    • The nightly/stable release channel split. Features that are still under development are usable only on the nightly channel, preventing de facto lock-in and thus leaving us free to iterate in ways that involve code breakage before “stabilizing” the feature.

    • The rapid (six week) release process. Frequent releases on the stable channel allow features to stabilize as they become ready, rather than as part of a massive push toward an infrequent “feature-based” release. Consequently, Rust evolves in steady, small increments.

    • Deprecation. Compiler support for deprecating language features and library APIs makes it possible to nudge people toward newer idioms without breaking existing code.

    All told, the tools work together quite nicely to allow Rust to change and grow over time, while keeping old code working (with only occasional, very minor adjustments to account for things like changes to type inference.)

    What’s missing

    So, what’s the problem?

    There are a few desires that the current process doesn’t have a good story for:

    • Lack of clear “chapters” in the evolutionary story. A downside to rapid releases is that, while the constant small changes eventually add up to large shifts in idioms, there’s not an agreed upon line of demarcation between these major shifts. Nor is there a clear point at which tooling, books, and other artifacts are all fully updated and in sync around a given set of features. This is not a huge problem for those following Rust development carefully (e.g., readers of this RFC!), but many users and potential users don’t. Providing greater clarity and coherence around the “chapters” of Rust evolution will make it easier to provide an overall narrative arc, and to refer easily to large sets of changes.

    • Lack of community rallying points. The six week release process tends to make each individual release a somewhat ho hum affair. On the one hand, that’s the whole point–we want to avoid marathon marches toward huge, feature-based releases, and instead ship things in increments as they become ready. But in doing so, we lose an opportunity to, every so often, come together as an entire community and produce a “major release” that is polished, coherent, and meaningful in a way that each six week increment is not. The roadmap process does provide some of this flavor, but it’s hard to beat the power of working together toward a point-in-time release. The challenge is doing so without losing the benefits of our incremental working style.

    • Changes that may require some breakage in corner cases. The simplest example is adding new keywords: the current implementation of catch uses the syntax do catch because catch is not a keyword, and cannot be added even as a contextual keyword without potential breakage. There are plenty of examples of “superficial” breakage like this that do not fit well into the current evolution mechanisms.

    At the same time, the commitment to stability and rapid releases has been an incredible boon for Rust, and we don’t want to give up those existing mechanisms or their benefits.

    This RFC proposes editions as a mechanism we can layer on top of our existing release process, keeping its guarantees while addressing its gaps.

    Detailed design

    The basic idea

To make it easier to follow Rust’s evolution, and to provide a clear, coherent “rallying point” for the community, the project declares an edition every two or three years. Editions are designated by the year in which they occur, and represent a release in which several elements come together:

    • A significant, coherent set of new features and APIs have been stabilized since the previous edition.
    • Error messages and other important aspects of the user experience around these features are fully polished.
    • Tooling (IDEs, rustfmt, Clippy, etc) has been updated to work properly with these new features.
    • There is a guide to the new features, explaining why they’re important and how they should influence the way you write Rust code.
    • The book has been updated to cover the new features.
      • Note that this is already required prior to stabilization, but in general these additions are put in an appendix; updating the book itself requires significant work, because new features can change the book in deep and cross-cutting ways. We don’t block stabilization on that.
    • The standard library and other core ecosystem crates have been updated to use the new features as appropriate.
    • A new edition of the Rust Cookbook has been prepared, providing an updated set of guidance for which crates to use for various tasks.

    The precise list of elements going into an edition is expected to evolve over time, as the Rust project and ecosystem grow.

    Sometimes a feature we want to make available in a new edition would require backwards-incompatible changes, like introducing a new keyword. In that case, the feature is only available by explicitly opting in to the new edition. Each crate can declare an edition in its Cargo.toml like edition = "2019"; otherwise it is assumed to have edition 2015, coinciding with Rust 1.0. Thus, new editions are opt in, and the dependencies of a crate may use older or newer editions than the crate itself.

    To be crystal clear: Rust compilers must support all extant editions, and a crate dependency graph may involve several different editions simultaneously. Thus, editions do not split the ecosystem nor do they break existing code.

    Furthermore:

    • As with today, each new version of the compiler may gain stabilizations and deprecations.
• When opting in to a new edition, existing deprecations may turn into hard errors, and the compiler may take advantage of that fact to repurpose existing usage, e.g. by introducing a new keyword. This is the only kind of breaking change an edition opt-in can make.

    Thus, code that compiles without warnings on the previous edition (under the latest compiler release) will compile without errors on the next edition (modulo the usual caveats about type inference changes and so on).

    Alternatively, you can continue working with the previous edition on new compiler releases indefinitely, but your code may not have access to new features that require new keywords and the like. New features that are backwards compatible, however, will be available on older editions.

    Edition timing, stabilizations, and the roadmap process

    As mentioned above, we want to retain our rapid release model, in which new features and other improvements are shipped on the stable release channel as soon as they are ready. So, to be clear, we do not hold features back until the next edition.

    Rather, editions, as their name suggests, represent a point of global coherence, where documentation, tooling, the compiler, and core libraries are all fully aligned on a new set of (already stabilized!) features and other changes. This alignment can happen incrementally, but an edition signals that it has happened.

    At the same time, editions serve as a rallying point for making sure this alignment work gets done in a timely fashion–and helping set scope as needed. To make this work, we use the roadmap process:

• As today, each year has a roadmap setting out that year’s vision. Some years—like 2017—the roadmap is mostly about laying down major new groundwork. Some years, however, the roadmap explicitly proposes to produce a new edition during the year.

    • Edition years are focused primarily on stabilization, polish, and coherence, rather than brand new ideas. We are trying to put together and ship a coherent product, complete with documentation and a well-aligned ecosystem. These goals will provide a rallying point for the whole community, to put our best foot forward as we publish a significant new version of the project.

    In short, editions are striking a delicate balance: they’re not a cutoff for stabilization, which continues every six weeks, but they still provide a strong impetus for coming together as a community and putting together a polished product.

    The preview period

    There’s an important tension around stabilization and editions:

    • We want to enable new features, including those that require an edition opt-in, to be available on the stable channel as they become ready.

      • That means that we must enable some form of the opt in before the edition is fully ready to ship.
    • We want to retain our promise that code compiling on stable will continue to do so with new versions of the compiler, with minimum hassle.

      • That means that, once any form of the opt in is shipped, it cannot introduce new hard errors.

    Thus, at some point within an edition year, we will enable the opt-in on the stable release channel, which must include all of the hard errors that will be introduced in the next edition, but not yet all of the stabilizations (or other artifacts that go into the full edition release). This is the preview period for the edition, which ends when a release is produced that synchronizes all of the elements that go into an edition and the edition is formally announced.

    A broad policy on edition changes

    There are numerous reasons to limit the scope of changes for new editions, among them:

    • Limiting churn. Even if you aren’t forced to update your code, even if there are automated tools to do so, churn is still a pain for existing users. It also invalidates, or at least makes harder to use, existing content on the internet, like StackOverflow answers and blog posts. And finally, it plays against the important and hard work we’ve done to make Rust stable in both reality and perception. In short, while editions avoid ecosystem splits and make churn opt-in, they do not eliminate all drawbacks.

    • Limiting technical debt. The compiler retains compatibility for old editions, and thus must have distinct “modes” for dealing with them. We need to strongly limit the amount and complexity of code needed for these modes, or the compiler will become very difficult to maintain.

    • Limiting deep conceptual changes. Just as we want to keep the compiler maintainable, so too do we want to keep the conceptual model sustainable. That is, if we make truly radical changes in a new edition, it will be very difficult for people to reason about code involving different editions, or to remember the precise differences.

    These lead to some hard and soft constraints.

    Hard constraints

    TL;DR: Warning-free code on edition N must compile on edition N+1 and have the same behavior.

    There are only two things a new edition can do that a normal release cannot:

    • Change an existing deprecation into a hard error.
      • This option is only available when the deprecation is expected to hit a relatively small percentage of code.
    • Change an existing deprecation to deny by default, and leverage the corresponding lint setting to produce error messages as if the feature were removed entirely.

    The second option is to be preferred whenever possible. Note that warning-free code in one edition might produce warnings in the next edition, but it should still compile successfully.

The Rust compiler supports multiple editions, but must only support a single version of “core Rust”. We identify “core Rust” as being, roughly, MIR and the core trait system; this specification will be made more precise over time. The implication is that the “edition modes” boil down to keeping around multiple desugarings into this core Rust, which greatly limits the complexity and technical debt involved. Similarly, core Rust encompasses the core conceptual model of the language, and this constraint guarantees that, even when working with multiple editions, those core concepts remain fixed.

    Soft constraints

    TL;DR: Most code with warnings on edition N should, after running rustfix, compile on edition N+1 and have the same behavior.

    The core edition design avoids an ecosystem split, which is very important. But it’s also important that upgrading your own code to a new edition is minimally disruptive. The basic principle is that changes that cannot be automated must be required only in a small minority of crates, and even there not require extensive work. This principle applies not just to editions, but also to cases where we’d like to make a widespread deprecation.

    Note that a rustfix tool will never be perfect, because of conditional compilation and code generation. So it’s important that, in the cases it inevitably fails, the manual fixes are not too onerous.

    In addition, migrations that affect a large percentage of code must be “small tweaks” (e.g. clarifying syntax), and as above, must keep the old form intact (though they can enact a deny-by-default lint on it).

    These are “soft constraints” because they use terms like “small minority” and “small tweaks”, which are open for interpretation. More broadly, the more disruption involved, the higher the bar for the change.

    Positive examples: What edition opt-ins can do

    Given those principles, let’s look in more detail at a few examples of the kinds of changes edition opt-ins enable. These are just examples—this RFC doesn’t entail any commitment to these language changes.

    Example: new keywords

    We’ve taken as a running example introducing new keywords, which sometimes cannot be done backwards compatibly (because a contextual keyword isn’t possible). Let’s see how this works out for the case of catch, assuming that we’re currently in edition 2015.

    • First, we deprecate uses of catch as identifiers, preparing it to become a new keyword.
    • We may, as today, implement the new catch feature using a temporary syntax for nightly (like do catch).
    • When the edition opt-in for 2019 is released, opting into it makes catch into a keyword, regardless of whether the catch feature has been implemented. This means that opting in may require some adjustment to your code.
    • The catch syntax can be hooked into an implementation usable on nightly within the 2019 edition.
    • When we’re confident in the catch feature on nightly, we can stabilize it onto the stable channel for users opting into 2019. It cannot be stabilized onto the 2015 edition, since it requires a new keyword.
    • catch is now a part of Rust, but may not be fully integrated into e.g. the book, IDEs, etc.
    • At some point, edition 2019 is fully shipped, and catch is now fully incorporated into tooling, documentation, and core libraries.

To make this even more concrete, let’s imagine the following timeline:

Rust version | Latest available edition | Status of catch in 2015      | Status of catch in latest edition
1.15         | 2015                     | Valid identifier             | Valid identifier
1.21         | 2015                     | Valid identifier; deprecated | Valid identifier; deprecated
1.23         | 2019 (preview period)    | Valid identifier; deprecated | Keyword, unimplemented
1.25         | 2019 (preview period)    | Valid identifier; deprecated | Keyword, implemented
1.27         | 2019 (final)             | Valid identifier; deprecated | Keyword, implemented

    Now, suppose you have the following code:

    Cargo.toml:
    
    edition = "2015"
    
    // main.rs:
    
    fn main() {
        let catch = "gotcha";
        println!("{}", catch);
    }
    • This code will compile as-is on all Rust versions. On versions 1.21 and above, it will yield a warning, saying that catch is deprecated as an identifier.

    • On version 1.23, if you change Cargo.toml to use 2019, the code will fail to compile due to catch being a keyword.

    • However, if you leave it at 2015, you can upgrade to Rust 1.27 and use libraries that opt in to the 2019 edition with no problem.

    Example: repurposing corner cases

    A similar story plays out for more complex modifications that repurpose existing usages. For example, some suggested module system improvements deduce the module hierarchy from the filesystem. But there is a corner case today of providing both a lib.rs and a bin.rs directly at the top level, which doesn’t play well with the new feature.

    Using editions, we can deprecate such usage (in favor of the bin directory), then make it an error during the preview period. The module system change could then be made available (and ultimately stabilized) within the preview period, before fully shipping on the next edition.

    Example: repurposing syntax

    A more radical example: changing the syntax for trait objects and impl Trait. In particular, we have sometimes discussed:

    • Using dyn Trait for trait objects (e.g. Box<dyn Iterator<Item = u32>>)
• Repurposing “bare Trait” syntax to use instead of impl Trait, so you can write fn foo() -> Iterator<Item = u32> instead of fn foo() -> impl Iterator<Item = u32>

    Suppose we wanted to carry out such a change. We could do it over multiple steps:

    • First, introduce and stabilize dyn Trait.
    • Deprecate bare Trait syntax in favor of dyn Trait.
    • In an edition preview period, make it an error to use bare Trait syntax.
    • Ship the new edition, and wait until bare Trait syntax is obscure.
    • Re-introduce bare Trait syntax, stabilize it, and deprecate impl Trait in favor of it.

    Of course, this RFC isn’t suggesting that such a course of action is a good one, just that it is possible to do without breakage. The policy around such changes is left as an open question.

    Example: type inference changes

    There are a number of details about type inference that seem suboptimal:

    • Currently multi-parameter traits like AsRef<T> will infer the value of one parameter on the basis of the other. We would at least like an opt-out, but employing it for AsRef is backwards-incompatible.
    • Coercions don’t always trigger when we wish they would, but altering the rules may cause other programs to stop compiling.
    • In trait selection, where-clauses take precedence over impls; changing this is backwards-incompatible.

    We may or may not be able to change these details on the existing edition. With enough effort, we could probably deprecate cases where type inference rules might change and request explicit type annotations, and then—in the new edition—tweak those rules.

    Negative examples: What edition opt-ins can’t do

    There are also changes that editions don’t help with, due to the constraints we impose. These limitations are extremely important for keeping the compiler maintainable, the language understandable, and the ecosystem compatible.

    Example: changes to coherence rules

Trait coherence rules, like the “orphan” rule, provide a kind of protocol about which crates can provide which impls. It’s not possible to change this protocol incompatibly, because existing code will assume the current protocol and provide impls accordingly, and there’s no way to work around that fact via deprecation.

    More generally, this means that editions can only be used to make changes to the language that are applicable crate-locally; they cannot impose new requirements or semantics on external crates, since we want to retain compatibility with the existing ecosystem.

    Example: Error trait downcasting

    See rust-lang/rust#35943. Due to a silly oversight, you can’t currently downcast the “cause” of an error to introspect what it is. We can’t make the trait have stricter requirements; it would break existing impls. And there’s no way to do so only in a newer edition, because we must be compatible with the older one, meaning that we cannot rely on downcasting.
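Concretely, the limitation looks like this (a sketch against the Error API as it existed at the time; inspect is an illustrative function):

use std::error::Error;

fn inspect(e: &Error) {
    if let Some(cause) = e.cause() {
        // `cause` is a bare `&Error` with no `'static` bound, so
        // `cause.downcast_ref::<SomeConcreteError>()` is unavailable,
        // and adding that bound now would break existing impls:
        let _ = cause;
    }
}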

    This is essentially another example of a non-crate-local change.

    More generally, breaking changes to the standard library are not possible.

    The full mechanics

    We’ll wrap up with the full details of the mechanisms at play.

    • rustc will take a new flag, --edition, which can specify the edition to use. This flag will default to edition 2015.
      • This flag should not affect the behavior of the core trait system or passes at the MIR level.
    • Cargo.toml can include an edition value, which is used to pass to rustc.
      • If left off, it will assume edition 2015.
    • cargo new will produce a Cargo.toml with the latest edition value (including an edition currently in its preview period).

    How We Teach This

    First and foremost, if we accept this RFC, we should publicize the plan widely, including on the main Rust blog, in a style similar to previous posts about our release policy. This will require extremely careful messaging, to make clear that editions are not about breaking Rust code, but instead primarily about putting together a globally coherent, polished product on a regular basis, while providing some opt-in ways to allow for evolution not possible today.

    In addition, the book should talk about the basics from a user perspective, including:

    • The fact that, if you do nothing, your code should continue to compile (with minimum hassle) when upgrading the compiler.
    • If you resolve deprecations as they occur, moving to a new edition should also require minimum hassle.
    • Best practices about upgrading editions (TBD).

    Drawbacks

    There are several drawbacks to this proposal:

    • Most importantly, it risks muddying our story about stability, which we’ve worked very hard to message clearly.

      • To mitigate this, we need to put front and center that, if you do nothing, updating to a new rustc should not be a hassle, and staying on an old edition doesn’t cut you off from the ecosystem.
    • It adds a degree of complication to an evolution story that is already somewhat complex (with release channels and rapid releases).

      • On the other hand, edition releases provide greater clarity about major steps in Rust evolution, for those who are not following development closely.
• New editions can invalidate existing blog posts and documentation, a problem we suffered a lot around the 1.0 release.

      • However, this situation already obtains in the sense of changing idioms; a blog post using try! these days already feels like it’s using “old Rust”. Notably, though, the code still compiles on current Rust.

• A saving grace is that, with editions, it’s more likely that a post will mention what edition is being used, for context. Moreover, with sufficient work on error messages, it seems plausible to detect that code was intended for an earlier edition and explain the situation.

    These downsides are most problematic in cases that involve “breakage” if they were done without opt in. They indicate that, even if we do adopt editions, we should use them judiciously.

    Alternatives

    Within the basic edition structure

    There was a significant amount of discussion on the RFC thread about using “2.0” rather than “2019”. It’s difficult to concisely summarize this discussion, but in a nutshell, some feel that 2.0 (with a guarantee of backwards compatibility) is more honest and easier to understand, while others worry that it will be misconstrued no matter how much we caveat it, and that we cannot risk Rust being perceived as unstable or risky.

    • The “edition” terminology and current framing arose from this discussion, as a way of clarifying what we intend – i.e., that the concept is primarily about putting together a coherent package – and as a heads up that the model is different from that of other languages.

    Sticking with the basic idea of editions, there are a couple alternative setups that avoid “preview” editions:

    • Rather than locking in a set of deprecations up front, we could provide “stable channel feature gates”, allowing users to opt in to features of the next edition in a fine-grained way, which may introduce new errors. When the new edition is released, one would then upgrade to it and remove all of the gates.

      • The main downside is lack of clarity about what the current “stable Rust” is; each combination of gates gives you a slightly different language. While this fine-grained variation is acceptable for nightly, since it’s meant for experimentation, it cuts against some of the overall goals of this proposal to introduce such fragmentation on the stable channel. There’s risk that people would use a mixture of gates in perpetuity, essentially picking their preferred dialect of the language.

      • It’s feasible to introduce such a fine-grained scheme later on, if it proves necessary. Given the risks involved, it seems best to start with a coarse-grained flag at the outset.

    • We could stabilize features using undesirable syntax at first, making way for better syntax only when the new edition is released, then deprecate the “bad” syntax in favor of the “good” syntax.

      • For catch, this would look like:
        • Stabilize do catch.
        • Deprecate catch as an identifier.
        • Ship new edition, which makes catch a keyword.
        • Stabilize catch as a syntax for the catch feature, and deprecate do catch in favor of it.
      • This approach involves significantly more churn than the one proposed in the RFC.
    • Finally, we could just wait to stabilize features like catch until the moment the edition is released.

• This approach seems likely to introduce all the downsides of “feature-based” releases, making the edition release extremely high stakes, and preventing usage of “ready to go” features on the stable channel until the edition is shipped.

    Alternatives to editions

    The larger alternatives include, of course, not trying to solve the problems laid out in the motivation, and instead finding creative alternatives.

    • For cases like catch that require a new keyword, it’s not clear how to do this without ending up with suboptimal syntax.

    The other main alternative is to issue major releases in the semver sense: Rust 2.0. This strategy could potentially be coupled with a rustfix, depending on what kinds of changes we want to allow. Downsides:

    • Lack of clarity around ecosystem compatibility. If we allow both 1.0 and 2.0 crates to interoperate, we arrive at something like this RFC. If we don’t, we risk splitting the ecosystem, which is extremely dangerous.

    • Likely significant blowback based on abandoning stability as a core principle of Rust. Even if we provide a perfect rustfix, the message is significantly muddied.

    • Much greater temptation to make sweeping changes, and continuous litigation over what those changes should be.

    Unresolved questions

    • What impact is there, if any, on breakage permitted today for bug fixing or soundness holes? In many cases these are more disruptive than introducing a new keyword.

    • Is “edition” the right key in Cargo.toml? Would it be more clear to just say rust = "2019"?

    • Will we ever consider dropping support for very old editions? Given the constraints in this RFC, it seems unlikely to ever be worth it.

    • Should rustc default to the latest edition instead?

    • How do we handle macros, particularly procedural macros, that may mix source from multiple editions?

    Summary

    Allow constraints to appear in where clauses which are trivially known to either always hold or never hold. This would mean that impl Foo for Bar where i32: Iterator would become valid, and the impl would never be satisfied.

    Motivation

    It may seem strange to ever want to include a constraint that is always known to hold or not hold. However, as with many of these cases, allowing this would be useful for macros. For example, a custom derive may want to add additional functionality if two derives are used together. As another more concrete example, Diesel allows the use of normal Rust operators to generate the equivalent SQL. Due to coherence rules, we can’t actually provide a blanket impl, but we’d like to automatically implement std::ops::Add for columns when they are of a type for which + is a valid operator. The generated impl would look like:

    impl<T> std::ops::Add<T> for my_column
    where
        my_column::SqlType: diesel::types::ops::Add,
        T: AsExpression<<my_column::SqlType as diesel::types::ops::Add>::Rhs>,
    {
        // ...
    }

    One would never write this impl normally since we always know the type of my_column::SqlType. However, when you consider the use case of a macro, we can’t always easily know whether that constraint would hold or not at the time when we’re generating code.

    Detailed design

    Concretely, implementing this means removing E0193. Interestingly, as of Rust 1.7, that error never actually appears. Instead, the current behavior is that something like impl Foo for Bar where i32: Copy (i.e. anywhere the constraint always holds) compiles fine, while impl Foo for Bar where i32: Iterator fails to compile with a complaint that i32 does not implement Iterator. The original error message explicitly forbidding this case does not seem to ever appear.

    The obvious complication that comes to mind when implementing this feature is that it would allow nonsensical projections to appear in the where clause as well. For example, when i32: IntoIterator appears in a where clause, we would also need to allow i32::Item: SomeTrait to appear in the same clause, and even allow for _ in 1 { } to appear in item bodies, and have it all successfully compile.

    Since code that was caught by this error is usually nonsense outside of macros, it would be valuable for the error to continue to live on as a lint. The lint trivial_constraints would be added, matching the pre-1.7 semantics of E0193, and would be set to warn by default.
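
    To make this concrete, here is a sketch of code that would compile under this RFC (using the lint name proposed above; Foo and Bar are placeholder items):

    trait Foo {}
    struct Bar;

    // Accepted under this RFC: the bound never holds, so the impl simply
    // can never be used. The warn-by-default lint fires unless silenced,
    // which macro-generated code may want to do.
    #[allow(trivial_constraints)]
    impl Foo for Bar
    where
        i32: Iterator, // trivially false
    {
    }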

    How We Teach This

    This feature does not need to be taught explicitly. Knowing the basic rules of where clauses, one would naturally already expect this to work.

    Drawbacks

    • The changes to the compiler could potentially increase complexity quite a bit

    Alternatives

    n/a

    Unresolved questions

    Should the lint error by default instead of warn?

    Summary

    Add dedicated methods to RefCell for replacing and swapping the contents. These functions will panic if the RefCell is currently borrowed, but will otherwise behave exactly like their cousins on Cell.

    Motivation

    The main problem this intends to solve is that doing a replace by hand looks like this:

    let old_version = replace(&mut *some_refcell.borrow_mut(), new_version);

    One of the most important parts of the ergonomics initiative has been reducing “type tetris” exactly like that &mut *.

    It also seems weird that this use-case is so much cleaner with a plain Cell, even though plain Cell is strictly a less powerful abstraction. Usually, people explain RefCell as being a superset of Cell, but RefCell doesn’t actually offer all of the functionality as seamlessly as Cell.

    Detailed design

    impl<T> RefCell<T> {
      /// Replaces the wrapped value with `t`, returning the old value.
      /// Panics if the `RefCell` is currently borrowed.
      pub fn replace(&self, t: T) -> T {
          mem::replace(&mut *self.borrow_mut(), t)
      }

      /// Swaps the wrapped values of `self` and `other`.
      /// Panics if either `RefCell` is currently borrowed.
      pub fn swap(&self, other: &Self) {
          mem::swap(&mut *self.borrow_mut(), &mut *other.borrow_mut())
      }
    }
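
    As a usage sketch (these methods mirror their Cell counterparts and panic if a borrow is active at the time of the call):

    use std::cell::RefCell;

    fn main() {
        let cell = RefCell::new(5);

        // Replace the contents, getting the old value back.
        assert_eq!(cell.replace(6), 5);

        // Swap the contents of two RefCells.
        let other = RefCell::new(10);
        cell.swap(&other);
        assert_eq!(*cell.borrow(), 10);
        assert_eq!(*other.borrow(), 6);

        // Either method would panic if a borrow were live here:
        // let guard = cell.borrow();
        // cell.replace(0); // panic: already borrowed
    }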

    How We Teach This

    The nicest aspect of this is that it maintains this story behind Cell and RefCell:

    RefCell supports everything that Cell does. However, it has runtime overhead, and it can panic.

    Drawbacks

    Depending on how we want people to use RefCell, this RFC might be removing deliberate syntactic vinegar. For example, if RefCell is used to protect a counter:

    let mut counter_ref = counter.borrow_mut();
    *counter_ref += 1;
    do_some_work();
    *counter_ref -= 1;

    In this case, if do_some_work() tries to modify counter, it will panic. Since Rust tends to value explicitness over implicitness exactly because it can surface bugs, this code is conceptually more dangerous:

    counter.replace(counter.replace(0) + 1);
    do_some_work();
    counter.replace(counter.replace(0) - 1);

    Also, we’re adding more specific functions to a core type. That comes with cost in documentation and maintenance.

    Alternatives

    Besides just-write-the-reborrow, these functions can also be put in a separate crate with an extension trait. This has all the disadvantages that two-line libraries usually have:

    • They tend to have low discoverability.
    • They put strain on auditing.
    • The hassle of adding an import and a Cargo.toml line is as high as just writing the reborrow.

    The other alternative, as far as getting rid of the reborrow goes, is to change the language so that it does the reborrow implicitly. That alternative is massively more general, but it also has knock-on effects throughout the rest of the language, and it still doesn’t do anything about the asymmetry between Cell and RefCell.

    Unresolved questions

    Should we add RefCell::get() and RefCell::set()? The equivalent versions written with borrow/borrow_mut and clone aren’t as noisy, since the reborrowing happens implicitly when clone is called as a method, but adding them would bring us all the way to RefCell-as-a-Cell-superset.
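
    For reference, hypothetical signatures for those methods might look like the following sketch (not part of this RFC’s proposal; they would live in std next to replace and swap):

    impl<T: Clone> RefCell<T> {
        /// Returns a clone of the wrapped value.
        /// Panics if the `RefCell` is mutably borrowed.
        pub fn get(&self) -> T {
            self.borrow().clone()
        }
    }

    impl<T> RefCell<T> {
        /// Overwrites the wrapped value, dropping the old one.
        /// Panics if the `RefCell` is currently borrowed.
        pub fn set(&self, t: T) {
            *self.borrow_mut() = t;
        }
    }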

    Summary

    Provide a stable mechanism to specify the behavior of panic! in no-std applications.

    Motivation

    The #![no_std] attribute was stabilized some time ago and made it possible to build no-std libraries on stable. However, to this day no-std applications still require a nightly compiler to be built. The main cause of this is that the behavior of panic! is left undefined in no-std context, and the only way to specify a panicking behavior is through the unstable panic_fmt language item.

    This document proposes a stable mechanism to specify the behavior of panic! in no-std context. This would be a step towards enabling development of no-std applications like device firmware, kernels and operating systems on the stable channel.

    Detailed design

    Constraints

    panic! in no-std environments must continue to be free of memory allocations and its API can only be changed in a backward compatible way.

    Although not a hard constraint, the cognitive load of the mechanism would be greatly reduced if it mimicked the existing custom panic hook mechanism as much as possible.

    PanicInfo

    The types std::panic::PanicInfo and std::panic::Location will be moved into the core crate, and PanicInfo will gain a new method:

    impl PanicInfo {
        pub fn message(&self) -> Option<&fmt::Arguments> { .. }
    }

    This method returns Some if the panic! invocation needs to do any formatting, as panic!("{}: {}", key, value) does.
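
    For example, a handler could branch on it as in this sketch (assuming the API above; handler is a hypothetical function):

    use core::panic::PanicInfo;

    fn handler(pi: &PanicInfo) {
        if let Some(args) = pi.message() {
            // e.g. `panic!("{}: {}", key, value)`: `args` is the not yet
            // formatted `fmt::Arguments` value, usable without allocating.
            let _ = args;
        }
    }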

    fmt::Display

    For convenience, PanicInfo will gain an implementation of the fmt::Display trait that produces a message very similar to the one that the standard panic! hook produces. For instance, this program:

    use std::panic::{self, PanicInfo};
    
    fn panic_handler(pi: &PanicInfo) {
        println!("the application {}", pi);
    }
    
    fn main() {
        panic::set_hook(Box::new(panic_handler));
    
        panic!("Hello, {}!", "world");
    }

    Would print:

    $ cargo run
    the application panicked at 'Hello, world!', src/main.rs:27:4
    

    #[panic_implementation]

    A #[panic_implementation] attribute will be added to the language. This attribute can be used to specify the behavior of panic! in no-std context. Only functions with signature fn(&PanicInfo) -> ! can be annotated with this attribute, and only one item can be annotated with this attribute in the whole dependency graph of a crate.

    Here’s an example of how to replicate the panic messages one gets on std programs on a no-std program:

    use core::fmt::Write;
    use core::panic::PanicInfo;
    
    // prints: "program panicked at 'reason', src/main.rs:27:4"
    #[panic_implementation]
    fn my_panic(pi: &PanicInfo) -> ! {
        // `MY_STDERR` (some `fmt::Write` implementor) and `abort` are
        // assumed to be provided elsewhere by the application.
        let _ = writeln!(&MY_STDERR, "program {}", pi);
    
        abort()
    }

    The #[panic_implementation] item will roughly expand to:

    fn my_panic(pi: &PanicInfo) -> ! {
        // same as before
    }
    
    // Generated by the compiler
    // This will always use the correct ABI and will work on the stable channel
    #[lang = "panic_fmt"]
    #[no_mangle]
    pub extern fn rust_begin_panic(msg: ::core::fmt::Arguments,
                                   file: &'static str,
                                   line: u32,
                                   col: u32) -> ! {
        my_panic(&PanicInfo::__private_unstable_constructor(msg, file, line, col))
    }

    Payload

    The core version of the panic! macro will gain support for payloads, as in panic!(42). When invoked with a payload PanicInfo.payload() will return the payload as an &Any trait object just like it does in std context with custom panic hooks.

    When using core::panic! with formatting, e.g. panic!("{}", 42), the payload will be uninspectable: it won’t be downcastable to any known type. This is where core::panic! diverges from std::panic!. The latter returns a String, behind the &Any trait object, from the payload() method in this situation.
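
    To illustrate the divergence, a sketch assuming the payload API described above (inspect is a hypothetical helper):

    use core::panic::PanicInfo;

    fn inspect(pi: &PanicInfo) {
        // `core::panic!(42)`: the payload downcasts, as it would in std.
        if let Some(n) = pi.payload().downcast_ref::<i32>() {
            let _ = n;
        }

        // `core::panic!("{}", 42)`: in core the payload is opaque and every
        // downcast returns `None`; in std it would have been a `String`
        // behind the `&Any` trait object.
        assert!(pi.payload().downcast_ref::<&str>().is_none());
    }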

    Feature gate

    The initial implementation of the #[panic_implementation] mechanism as well as the core::panic::Location and core::panic::PanicInfo types will be feature gated. std::panic::Location and std::panic::PanicInfo will continue to be stable except for the new PanicInfo.message method.

    Unwinding

    The #[panic_implementation] mechanism can only be used with no-std applications compiled with -C panic=abort. Applications compiled with -C panic=unwind additionally require the eh_personality language item which this proposal doesn’t cover.

    std::panic!

    This proposal doesn’t affect how the selection of the panic runtime in std applications works (panic_abort, panic_unwind, etc.). Using #[panic_implementation] in std programs will cause a compiler error.

    How We Teach This

    Currently, no-std applications are only possible on nightly so there’s not much official documentation on this topic given its dependency on several unstable features. Hopefully once no-std applications are minimally possible on stable we can have a detailed chapter on the topic in “The Rust Programming Language” book. In the meantime, this feature can be documented in the unstable book.

    Drawbacks

    Slight deviation from std

    Although both #[panic_implementation] (no-std) and custom panic hooks (std) use the same PanicInfo type, the behavior of the PanicInfo.payload() method changes depending on the context in which it is used: given panic!("{}", 42), payload() will return a String, behind an Any trait object, in std context, but an opaque Any trait object in no-std context.

    Alternatives

    Not doing this

    Not providing a stable alternative to the panic_fmt language item means that no-std applications will continue to be tied to the nightly channel.

    Two PanicInfo types

    An alternative design is to have two different PanicInfo types, one in core and one in std. The difference between these two types would be in their APIs:

    // core
    impl PanicInfo {
        pub fn location(&self) -> Option<Location> { .. }
        pub fn message(&self) -> Option<&fmt::Arguments> { .. }
    
        // Not available
        // pub fn payload(&self) -> &(Any + Send) { .. }
    }
    
    // std
    impl PanicInfo {
        pub fn location(&self) -> Option<Location> { .. }
        pub fn message(&self) -> Option<&fmt::Arguments> { .. }
        pub fn payload(&self) -> &(Any + Send) { .. }
    }

    In this alternative design the signature of the #[panic_implementation] function would be enforced to be fn(&core::panic::PanicInfo) -> !. Custom panic hooks will continue to use the std::panic::PanicInfo type.

    This design precludes supporting payloads in core::panic! but also eliminates the difference between core::PanicInfo.payload() in no-std vs std by eliminating the method in the former context.

    Unresolved questions

    fmt::Display

    Should the Display of PanicInfo format the panic information as "panicked at 'reason', src/main.rs:27:4", as "'reason', src/main.rs:27:4", or simply as "reason"?

    Unwinding in no-std

    Is this design compatible, or can it be extended to work, with unwinding implementations for no-std environments?

    Summary

    Add the ability to create named existential types and support impl Trait in let, const, and static declarations.

    // existential types
    existential type Adder: Fn(usize) -> usize;
    fn adder(a: usize) -> Adder {
        |b| a + b
    }
    
    // existential type in associated type position:
    struct MyType;
    impl Iterator for MyType {
        existential type Item: Debug;
        fn next(&mut self) -> Option<Self::Item> {
            Some("Another item!")
        }
    }
    
    // `impl Trait` in `let`, `const`, and `static`:
    
    const ADD_ONE: impl Fn(usize) -> usize = |x| x + 1;
    static MAYBE_PRINT: Option<impl Fn(usize)> = Some(|x| println!("{}", x));
    fn my_func() {
        let iter: impl Iterator<Item = i32> = (0..5).map(|x| x * 5);
        ...
    }

    Motivation

    This RFC proposes two expansions to Rust’s impl Trait feature. impl Trait, first introduced in RFC 1522, allows functions to return types which implement a given trait, but whose concrete type remains anonymous. impl Trait was expanded upon in RFC 1951, which added impl Trait to argument position and resolved questions around syntax and parameter scoping. In its current form, the feature makes it possible for functions to return unnameable or complex types such as closures and iterator combinators. impl Trait also allows library authors to hide the concrete type returned by a function, making it possible to change the return type later on.

    However, the current feature has some severe limitations. Right now, it isn’t possible to return an impl Trait type from a trait implementation. This is a huge restriction which this RFC fixes by making it possible to create a named existential type:

    // `impl Trait` in traits:
    struct MyStruct;
    impl Iterator for MyStruct {
    
        // Here we can declare an associated type whose concrete type is hidden
        // to other modules.
        //
        // External users only know that `Item` implements the `Debug` trait.
        existential type Item: Debug;
    
        fn next(&mut self) -> Option<Self::Item> {
            Some("hello")
        }
    }

    This syntax allows us to declare multiple items which refer to the same existential type:

    // Type `Foo` refers to a type that implements the `Debug` trait.
    // The concrete type to which `Foo` refers is inferred from this module,
    // and this concrete type is hidden from outer modules (but not submodules).
    pub existential type Foo: Debug;
    
    const FOO: Foo = 5;
    
    // This function can be used by outer modules to manufacture an instance of
    // `Foo`. Other modules don't know the concrete type of `Foo`,
    // so they can't make their own `Foo`s.
    pub fn get_foo() -> Foo {
        5
    }
    
    // We know that the argument and return value of `get_larger_foo` must be the
    // same type as is returned from `get_foo`.
    pub fn get_larger_foo(x: Foo) -> Foo {
        let x: i32 = x;
        x + 10
    }
    
    // Since we know that all `Foo`s have the same (hidden) concrete type, we can
    // write a function which returns `Foo`s acquired from different places.
    fn one_of_the_foos(which: usize) -> Foo {
        match which {
            0 => FOO,
            1 => foo1(),
            2 => foo2(),
            3 => opt_foo().unwrap(),
    
            // It also allows us to make recursive calls to functions with an
            // `impl Trait` return type:
            x => one_of_the_foos(x - 4),
        }
    }

    Separately, this RFC adds the ability to store an impl Trait type in a let, const or static. This makes const and static declarations more concise, and makes it possible to store types such as closures or iterator combinators in consts and statics.

    In a future world where const fn has been expanded to trait functions, one could imagine iterator constants such as this:

    const THREES: impl Iterator<Item = i32> = (0..).map(|x| x * 3);

    Since the type of THREES contains a closure, it is impossible to write down. The const/static type annotation elision RFC has suggested one possible solution: that RFC proposes to let users omit the types of consts and statics entirely. However, in some cases, completely omitting the types of const and static items could make it harder to tell what sort of value is being stored in a const or static. Allowing impl Trait in consts and statics would resolve the unnameable type issue while still allowing users to provide some information about the type.

    Guide-Level Explanation

    Guide: impl Trait in let, const and static:

    impl Trait can be used in let, const, and static declarations, like this:

    use std::fmt::Display;
    
    let displayable: impl Display = "Hello, world!";
    println!("{}", displayable);

    Declaring a variable of type impl Trait will hide its concrete type. This is useful for declaring a value which implements a trait, but whose concrete type might change later on. In our example above, this means that, while we can “display” the value of displayable, the concrete type &str is hidden:

    use std::fmt::Display;
    
    // Without `impl Trait`:
    const DISPLAYABLE: &str = "Hello, world!";
    fn display() {
        println!("{}", DISPLAYABLE);
        assert_eq!(DISPLAYABLE.len(), 13);
    }
    
    // With `impl Trait`:
    const DISPLAYABLE: impl Display = "Hello, world!";
    
    fn display() {
        // We know `DISPLAYABLE` implements `Display`.
        println!("{}", DISPLAYABLE);
    
        // ERROR: no method `len` on `impl Display`
        // We don't know the concrete type of `DISPLAYABLE`,
        // so we don't know that it has a `len` method.
        assert_eq!(DISPLAYABLE.len(), 13);
    }

    impl Trait declarations are also useful when declaring constants or statics whose types are impossible to name, like closures:

    // Without `impl Trait`, we can't declare this constant because we can't
    // write down the type of the closure.
    const MY_CLOSURE: ??? = |x| x + 1;
    
    // With `impl Trait`:
    const MY_CLOSURE: impl Fn(i32) -> i32 = |x| x + 1;

    Finally, note that impl Trait let declarations hide the concrete types of local variables:

    let displayable: impl Display = "Hello, world!";
    
    // We know `displayable` implements `Display`.
    println!("{}", displayable);
    
    // ERROR: no method `len` on `impl Display`
    // We don't know the concrete type of `displayable`,
    // so we don't know that it has a `len` method.
    assert_eq!(displayable.len(), 13);

    At first glance, this behavior doesn’t seem particularly useful. Indeed, impl Trait in let bindings exists mostly for consistency with consts and statics. However, it can be useful for documenting the specific ways in which a variable is used. It can also be used to provide better error messages for complex, nested types:

    // Without `impl Trait`:
    let x = (0..100).map(|x| x * 3).filter(|x| x % 5 == 0);
    
    // ERROR: no method named `bogus_missing_method` found for type
    // `std::iter::Filter<std::iter::Map<std::ops::Range<{integer}>, [closure@src/main.rs:2:26: 2:35]>, [closure@src/main.rs:2:44: 2:58]>` in the current scope
    x.bogus_missing_method();
    
    // With `impl Trait`:
    let x: impl Iterator<Item = i32> = (0..100).map(|x| x * 3).filter(|x| x % 5 == 0);
    
    // ERROR: no method named `bogus_missing_method` found for type
    // `impl std::iter::Iterator` in the current scope
    x.bogus_missing_method();

    Guide: Existential types

    Rust allows users to declare existential types. An existential type allows you to give a name to a type without revealing exactly what type is being used.

    use std::fmt::Debug;
    
    existential type Foo: Debug;
    
    fn foo() -> Foo {
        5i32
    }

    In the example above, Foo refers to i32, similar to a type alias. However, unlike a normal type alias, the concrete type of Foo is hidden outside of the module. Outside the module, the only thing that is known about Foo is that it implements the traits that appear in its declaration (e.g. Debug in existential type Foo: Debug;). If a user outside the module tries to use a Foo as an i32, they will see an error:

    use std::fmt::Debug;
    
    mod my_mod {
      pub existential type Foo: Debug;
    
      pub fn foo() -> Foo {
          5i32
      }
    
      pub fn use_foo_inside_mod() -> Foo {
          // Creates a variable `x` of type `i32`, which is equal to type `Foo`
          let x: i32 = foo();
          x + 5
      }
    }
    
    fn use_foo_outside_mod() {
        // Creates a variable `x` of type `Foo`, which is only known to implement `Debug`
        let x = my_mod::foo();
    
        // Because we're outside `my_mod`, the user cannot determine the type of `Foo`.
        let y: i32 = my_mod::foo(); // ERROR: expected type `i32`, found existential type `Foo`
    
        // However, the user can use its `Debug` impl:
        println!("{:?}", x);
    }

    This makes it possible to write modules that hide their concrete types from the outside world, allowing them to change implementation details without affecting consumers of their API.

    Note that it is sometimes necessary to manually specify the concrete type of an existential type, as in let x: i32 = foo(); above. This assists the compiler in locally inferring the concrete type of Foo within the defining module.

    One particularly noteworthy use of existential types is in trait implementations. With this feature, we can declare associated types as follows:

    struct MyType;
    impl Iterator for MyType {
        existential type Item: Debug;
        fn next(&mut self) -> Option<Self::Item> {
            Some("Another item!")
        }
    }

    In this trait implementation, we’ve declared that the item returned by our iterator implements Debug, but we’ve kept its concrete type (&'static str) hidden from the outside world.

    We can even use this feature to specify unnameable associated types, such as closures:

    struct MyType;
    impl Iterator for MyType {
        existential type Item: Fn(i32) -> i32;
        fn next(&mut self) -> Option<Self::Item> {
            Some(|x| x + 5)
        }
    }

    Existential types can also be used to reference unnameable types in a struct definition:

    existential type Foo: Debug;
    fn foo() -> Foo { 5i32 }
    
    struct ContainsFoo {
        some_foo: Foo
    }

    It’s also possible to write generic existential types:

    #[derive(Debug)]
    struct MyStruct<T: Debug> {
        inner: T
    }
    
    existential type Foo<T>: Debug;
    
    fn get_foo<T: Debug>(x: T) -> Foo<T> {
        MyStruct {
            inner: x
        }
    }

    Similarly to impl Trait under RFC 1951, existential type implicitly captures all generic type parameters in scope. In practice, this means that existential associated types may contain generic parameters from their impl:

    struct MyStruct;
    trait Foo<T> {
        type Bar;
        fn bar() -> Self::Bar;
    }
    
    impl<T> Foo<T> for MyStruct {
        existential type Bar: Trait;
        fn bar() -> Self::Bar {
            ...
            // Returns some type MyBar<T>
        }
    }

    However, as in 1951, lifetime parameters must be explicitly annotated.

    Reference-Level Explanation

    Reference: impl Trait in let, const and static:

    The rules for impl Trait values in let, const, and static declarations work mostly the same as impl Trait return values as specified in RFC 1951.

    These values hide their concrete type and can only be used as a value which is known to implement the specified traits. They inherit any type parameters in scope. One difference from impl Trait return types is that they also inherit any lifetime parameters in scope. This is necessary in order for let bindings to use impl Trait. let bindings often contain references which last for anonymous scope-based lifetimes, and annotating these lifetimes manually would be impossible.
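
    As a sketch of why this matters (using this RFC’s rules; r borrows x for an anonymous lifetime that could not be written out by hand):

    use std::fmt::Display;

    fn show(x: &i32) {
        // The `impl Display` binding must implicitly capture the anonymous
        // lifetime of the borrow; there is no way to name it explicitly.
        let r: impl Display = x;
        println!("{}", r);
    }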

    Reference: Existential Types

    Existential types are similar to normal type aliases, except that their concrete type is determined from the scope in which they are defined (usually a module or a trait impl). For example, the following code has to examine the body of foo in order to determine that the concrete type of Foo is i32:

    existential type Foo: Debug;
    
    fn foo() -> Foo {
        5i32
    }

    Foo can be used as i32 in multiple places throughout the module. However, each function that uses Foo as i32 must independently place constraints upon Foo such that it must be i32:

    fn add_to_foo_1(x: Foo) {
        x + 1 // ERROR: binary operation `+` cannot be applied to existential type `Foo`
    //  ^ `x` here is type `Foo`.
    //    Type annotations needed to resolve the concrete type of `x`.
    //    (^ This particular error should only appear within the module in which
    //      `Foo` is defined)
    }
    
    fn add_to_foo_2(x: Foo) {
        let x: i32 = x;
        x + 1
    }
    
    fn return_foo(x: Foo) -> Foo {
        // This is allowed.
        // We don't need to know the concrete type of `Foo` for this function to
        // typecheck.
        x
    }

    Each existential type declaration must be constrained by at least one function body or const/static initializer. A body or initializer must either fully constrain or place no constraints upon a given existential type.

    Outside of the module, existential types behave the same way as impl Trait types: their concrete type is hidden from code outside the module. However, it can be assumed that two values of the same existential type are actually values of the same type:

    mod my_mod {
        pub existential type Foo: Debug;
        pub fn foo() -> Foo {
            5i32
        }
        pub fn bar() -> Foo {
            10i32
        }
        pub fn baz(x: Foo) -> Foo {
            let x: i32 = x;
            x + 5
        }
    }
    
    fn outside_mod() -> my_mod::Foo {
        if true {
            my_mod::foo()
        } else {
            my_mod::baz(my_mod::bar())
        }
    }

    One last difference between existential type aliases and normal type aliases is that existential type aliases cannot be used in impl blocks:

    existential type Foo: Debug;
    impl Foo { // ERROR: `impl` cannot be used on existential type aliases
        ...
    }
    impl MyTrait for Foo { // ERROR ^
        ...
    }

    While this feature may be added at some point in the future, it’s unclear exactly what behavior it should have. Should it result in implementations of functions and traits on the underlying type? It seems like the answer should be “no”, since doing so would give away the underlying type being hidden beneath the impl. Still, some version of this feature could be used eventually to implement traits or functions for closures, or to express conditional bounds in existential type signatures (e.g. existential type Foo<T>: Debug; impl<T: Clone> Clone for Foo<T> { ... }). This is a complicated design space which has not yet been explored fully enough. In the future, such a feature could be added backwards-compatibly.

    Drawbacks

    This RFC proposes the addition of a complicated feature that will take time for Rust developers to learn and understand. There are potentially simpler ways to achieve some of the goals of this RFC, such as making impl Trait usable in traits. This RFC instead introduces a more complicated solution in order to allow for increased expressiveness and clarity.

    This RFC makes impl Trait feel even more like a type by allowing it in more locations where formerly only concrete types were allowed. However, there are other places such a type can appear where impl Trait cannot, such as impl blocks and struct definitions (i.e. struct Foo { x: impl Trait }). This inconsistency may be surprising to users.

    Alternatives

    We could instead expand impl Trait in a more focused but limited way, such as specifically extending impl Trait to work in traits without allowing full existential type aliases. A draft RFC for such a proposal can be seen here. Any such feature could, in the future, be added as essentially syntax sugar on top of this RFC, which is strictly more expressive. The current RFC will also help us to gain experience with how people use existential type aliases in practice, allowing us to resolve some remaining questions in the linked draft, specifically around how impl Trait associated types are used.

    Throughout the process we have considered a number of alternative syntaxes for existential types. The syntax existential type Foo: Trait; is intended to be a placeholder for a more concise and accessible syntax, such as abstract type Foo: Trait;. A variety of variations on this theme have been considered:

    • Instead of abstract type, it could be some single keyword like abstype.
    • We could use a different keyword from abstract, like opaque or exists.
    • We could omit a keyword altogether and use type Foo: Trait; syntax (outside of trait definitions).

    A more divergent alternative is not to have an “existential type” feature at all, but instead just have impl Trait be allowed in type alias position. Everything written existential type $NAME: $BOUND; in this RFC would instead be written type $NAME = impl $BOUND;.

    This RFC opted to avoid the type Foo = impl Trait; syntax because of its potential teaching difficulties. As a result of RFC 1951, impl Trait is sometimes universal quantification and sometimes existential quantification. By providing a separate syntax for “explicit” existential quantification, impl Trait can be taught as a syntactic sugar for generics and existential types. By “just using impl Trait” for named existential type declarations, there would be no desugaring-based explanation for all forms of impl Trait.

    This choice has some disadvantages in comparison to impl Trait in type aliases:

    • We introduce another new syntax on top of impl Trait, which inherently has some costs.
    • Users can’t use it in a nested fashion without creating an additional existential type.

    Because of these downsides, we are open to reconsidering this question with more practical experience, and the final syntax is left as an unresolved question for the RFC.

    Unresolved questions

    As discussed in the alternatives section above, we will need to reconsider the optimal syntax before stabilizing this feature.

    Additionally, the following extensions should be considered in the future:

    • Conditional bounds. Even with this proposal, there’s no way to specify the impl Trait bounds necessary to implement traits like Iterator, which have functions whose return types implement traits conditional on the input, e.g. fn foo<T>(x: T) -> impl Clone if T: Clone.
    • Associated-type-less impl Trait in trait declarations and implementations, such as the proposal mentioned in the alternatives section. As mentioned above, this feature would be strictly less expressive than this RFC. The more general feature proposed in this RFC would help us to define a better version of this alternative which could be added in the future.
    • A more general form of inference for impl Trait type aliases. This RFC forces each function to either fully constrain or place no constraints upon an impl Trait type. It’s possible to allow some partial constraints through a process like the one described in this comment. However, these partial bounds present implementation concerns, so they have been removed from this RFC. If it turns out that partial bounds would be greatly useful in practice, they can be added backwards-compatibly in a future RFC.

    Summary

    Currently, when an if let statement uses an irrefutable pattern (read: always matches), the compiler complains with E0162: irrefutable if-let pattern. This breaks macros that want to accept patterns generically; this RFC proposes changing the error to an error-by-default lint which such macros are allowed to disable.

    Motivation

    The use case for this is in the creation of macros that accept patterns: to support the _ pattern, the code currently has to be rewritten to be both much larger and to include an #[allow] attribute for a lint that does not seem related to the problem. The expected outcome is for irrefutable patterns to be compiled to a tautology and have the if block accept it as if it were if true { }. Today, to support this, you must write roughly the following, which seems to counteract the benefit of having if-let and while-let in the language.

    #[allow(unreachable_patterns)]
    match $val {
        $p => { $b; },
        _ => ()
    }

    The following cannot be used today, so the previous workaround must be. Under this RFC, an #[allow(irrefutable_let_pattern)] attribute would be used so that the error-by-default lint does not surface to the user.

    if let $p = $val {
        $b
    }

    Detailed design

    1. Change the compiler error irrefutable if-let-pattern and similar patterns to an error-by-default lint that can be disabled by an #[allow] attribute
    2. Proposed lint name: irrefutable_let_pattern

    Code Example (explicit):

    #[allow(irrefutable_let_pattern)]
    if let _ = 'a' {
        println!("Hello World");
    }

    Code Example (implicit):

    macro_rules! check_five {
        ($p:pat) => {{
            #[allow(irrefutable_let_pattern)]
            if let $p = 5 {
                println!("Pattern matches five");
            }
        }};
    }
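
    A usage sketch, assuming the lint from this RFC:

    fn main() {
        check_five!(5); // refutable literal pattern: matches and prints
        check_five!(_); // irrefutable pattern: compiles thanks to the
                        // `#[allow]` inside the macro, and always matches
    }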

    How We Teach This

    This can be taught by changing the second edition of The Book so that it no longer says this is not allowed, noting instead that it is an error-by-default lint which can be disabled.

    Drawbacks

    It allows programmers to manually write the line if let _ = expr { } else { } which is generally obfuscating and not desirable. However, this will only be allowed with an explicit #[allow(irrefutable_let_pattern)].

    Alternatives

    • The trivial alternative: do nothing. As the motivation explains, this only matters for macros anyway, and there already is an acceptable workaround (match). Code that needs this frequently can just package the workaround in its own macro and be done; a sketch of such a macro follows.
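
    A sketch of such a packaged workaround (if_let_irrefutable is a hypothetical name):

    // An `if let` that tolerates irrefutable patterns by expanding to the
    // `match` workaround from the motivation.
    macro_rules! if_let_irrefutable {
        ($p:pat = $val:expr => $body:block) => {
            #[allow(unreachable_patterns)]
            match $val {
                $p => $body,
                _ => (),
            }
        };
    }

    fn main() {
        if_let_irrefutable!(_ = 'a' => { println!("always runs"); });
    }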

    Unresolved questions

    Summary

    Eliminate the need for “redundant” bounds on functions and impls where those bounds can be inferred from the input types and other trait bounds. For example, in this simple program, the impl would no longer require a bound, because it can be inferred from the Foo<T> type:

    struct Foo<T: Debug> { .. }
    impl<T: Debug> Foo<T> {
      //    ^^^^^ this bound is redundant
      ...
    }

    Hence, simply writing impl<T> Foo<T> { ... } would suffice. We currently support implied bounds for lifetime bounds, super traits and projections. We propose to extend this to all where clauses on traits and types, as was already discussed here.

    Motivation

    Types

    Let’s take an example from the standard library where trait bounds are actually expressed on a type¹.

    pub enum Cow<'a, B: ?Sized + 'a>
        where B: ToOwned
    {
        Borrowed(&'a B),
        Owned(<B as ToOwned>::Owned),
    }

    The ToOwned bound has then to be carried everywhere:

    impl<'a, B: ?Sized> Cow<'a, B>
        where B: ToOwned
    {
        ...
    }
    
    impl<'a, B: ?Sized> Clone for Cow<'a, B>
        where B: ToOwned
    {
        ...
    }
    
    impl<'a, B: ?Sized + Eq> Eq for Cow<'a, B>
        where B: ToOwned
    {
        ...
    }

    even if one does not actually care about the semantics implied by ToOwned:

        fn panic_if_not_borrowed<'a, B>(cow: Cow<'a, B>) -> &'a B
    //      where B: ToOwned
        {
            match cow {
                Cow::Borrowed(b) => b,
                Cow::Owned(_) => panic!(),
            }
        }
    //  ^ the trait `std::borrow::ToOwned` is not implemented for `B`

    However, what we know is that if Cow<'a, B> is well-formed, then B has to implement ToOwned. We would say that such a bound is implied by the well-formedness of Cow<'a, B>.

    Currently, impls and functions have to prove that their arguments are well-formed. Under this proposal, they would assume that their arguments are well-formed, leaving the responsibility for proving well-formedness to the caller. Hence we would be able to drop the B: ToOwned bounds in the previous examples.

    Besides reducing repeated constraints, it would also provide a clearer separation between the bounds a type needs in order to be well-formed, and the additional bounds an fn or an impl actually needs:

    struct Set<K> where K: Hash + Eq { ... }
    
    fn only_clonable_set<K: Hash + Eq + Clone>(set: Set<K>) { ... }
    
    // VS
    
    fn only_clonable_set<K: Clone>(set: Set<K>) { ... }

    Moreover, we already support implied lifetime bounds on types:

    pub struct DebugStruct<'a, 'b> where 'b: 'a {
        fmt: &'a mut fmt::Formatter<'b>,
        ...
    }
    
    pub fn debug_struct_new<'a, 'b>(fmt: &'a mut fmt::Formatter<'b>, name: &str) -> DebugStruct<'a, 'b>
    //  where 'b: 'a
    //  ^^^^^^^^^^^^  this is not needed
    {
        /* inside here: assume that `'b: 'a` */
    }

    This RFC proposes to extend this sort of logic beyond these special cases and use it uniformly for both trait bounds and lifetime bounds.

    ¹Actually only a few types in the standard library have bounds, for example HashSet<T> does not have a T: Hash + Eq on the type declaration, but on the impl declaration rather. Whether we should prefer bounds on types or on impls is related, but beyond the scope of this RFC.

    Traits

    Traits also currently support some form of implied bounds, namely super trait bounds:

    // Equivalent to `trait Foo where Self: From<Bar>`.
    trait Foo: From<Bar> { }
    
    pub fn from_bar<T: Foo>(bar: Bar) -> T {
        // `T: From<Bar>` is implied by `T: Foo`.
        T::from(bar)
    }

    and bounds on projections:

    // Equivalent to `trait Foo where Self::Item: Eq`.
    trait Foo {
        type Item: Eq;
    }
    
    fn only_eq<T: Eq>() { }
    
    fn foo<T: Foo>() {
        // `T::Item: Eq` is implied by `T: Foo`.
        only_eq::<T::Item>()
    }

    However, this example does not compile:

        trait Foo<U> where U: Eq { }
    
        fn only_eq<U: Eq>() { }
    
        fn foo<U, T: Foo<U>>() {
            only_eq::<U>()
        }
    //  ^ the trait `std::cmp::Eq` is not implemented for `U`

    Again we propose to uniformly support implied bounds for all where clauses on trait definitions.

    Guide-Level Explanation

    When you declare bounds on a type, you don’t have to repeat them when writing impls and functions, as long as the type appears in the signature or the impl header:

    struct Set<T> where T: Hash + Eq {
        ...
    }
    
    impl<T> Set<T> {
        // You can rely on the fact that `T: Hash + Eq` inside here.
        ...
    }
    
    impl<T> Clone for Set<T> where T: Clone {
        // Same here, and you can also rely on the `T: Clone` bound of course.
        ...
    }
    
    fn only_eq<U: Eq>() { }
    
    fn use_my_set<T>(arg: Set<T>) {
        // We know that `T: Eq` because we have a `Set<T>` as an argument, and there already is a
        // `T: Eq` bound on the declaration of `Set`.
        only_eq::<T>();
    }
    
    // This also works for the return type: no need to repeat bounds.
    fn return_a_set<T>() -> Set<T> {
        Set::new()
    }

    Lifetime bounds are supported as well (this is already the case today):

    struct MyStruct<'a, 'b> where 'b: 'a {
        reference: &'a &'b i32,
    }
    
    fn use_my_struct<'a, 'b>(arg: MyStruct<'a, 'b>) {
        // No need to repeat `where 'b: 'a`, it is assumed.
    }

    However, you still have to write the bounds explicitly if the type does not appear in the function signature or the impl header:

    // `Set<T>` does not appear in the fn signature: we need to explicitly write the bounds.
    fn declare_a_set<T: Hash + Eq>() {
        let set = Set::<T>::new();
    }

    Similarly, you don’t have to repeat bounds that you write on a trait declaration as soon as you know that the trait reference holds:

    trait Foo where Bar: Into<Self> {
        ...
    }
    
    fn into_foo<T: Foo>(bar: Bar) -> T {
        // We know that `T: Foo` holds so given the trait declaration, we know that `Bar: Into<T>`.
        bar.into()
    }

    Note that this is transitive:

    trait Foo { }
    trait Bar where Self: Foo { }
    trait Baz where Self: Bar { }
    
    fn only_foo<T: Foo>() { }
    
    fn use_baz<T: Baz>() {
        // We know that `T: Baz`, hence we know that `T: Bar`, hence we know that `T: Foo`.
        only_foo::<T>()
    }

    This also works for bounds on associated types:

    trait Foo {
        type Item: Debug;
    }
    
    fn debug_foo<U, T: Foo<Item = U>>(arg: U) {
        // We know that `<T as Foo>::Item` implements `Debug` because of the trait declaration.
        // Moreover, we know that `<T as Foo>::Item` is `U`.
        // Hence, we know that `U` implements `Debug`.
        println!("{:?}", arg);
    
        /* do something else with `T` and `U`... */
    }

    Reference-Level Explanation

    This is the fully-detailed design; you probably don’t need to read everything. This design has already been experimented with in Chalk, to some extent. The current design has been driven by issue #12, which is a good read to understand why we need to expand where clauses as described below.

    We’ll use the grammar from RFC 1214 to detail the rules:

    T = scalar (i32, u32, ...)              // Boring stuff
      | X                                   // Type variable
      | Id<P0, ..., Pn>                     // Nominal type (struct, enum)
      | &r T                                // Reference (mut doesn't matter here)
      | O0 + ... + On + r                   // Object type
      | [T]                                 // Slice type
      | for<r...> fn(T1, ..., Tn) -> T0     // Function pointer
      | <P0 as Trait<P1, ..., Pn>>::Id      // Projection
    P = r                                   // Region name
      | T                                   // Type
    O = for<r...> TraitId<P1, ..., Pn>      // Object type fragment
    r = 'x                                  // Region name
    

    We’ll use the same notations as RFC 1214 for the set R = <r0, ..., rn> denoting the set of lifetimes currently bound.

    Well-formedness rules

    Basically, we say that something (a type or a trait reference) is well-formed if the bounds declared on it are met, regardless of the well-formedness of its parameters: this is the main difference from RFC 1214.

    We will write:

    • WF(T: Trait) for a trait reference T: Trait being well-formed
    • WF(T) for a reference to the type T being well-formed

    Trait refs

    We’ll start with well-formedness for trait references. The important thing is that we distinguish between T: Trait and WF(T: Trait). The former means that an impl for T has been found while the latter means that T meets the bounds on trait Trait.

    We’ll also consider a function Expanded applying on where clauses like this:

    Expanded((T: Trait)) = { (T: Trait), WF(T: Trait) }
    Expanded((T: Trait<Item = U>)) = { (T: Trait<Item = U>), WF(T: Trait) }
    Expanded(OtherWhereClause) = { OtherWhereClause }
    

    We naturally extend Expanded so that it applies on a finite set of where clauses:

    Expanded({ WhereClause1, ..., WhereClauseN }) = Union(Expanded(WhereClause1), ..., Expanded(WhereClauseN))
    

    Every where clause a user writes will be expanded through the Expanded function. This means that the following impl:

    impl<T, U> Into<T> for U where T: From<U> { ... }

    will give the following rule:

     T: From<U>, WF(T: From<U>)
    --------------------------------------------------
     U: Into<T>
    

    Now let’s see the actual rule for a trait reference being well-formed:

    WfTraitReference:
      C = Expanded(WhereClauses(TraitId))   // the conditions declared on TraitId must hold...
      R, r... ⊢ [P0, ..., Pn] C             // ...after substituting parameters, of course
      --------------------------------------------------
      R ⊢ WF(for<r...> P0: TraitId<P1, ..., Pn>)
    

    And here is an example:

    // `WF(Self: SuperTrait)` holds.
    trait SuperTrait { }
    
    // `WF(Self: Trait)` holds if `Self: SuperTrait`, `WF(Self: Supertrait)`.
    trait Trait: SuperTrait { }
    
    // `i32: Trait` holds but not `WF(i32: Trait)`.
    // This would be flagged as an error.
    impl Trait for i32 { }
    
    // Both `f32: Trait` and `WF(f32: Trait)` hold.
    impl SuperTrait for f32 { }
    impl Trait for f32 { }

    Types

    The well-formedness rules for types are given by:

    WfScalar:
      --------------------------------------------------
      R ⊢ WF(scalar)
    
    WfFn:                              // an fn pointer is always WF since it only carries parameters
      --------------------------------------------------
      R ⊢ WF(for<r...> fn(T1, ..., Tn) -> T0)
    
    WfObject:
      rᵢ = union of implied region bounds from Oi
      ∀i. rᵢ: r
      --------------------------------------------------
      R ⊢ WF(O0 + ... + On + r)
    
    WfObjectFragment:
      TraitId is object safe
      --------------------------------------------------
      R ⊢ WF(for<r...> TraitId<P1, ..., Pn>)
    
    WfTuple:
      ∀i<n. R ⊢ Ti: Sized              // the *last* field may be unsized
      --------------------------------------------------
      R ⊢ WF((T1, ... ,Tn))
    
    WfNominalType:
      C = Expanded(WhereClauses(Id))   // the conditions declared on Id must hold...
      R ⊢ [P1, ..., Pn] C              // ...after substituting parameters, of course
      --------------------------------------------------
      R ⊢ WF(Id<P1, ..., Pn>)
    
    WfReference:
      R ⊢ T: 'x                        // T must outlive 'x
      --------------------------------------------------
      R ⊢ WF(&'x T)
    
    WfSlice:
      R ⊢ T: Sized
      --------------------------------------------------
      R ⊢ WF([T])
    
    WfProjection:
      R ⊢ P0: Trait<P1, ..., Pn>       // the trait reference holds
      R ⊢ WF(P0: Trait<P1, ..., Pn>)   // the trait reference is well-formed
      --------------------------------------------------
      R ⊢ WF(<P0 as Trait<P1, ..., Pn>>::Id)
    

    Taking again our SuperTrait and Trait from above, here is an example:

    // `WF(Struct<T>)` holds if `T: Trait`, `WF(T: Trait)`.
    struct Struct<T> where T: Trait {
        field: T,
    }
    
    // `WF(Struct<i32>)` would not hold since `WF(i32: Trait)` doesn't.
    // But `WF(Struct<f32>)` does hold.

    Reverse rules

    This is a core element of this RFC. Morally, the well-formedness rules are “if and only if” rules. We thus add reverse rules for each relevant WF rule:

    ReverseWfTraitReferenceᵢ
      // Substitute parameters
      { WhereClause1, ..., WhereClauseN } = [P0, ..., Pn] Expanded(WhereClauses(TraitId))
      R ⊢ WF(for<r...> P0: TraitId<P1, ..., Pn>)
      --------------------------------------------------
      R, r... ⊢ WhereClauseᵢ
    
    ReverseWfTupleᵢ, i < n:
      R ⊢ WF((T1, ..., Tn))
      --------------------------------------------------
      R ⊢ Ti: Sized   // not very useful since this bound is often implicit
    
    ReverseWfNominalTypeᵢ:
      // Substitute parameters
      { WhereClause1, ..., WhereClauseN } = [P1, ..., Pn] Expanded(WhereClauses(id))
      R ⊢ WF(Id<P1, ..., Pn>)
      --------------------------------------------------
      R ⊢ WhereClauseᵢ
    
    ReverseWfReference:
      R ⊢ WF(&'x T)
      --------------------------------------------------
      R ⊢ T: 'x
    
    ReverseWfSlice:
      R ⊢ WF([T])
      --------------------------------------------------
      R ⊢ T: Sized    // same as above
    

    Note that we add reverse rules for all expanded where clauses; this means that, given:

    // Expands to `trait Bar where Self: Foo, WF(Self: Foo)`
    trait Bar where Self: Foo { }

    we have two reverse rules given by:

    WF(T: Bar)
    --------------------------------------------------
    T: Foo
    
    WF(T: Bar)
    --------------------------------------------------
    WF(T: Foo)
    

    Remark: Reverse rules include implicit Sized bounds on type declarations. However, they do not include (explicit) ?Sized bounds since those are not real trait bounds, but only a way to disable the implicit Sized bound.
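
    For example, under this proposal (a sketch; this does not compile today):

    use std::mem;

    struct Wrapper<T> { value: T } // carries an implicit `T: Sized` bound

    fn size_of_inner<T: ?Sized>(_w: &Wrapper<T>) -> usize {
        // `Wrapper<T>` is an input type, so `WF(Wrapper<T>)` is assumed and
        // its reverse rule recovers `T: Sized`, despite the `?Sized` opt-out
        // on this function.
        mem::size_of::<T>()
    }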

    Input types

    We define the notion of input types of a type. Basically, the input types of a type are all the types syntactically reachable from it. For example, a function will assume that the input types of its arguments are well-formed; hence, in the body of that function, we’ll be able to derive implied bounds thanks to the reverse rules described earlier.

    We’ll denote by InputTypes the function which maps a type to its input types, defined by:

    // Scalar
    InputTypes(scalar) = { scalar }
    
    // Type variable
    InputTypes(X) = { X }
    
    // Region name
    InputTypes(r) = { }
    
    // Reference
    InputTypes(&r T) = Union({ &r T }, InputTypes(T))
    
    // Slice type
    InputTypes([T]) = Union({ [T] }, InputTypes(T))
    
    // Nominal type
    InputTypes(Id<P0, ..., Pn>) = Union({ Id<P0, ..., Pn> }, InputTypes(P0), ..., InputTypes(Pn))
    
    // Object type
    InputTypes(O0 + ... + On + r) = Union({ O0 + ... + On + r }, InputTypes(O0), ..., InputTypes(On))
    
    // Object type fragment
    InputTypes(for<r...> TraitId<P1, ..., Pn>) = { for<r...> TraitId<P1, ..., Pn> }
    
    // Function pointer
    InputTypes(for<r...> fn(T1, ..., Tn) -> T0) = { for<r...> fn(T1, ..., Tn) -> T0 }
    
    // Projection
    InputTypes(<P0 as Trait<P1, ..., Pn>>::Id) = Union(
        { <P0 as Trait<P1, ..., Pn>>::Id },
        InputTypes(P0),
        InputTypes(P1),
        ...,
        InputTypes(Pn)
    )
    

    Note that higher-ranked types (functions, object type fragments) do not carry input types other than themselves. This is because they are unusable as such; one will have to use them in a lower-ranked way at some point (e.g. by calling a function), and will thus rely on InputTypes for normal types.
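
    As a worked example, unfolding the definitions above on a nested type (Set and Box being nominal types and K a type variable):

    InputTypes(&'a Set<Box<K>>) = Union({ &'a Set<Box<K>> }, InputTypes(Set<Box<K>>))
                                = { &'a Set<Box<K>>, Set<Box<K>>, Box<K>, K }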

    Assumptions and checking well-formedness

    This is the other core element: how to use reverse rules. Basically, functions and impls will assume that their input types are well-formed, and that (expanded) where clauses hold.

    Functions

    Given a function declaration:

    fn F<r..., X1, ..., Xn>(arg1: T1, ..., argm: Tm) -> T0 where WhereClause1, ..., WhereClausek {
        /* body of the function inside here */
    }

    We rely on the following assumptions inside the body of F:

    • Expanded({ WhereClause1, ..., WhereClausek })
    • WF(T) for all T ∈ Union(InputTypes(T0), InputTypes(T1), ..., InputTypes(Tm))
    • WF(Xi) for all i

    Note that we assume that the input types of the return type T0 are well-formed.

    With these assumptions, the function must be able to prove that everything that appears in its body is well-formed (e.g. every type appearing in the body, projections, etc).

    Moreover, a caller of F would have to prove that the where clauses on F hold, after having substituted parameters.

    Remark: Notice that we assume that the type variables Xi are well-formed for all i. This way, type variables don’t need a special treatment regarding well-formedness. See example below.

    Examples:

    trait Bar { }
    trait Foo where Box<Self>: Bar { }
    
    fn only_bar<T: Bar>() { }
    
    fn foo<T: Foo>() {
        // Inside the body, we have to prove `WF(T)`, `WF(Box<T>)`, and `Box<T>: Bar`.
        // Because we assume that `WF(T: Foo)`, we indeed have `Box<T>: Bar`.
        only_bar::<Box<T>>()
    }
    
    fn main() {
        // We have to prove `WF(i32)`, `i32: Foo`.
        foo::<i32>();
    }
    /// Illustrate the remark above: no need for a special treatment for type variables.
    
    struct Set<K: Hash> { ... }
    
    fn two_variables<T, U>() { }
    
    fn one_variable<T: Hash>() {
        // We have to prove `WF(T)`, `WF(Set<T>)`. `WF(T)` trivially holds because of the assumption
        // made by the function `one_variable`. `WF(Set<T>)` holds because of the `T: Hash` bound.
        two_variables::<T, Set<T>>()
    }
    
    fn main() {
        // We have to prove `WF(i32)`.
        one_variable::<i32>();
    }
    /// Illustrate "inner" input types and transitivity
    
    trait Bar where Box<Self>: Eq { }
    trait Baz: Bar { }
    
    struct Struct<T: Baz> { ... }
    
    fn only_eq<T: Eq>() { }
    
    fn dummy<T>(arg: Option<Struct<T>>) {
        /* do something with arg */
    
        // Since `Struct<T>` is an input type, we assume that `WF(Struct<T>)` hence `WF(T: Baz)`
        // hence `WF(T: Bar)` hence `Box<T>: Eq`
        only_eq::<Box<T>>()
    }

    Trait impls

    Given a trait impl:

    impl<r..., X1, ..., Xn> Trait<r'..., T1, ..., Tn> for T0 where WhereClause1, ..., WhereClausek {
        // body of the impl inside here
    
        type Assoc = AssocTyValue;
    
        /* ... */
    }

    We rely on the following assumptions inside the body of the impl:

    • Expanded({ WhereClause1, ..., WhereClausek })
    • WF(T) for all T ∈ Union(InputTypes(T0), InputTypes(T1), ..., InputTypes(Tn))
    • WF(Xi) for all i

    Based on these assumptions, the impl declaration has to prove WF(T0: Trait<r'..., T1, ..., Tn>) and WF(T) for all T ∈ InputTypes(AssocTyValue). Note that associated fns can be seen as (higher-kinded) associated types, but since fn pointers are always well-formed and do not carry input types other than themselves, this is fine.

    Associated fns make their normal assumptions + the set of assumptions made by the impl. Things to prove inside associated fns do not differ from normal fns.

    Note that when projecting out of a type, one must automatically prove that the trait reference holds because of the WfProjection rule.

    Examples:

    struct Set<K: Hash> { ... }
    
    trait Foo where Self: Clone {
        fn foo();
    }
    
    fn only_hash<T: Hash>() { }
    
    impl<K: Clone> Foo for Set<K> {
        // Inside here: we assume `WF(Set<K>)`, `K: Clone`, `WF(K: Clone)`, `WF(K)`.
        // Also, we must prove `WF(Set<K>: Foo)`.
    
        fn foo() {
            only_hash::<K>()
        }
    }
    struct Set<K: Hash> { ... }
    
    trait Foo {
        type Item;
    }
    
    // We need an explicit `K: Hash` bound in order to prove that the associated type value `Set<K>` is WF.
    impl<K: Hash> Foo for K {
        type Item = Set<K>;
    }
    trait Foo {
        type Item;
    }
    
    impl<T> Foo for T where T: Clone {
        type Item = f32;
    }
    
    fn foo<T: Foo>(arg: T) {
    // We must prove `WF(<T as Foo>::Item)` hence prove that `T: Foo`: ok, this is in our assumptions.
    let a: <T as Foo>::Item;
    }
    
    fn bar<T: Clone>(arg: T) {
        // We must prove `WF(<T as Foo>::Item)` hence prove that `T: Foo`: ok, use the impl.
    let a: <T as Foo>::Item;
    }

    Inherent impls

    Given an inherent impl:

    impl<r..., X1, ..., Xn> SelfTy where WhereClause1, ..., WhereClausek {
        /* body of the impl inside here */
    }

    We rely on the following assumptions inside the body of the impl:

    • Expanded({ WhereClause1, ..., WhereClausek })
    • WF(T) for all T ∈ InputTypes(SelfTy)
    • WF(Xi) for all i

    Methods make their normal assumptions + the set of assumptions made by the impl. Things to prove inside methods do not differ from normal fns.

    A caller of a method has to prove that the where clauses defined on the impl hold, in addition to the requirements for calling general fns.
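
For instance, here is a minimal sketch (names invented for illustration) of how an impl’s where clause turns into an obligation at the call site:

struct Wrapper<T> { value: T }

impl<T> Wrapper<T> where T: Clone {
    // Inside here we assume `T: Clone`, `WF(T: Clone)` and `WF(T)`.
    fn duplicate(&self) -> T {
        self.value.clone()
    }
}

fn caller(w: &Wrapper<String>) {
    // Besides the usual obligations, we must prove the impl's where
    // clause after substitution, i.e. `String: Clone`.
    let _copy = w.duplicate();
}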

    Proving well-formedness for input types

You may have noticed that we only prove well-formedness for input types lazily (e.g., inside function bodies). This means that if we have a function:

    struct Set<K: Hash> { ... }
    struct NotHash;
    
    fn foo(arg: Set<NotHash>) { ... }

    then no error will be caught until someone actually tries to call foo. Same thing for an impl:

    impl Set<NotHash> { ... }

    the error will not be caught until someone actually uses Set<NotHash>.

    The idea is, when encountering an fn/trait impl/inherent impl, retrieve all input types that appear in the signature / header and for each input type T, do the following: retrieve type variables X1, ..., Xn bound by the declaration and ask for ∃X1, ..., ∃Xn; WF(T) in an empty environment (in Chalk terms). If there is no possible substitution for the existentials, output a warning.

    Example:

    struct Set<K: Hash> { ... }
    
    // `NotHash` is local to this crate, so we know that there exists no `T`
    // such that `NotHash<T>: Hash`.
    struct NotHash<T> { ... }
    
    // Warning: `foo` cannot be called whatever the value of `T`
    fn foo<T>(arg: Set<NotHash<T>>) { ... }

    Cycle detection

    In Chalk this design often leads to cycles in the proof tree. Example:

    trait Foo { }
    // `WF(Self: Foo)` holds.
     
    impl Foo for u8 { }
    
    // Expanded to `trait Bar where Self: Foo, WF(Self: Foo)`
    trait Bar where Self: Foo { }
    
    // WF rule:
    // `WF(Self: Bar)` holds if `Self: Foo`, `WF(Self: Foo)`.
    
    // Reverse WF rules:
    // `Self: Foo` holds if `WF(Self: Bar)`
    // `WF(Self: Foo)` holds if `WF(Self: Bar)`

    Now suppose we are asking whether u8: Foo holds. The following branch exists in the proof tree: u8: Foo holds if WF(u8: Bar) holds if u8: Foo holds.

    I think rustc would have the right behavior currently: just dismiss this branch since it only leads to the tautological rule (u8: Foo) if (u8: Foo).

    In Chalk we have a more sophisticated cycle detection strategy based on tabling, which basically enables us to correctly answer “multiple solutions”, instead of “unique solution” if a simple error-on-cycle strategy were used. Would rustc need such a thing?

    Drawbacks

    • Implied bounds on types can feel like “implicit bounds” (although they are not: the types appear in the signature of a function / impl header, so it’s self-documenting).
• Removing a bound from a struct becomes a breaking change (note: this can already be the case for functions and traits); see the sketch below.
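
As a sketch of that second drawback (hypothetical names, and assuming the implied bounds proposed here), removing the K: Hash bound from a struct can break downstream code that never mentioned Hash in its own signature:

use std::hash::Hash;

fn hash_one<K: Hash>(_k: &K) {}

struct Set<K: Hash> { key: K }

// Under this RFC, `use_set` may rely on the implied `K: Hash`
// coming from its `Set<K>` parameter:
fn use_set<K>(set: &Set<K>) {
    hash_one(&set.key); // OK only while `Set` keeps its `K: Hash` bound
}
// Removing `K: Hash` from `Set` would break `use_set`, even though
// `use_set`'s own signature never changed.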

    Rationale and Alternatives

    Including parameters in well-formedness rules

    Specific to this design: instead of disregarding parameters in well-formedness checks, we could have included them, and added reverse rules of the form: “WF(T) holds if WF(Struct<T>) holds”. From a theoretical point of view, this would have had the same effects as the current design, and would have avoided the whole InputTypes thing. However, implementation in Chalk revealed some tricky issues. Writing in Chalk-style, suppose we have rules like:

    WF(Struct<T>) :- WF(T)
    WF(T) :- WF(Struct<T>)
    

then trying to prove WF(i32) gives rise to an infinite branch WF(i32) :- WF(Struct<i32>) :- WF(Struct<Struct<i32>>) :- ... in the proof tree, which is hard to dismiss (at least, that’s what we believe).

    Trait aliases

Trait aliases (RFC 1733) offer a way to factor out repeated constraints; they are especially useful for bounds on types, but they do not overcome the limitations of implied bounds on traits (the where Bar: Into<Self> example is a good one).
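
For reference, a sketch of such aliasing (using the unstable RFC 1733 syntax, assuming it is implemented as proposed):

#![feature(trait_alias)]

use std::hash::Hash;

// One alias factors out a repeated set of bounds on types...
trait HashableKey = Hash + Eq + Clone;

fn insert_all<K: HashableKey>(_keys: Vec<K>) { /* ... */ }

// ...but no alias can express an implied bound that originates in a
// trait declaration such as `trait Foo where Bar: Into<Self>`.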

    Limiting the scope of implied bounds

    These essentially try to address the breaking change when removing a bound on a type:

    • do not derive implied bounds for types
    • limit the use of implied bounds for types that are in your current crate only
• derive implied bounds in impl bodies only
    • two distinct feature-gates, one for implied bounds on traits and another one for types

    Unresolved questions

    • Should we try to limit the range of implied bounds to be crate-local (or module-local, etc)?
• @nikomatsakis pointed out here that implied bounds can interact badly with current inference rules.

    Summary

    Enable accurate caller location reporting during panic in {Option, Result}::{unwrap, expect} with the following changes:

    1. Support the #[track_caller] function attribute, which guarantees a function has access to the caller information.
    2. Add an intrinsic function caller_location() (safe wrapper: Location::caller()) to retrieve the caller’s source location.

    Example:

    #![feature(track_caller)]
    use std::panic::Location;
    
    #[track_caller]
    fn unwrap(self) -> T {
        panic!("{}: oh no", Location::caller());
    }
    
    let n: Option<u32> = None;
    let m = n.unwrap();

    Motivation

    It is well-known that the error message reported by unwrap() is useless:

    thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', /checkout/src/libcore/option.rs:335
    note: Run with `RUST_BACKTRACE=1` for a backtrace.
    

    There have been numerous discussions (a, b, c) that want unwrap() and friends to provide better information to locate the panic. RFC 1669 attempted to address this by introducing the unwrap!(x) macro to the standard library, but it was closed since the x.unwrap() convention is too entrenched.

This RFC introduces line numbers into unwrap() without requiring users to adopt a new idiom, i.e. the user should be able to see the precise location without changing any source code.

    Guide-level explanation

    Let’s reimplement unwrap()

    unwrap() and expect() are two methods on Option and Result that are commonly used when you are absolutely sure they contain a successful value and you want to extract it.

    // 1.rs
    use std::env::args;
    fn main() {
        println!("args[1] = {}", args().nth(1).unwrap());
        println!("args[2] = {}", args().nth(2).unwrap());
        println!("args[3] = {}", args().nth(3).unwrap());
    }

If the assumption is wrong, they will panic and report that an unexpected error happened.

    $ ./1
    thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', 1.rs:4:29
    note: Run with `RUST_BACKTRACE=1` for a backtrace.
    
    $ ./1 arg1
    args[1] = arg1
    thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', 1.rs:5:29
    note: Run with `RUST_BACKTRACE=1` for a backtrace.
    
    $ ./1 arg1 arg2
    args[1] = arg1
    args[2] = arg2
    thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', 1.rs:6:29
    note: Run with `RUST_BACKTRACE=1` for a backtrace.
    
    $ ./1 arg1 arg2 arg3
    args[1] = arg1
    args[2] = arg2
    args[3] = arg3
    

    Let’s say you are unhappy with these built-in functions, e.g. you want to provide an alternative error message:

    // 2.rs
    use std::env::args;
    pub fn my_unwrap<T>(input: Option<T>) -> T {
        match input {
            Some(t) => t,
            None => panic!("nothing to see here, move along"),
        }
    }
    fn main() {
        println!("args[1] = {}", my_unwrap(args().nth(1)));
        println!("args[2] = {}", my_unwrap(args().nth(2)));
        println!("args[3] = {}", my_unwrap(args().nth(3)));
    }

    This trivial implementation, however, will only report the panic that happens inside my_unwrap. This is pretty useless since it is the caller of my_unwrap that made the wrong assumption!

    $ ./2
    thread 'main' panicked at 'nothing to see here, move along', 2.rs:5:16
    note: Run with `RUST_BACKTRACE=1` for a backtrace.
    
    $ ./2 arg1
    args[1] = arg1
    thread 'main' panicked at 'nothing to see here, move along', 2.rs:5:16
    note: Run with `RUST_BACKTRACE=1` for a backtrace.
    
    $ ./2 arg1 arg2
    args[1] = arg1
    args[2] = arg2
    thread 'main' panicked at 'nothing to see here, move along', 2.rs:5:16
    note: Run with `RUST_BACKTRACE=1` for a backtrace.
    
    $ ./2 arg1 arg2 arg3
    args[1] = arg1
    args[2] = arg2
    args[3] = arg3
    

    The trivial solution would require the user to provide file!(), line!() and column!(). A slightly more ergonomic solution would be changing my_unwrap into a macro, allowing these constants to be automatically provided.

    pub fn my_unwrap_at_source_location<T>(input: Option<T>, file: &str, line: u32, column: u32) -> T {
        match input {
            Some(t) => t,
            None => panic!("nothing to see at {}:{}:{}, move along", file, line, column),
        }
    }
    
    macro_rules! my_unwrap {
        ($input:expr) => {
            my_unwrap_at_source_location($input, file!(), line!(), column!())
        }
    }
    println!("args[1] = {}", my_unwrap!(args().nth(1)));
    //                                ^ tell user to add an `!`.
    ...

    What if you have already published the my_unwrap crate that has thousands of users, and you want to maintain API stability? Before Rust 1.XX, the builtin unwrap() had the same problem!

    Track the caller

The reason the my_unwrap! macro works is that it copies and pastes the entire body of the macro definition every time it is used.

    println!("args[1] = {}", my_unwrap!(args().nth(1)));
    println!("args[2] = {}", my_unwrap!(args().nth(2)));
    ...
    
    // is equivalent to:
    
    println!("args[1] = {}", my_unwrap(args().nth(1), file!(), line!(), column!()));
    println!("args[1] = {}", my_unwrap(args().nth(2), file!(), line!(), column!()));
    ...

    What if we could instruct the compiler to automatically fill in the file, line, and column? Rust 1.YY introduced the #[track_caller] attribute for exactly this reason:

    // 3.rs
    #![feature(track_caller)]
    use std::env::args;
    #[track_caller]  // <-- Just add this!
    pub fn my_unwrap<T>(input: Option<T>) -> T {
        match input {
            Some(t) => t,
            None => panic!("nothing to see here, move along"),
        }
    }
    fn main() {
        println!("args[1] = {}", my_unwrap(args().nth(1)));
        println!("args[2] = {}", my_unwrap(args().nth(2)));
        println!("args[3] = {}", my_unwrap(args().nth(3)));
    }

    Now we have truly reproduced how the built-in unwrap() is implemented.

    $ ./3
    thread 'main' panicked at 'nothing to see here, move along', 3.rs:12:29
    note: Run with `RUST_BACKTRACE=1` for a backtrace.
    
    $ ./3 arg1
    args[1] = arg1
    thread 'main' panicked at 'nothing to see here, move along', 3.rs:13:29
    note: Run with `RUST_BACKTRACE=1` for a backtrace.
    
    $ ./3 arg1 arg2
    args[1] = arg1
    args[2] = arg2
    thread 'main' panicked at 'nothing to see here, move along', 3.rs:14:29
    note: Run with `RUST_BACKTRACE=1` for a backtrace.
    
    $ ./3 arg1 arg2 arg3
    args[1] = arg1
    args[2] = arg2
    args[3] = arg3
    

    #[track_caller] is an automated version of what you’ve seen in the last section. The attribute copies my_unwrap to a new function my_unwrap_at_source_location which accepts the caller’s location as an additional argument. The attribute also instructs the compiler to replace my_unwrap(x) with my_unwrap_at_source_location(x, file!(), line!(), column!()) (sort of) whenever it sees it. This allows us to maintain the stability guarantee while allowing the user to get the new behavior with just one recompile.

    Location type

    Let’s enhance my_unwrap to also log a message to the log file before panicking. We would need to get the caller’s location as a value. This is supported using the method Location::caller():

    use std::panic::Location;
    #[track_caller]
    pub fn my_unwrap<T>(input: Option<T>) -> T {
        match input {
            Some(t) => t,
            None => {
                let location = Location::caller();
                println!("unwrapping a None from {}:{}", location.file(), location.line());
                panic!("nothing to see here, move along")
            }
        }
    }

    Propagation of tracker

    When your #[track_caller] function calls another #[track_caller] function, the caller location will be propagated downwards:

    use std::panic::Location;
    #[track_caller]
    pub fn my_get_index<T>(input: &[T], index: usize) -> &T {
        my_unwrap(input.get(index))        // line 4
    }
my_get_index(&[1, 2, 3], 10);    // line 6

When you run this, the panic will refer to line 6, the original caller, instead of line 4 where my_get_index calls my_unwrap. When a library function is marked #[track_caller], the expectation is that the function is short and free of logic errors, which allows us to always attribute a failure to the caller.

If a panic that refers to the local location is actually needed, you can work around this by wrapping the code in a closure, which cannot track the caller:

    #[track_caller]
    pub fn my_get_index<T>(input: &[T], index: usize) -> &T {
        (|| {
            my_unwrap(input.get(index))
        })()
    }

    Why do we use implicit caller location

    If you are learning Rust alongside other languages, you may wonder why Rust obtains the caller information in such a strange way. There are two restrictions that force us to adopt this solution:

    1. Programmatic access to the stack backtrace is often used in interpreted or runtime-heavy languages like Python and Java. However, the stack backtrace is not suitable as the only solution for systems languages like Rust because optimization often collapses multiple levels of function calls. In some embedded systems, the backtrace may even be unavailable!

    2. Solutions that use default function arguments alongside normal arguments are often used in languages that do not perform inference higher than statement level, e.g. Swift and C#. Rust does not (yet) support default function arguments or function overloading because they interfere with type inference, so such solutions are ruled out.

    Reference-level explanation

    Survey of panicking standard functions

    Many standard functions may panic. These are divided into three categories depending on whether they should receive caller information despite the inlining cost associated with it.

    The list of functions is not exhaustive. Only those with a “Panics” section in the documentation are included.

1. Must have. These functions are designed to generate a panic, or are used so often that a panic pointing inside them gives no useful information.

  Function               Panic condition
  Option::expect         self is None
  Option::unwrap         self is None
  Result::expect_err     self is Ok
  Result::expect         self is Err
  Result::unwrap_err     self is Ok
  Result::unwrap         self is Err
  [T]::index_mut         range out of bounds
  [T]::index             range out of bounds
  BTreeMap::index        key not found
  HashMap::index         key not found
  str::index_mut         range out of bounds or off char boundary
  str::index             range out of bounds or off char boundary
  VecDeque::index_mut    index out of bounds
  VecDeque::index        index out of bounds
2. Nice to have. These functions are not commonly used, or the panicking condition is pretty rare. Often the panic message alone contains enough information to fix the error without a backtrace. Inlining them would bloat the binary size without much benefit.

  List of category 2 functions
  Function                      Panic condition
  std::env::args                non UTF-8 values
  std::env::set_var             invalid key or value
  std::env::vars                non UTF-8 values
  std::thread::spawn            OS failed to create the thread
  [T]::clone_from_slice         slice lengths differ
  [T]::copy_from_slice          slice lengths differ
  [T]::rotate                   index out of bounds
  [T]::split_at_mut             index out of bounds
  [T]::swap                     index out of bounds
  BinaryHeap::reserve_exact     capacity overflow
  BinaryHeap::reserve           capacity overflow
  Duration::new                 arithmetic overflow
  HashMap::reserve              capacity overflow
  HashSet::reserve              capacity overflow
  i32::overflowing_div          zero divisor
  i32::overflowing_rem          zero divisor
  i32::wrapping_div             zero divisor
  i32::wrapping_rem             zero divisor
  Instant::duration_since       time travel
  Instant::elapsed              time travel
  Iterator::count               extremely long iterator
  Iterator::enumerate           extremely long iterator
  Iterator::position            extremely long iterator
  Iterator::product             arithmetic overflow in debug build
  Iterator::sum                 arithmetic overflow in debug build
  LinkedList::split_off         index out of bounds
  LocalKey::with                TLS has been destroyed
  RawVec::double_in_place       capacity overflow
  RawVec::double                capacity overflow
  RawVec::reserve_exact         capacity overflow
  RawVec::reserve_in_place      capacity overflow
  RawVec::reserve               capacity overflow
  RawVec::shrink_to_fit         given amount is larger than current capacity
  RawVec::with_capacity         capacity overflow
  RefCell::borrow_mut           a borrow or mutable borrow is active
  RefCell::borrow               a mutable borrow is active
  str::split_at_mut             range out of bounds or off char boundary
  str::split_at                 range out of bounds or off char boundary
  String::drain                 range out of bounds or off char boundary
  String::insert_str            index out of bounds or off char boundary
  String::insert                index out of bounds or off char boundary
  String::remove                index out of bounds or off char boundary
  String::reserve_exact         capacity overflow
  String::reserve               capacity overflow
  String::splice                range out of bounds or off char boundary
  String::split_off             index out of bounds or off char boundary
  String::truncate              off char boundary
  Vec::append                   capacity overflow
  Vec::drain                    range out of bounds
  Vec::insert                   index out of bounds
  Vec::push                     capacity overflow
  Vec::remove                   index out of bounds
  Vec::reserve_exact            capacity overflow
  Vec::reserve                  capacity overflow
  Vec::splice                   range out of bounds
  Vec::split_off                index out of bounds
  Vec::swap_remove              index out of bounds
  VecDeque::append              capacity overflow
  VecDeque::drain               range out of bounds
  VecDeque::insert              index out of bounds
  VecDeque::reserve_exact       capacity overflow
  VecDeque::reserve             capacity overflow
  VecDeque::split_off           index out of bounds
  VecDeque::swap                index out of bounds
  VecDeque::with_capacity       capacity overflow
3. Not needed. Panics from these indicate a plain programmer error, and the panic message itself contains enough information to figure out where the error comes from.

  List of category 3 functions
  Function                              Panic condition
  std::atomic::fence                    using invalid atomic ordering
  std::char::from_digit                 radix is outside 2 ..= 36
  std::env::remove_var                  invalid key
  std::format!                          the fmt method returns Err
  std::panicking::set_hook              called in panicking thread
  std::panicking::take_hook             called in panicking thread
  [T]::chunks_mut                       chunk size == 0
  [T]::chunks                           chunk size == 0
  [T]::windows                          window size == 0
  AtomicUsize::compare_exchange_weak    using invalid atomic ordering
  AtomicUsize::compare_exchange         using invalid atomic ordering
  AtomicUsize::load                     using invalid atomic ordering
  AtomicUsize::store                    using invalid atomic ordering
  BorrowRef::clone                      borrow counter overflows, see issue 33880
  BTreeMap::range_mut                   end of range before start of range
  BTreeMap::range                       end of range before start of range
  char::encode_utf16                    dst buffer smaller than [u16; 2]
  char::encode_utf8                     dst buffer smaller than [u8; 4]
  char::is_digit                        radix is outside 2 ..= 36
  char::to_digit                        radix is outside 2 ..= 36
  compiler_fence                        using invalid atomic ordering
  Condvar::wait                         waiting on multiple different mutexes
  Display::to_string                    the fmt method returns Err
  ExactSizeIterator::len                size_hint implemented incorrectly
  i32::from_str_radix                   radix is outside 2 ..= 36
  Iterator::step_by                     step == 0

This RFC only advocates adding the #[track_caller] attribute to the unwrap and expect functions. The index and index_mut functions should also have it if possible, but this is currently postponed because it has not yet been investigated how to insert the transformation after monomorphization.
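
As a simplified sketch (not the exact libcore source), the advocated change amounts to annotating the methods like this:

impl<T> Option<T> {
    #[track_caller]
    pub fn unwrap(self) -> T {
        match self {
            Some(val) => val,
            // With the attribute, this panic reports the caller's location.
            None => panic!("called `Option::unwrap()` on a `None` value"),
        }
    }
}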

    Procedural attribute macro

    The #[track_caller] attribute will modify a function at the AST and MIR levels without touching the type-checking (HIR level) or the low-level LLVM passes.

    It will first wrap the body of the function in a closure, and then call it:

    #[track_caller]
    fn foo<C>(x: A, y: B, z: C) -> R {
        bar(x, y)
    }
    
    // will become:
    
    #[rustc_implicit_caller_location]
    #[inline]
    fn foo<C>(x: A, y: B, z: C) -> R {
        std::ops::FnOnce::call_once(move |__location| {
            bar(x, y)
        }, (unsafe { std::intrinsics::caller_location() },))
    }

    This is to split the function into two: the function foo itself, and the closure foo::{{closure}} in it. (Technically: it is the simplest way to create two DefIds at the HIR level as far as I know.)

    The function signature of foo remains unchanged, so typechecking can proceed normally. The attribute will be replaced by #[rustc_implicit_caller_location] to let the compiler internals continue to treat it specially. #[inline] is added so external crates can see through foo to find foo::{{closure}}.

    The closure foo::{{closure}} is a proper function so that the compiler can write calls directly to foo::{{closure}}, skipping foo. Multiple calls to foo from different locations can be done via calling foo::{{closure}} directly, instead of copying the function body every time which would bloat the binary size.

    The intrinsic caller_location() is a placeholder which will be replaced by the actual caller location when one calls foo::{{closure}} directly.

Currently foo::{{closure}} cannot inherit attributes defined on the main function. To prevent ABI problems, using #[naked] or an explicit extern "ABI" together with #[rustc_implicit_caller_location] should raise an error.
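
For example (the exact error wording is illustrative):

#[track_caller]
extern "C" fn bad() { }
//~^ ERROR: `#[track_caller]` is not supported on functions with a non-Rust ABI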

    Redirection (MIR inlining)

    After all type-checking and validation is done, we can now inject the caller location. This is done by redirecting all calls to foo to foo::{{closure}}.

    _r = call foo(_1, _2, _3) -> 'bb1;
    
    // will become:
    
    _c = call std::intrinsics::caller_location() -> 'bbt;
    'bbt:
    _r = call foo::{{closure}} (&[closure: x: _1, y: _2], _c) -> 'bb1;

    We will further replace the caller_location() intrinsic according to where foo is called. If it is called from an ordinary function, it would be replaced by the callsite’s location:

    // for ordinary functions,
    
    _c = call std::intrinsics::caller_location() -> 'bbt;
    
    // will become:
    
    _c = Location { file: file!(), line: line!(), column: column!() };
    goto -> 'bbt;

If it is called from an #[rustc_implicit_caller_location]’s closure, e.g. foo::{{closure}}, the intrinsic will be replaced by the closure argument __location instead, so that the caller location can propagate directly:

    // for #[rustc_implicit_caller_location] closures,
    
    _c = call std::intrinsics::caller_location() -> 'bbt;
    
    // will become:
    
    _c = __location;
    goto -> 'bbt;

    These steps are very similar to inlining, and thus the first proof-of-concept is implemented directly as a variant of the MIR inliner (but a separate pass). This also means the redirection pass currently suffers from all disadvantages of the MIR inliner, namely:

    • Locations will not be propagated into diverging functions (fn() -> !), since inlining them is not supported yet.

    • MIR passes are run before monomorphization, meaning #[track_caller] currently cannot be used on trait items:

    trait Trait {
        fn unwrap(&self);
    }
    impl Trait for u64 {
        #[track_caller] //~ ERROR: `#[track_caller]` is not supported for trait items yet.
        fn unwrap(&self) {}
    }

To support trait items, the redirection pass must be run as a post-monomorphization MIR pass (which does not exist yet), be converted to queries provided after resolution, or become a custom LLVM inlining pass that can extract the caller’s source location. This is what currently prevents the Index trait from having #[track_caller].

We cannot hack impl resolution into a pre-monomorphization MIR pass because of deeply nested functions like:

    f1::<u32>();
    
    fn f1<T: Trait>() { f2::<T>(); }
    fn f2<T: Trait>() { f3::<T>(); }
    fn f3<T: Trait>() { f4::<T>(); }
    ...
    fn f100<T: Trait>() {
    T::unwrap(); // No one will know T is u32 before monomorphization.
    }

    Currently the redirection pass always runs before the inlining pass. If the redirection pass is run after the normal MIR inlining pass, the normal MIR inliner must treat #[rustc_implicit_caller_location] as #[inline(never)].

    The closure foo::{{closure}} must never be inlined before the redirection pass.

    When #[rustc_implicit_caller_location] functions are called dynamically, no inlining will occur, and thus it cannot take the location of the caller. Currently this will report where the function is declared. Taking the address of such functions must be allowed due to backward compatibility. (If a post-monomorphized MIR pass exists, methods via trait objects would be another case of calling #[rustc_implicit_caller_location] functions without caller location.)

    let f: fn(Option<u32>) -> u32 = Option::unwrap;
    let g: fn(Option<u32>) -> u32 = Option::unwrap;
    assert!(f == g); // This must remain `true`.
    f(None);
    g(None); // The effect of these two calls must be the same.

    Standard libraries

    The caller_location() intrinsic returns the Location structure which encodes the file, line and column of the callsite. This shares the same structure as the existing type std::panic::Location. Therefore, the type is promoted to a lang-item, and moved into core::panicking::Location. It is re-exported from libstd.

    Thanks to how #[track_caller] is implemented, we could provide a safe wrapper around the caller_location() intrinsic:

    impl<'a> Location<'a> {
        #[track_caller]
        pub fn caller() -> Location<'static> {
            unsafe {
                ::intrinsics::caller_location()
            }
        }
    }

    The panic! macro is modified to use Location::caller() (or the intrinsic directly) so it can report the caller location inside #[track_caller].

    macro_rules! panic {
        ($msg:expr) => {
            let loc = $crate::panicking::Location::caller();
            $crate::panicking::panic(&($msg, loc.file(), loc.line(), loc.column()))
        };
        ...
    }

It would actually be more natural for core::panicking::panic_fmt to take a Location directly instead of a tuple, so changing its signature is worth considering, but that is out of scope for this RFC.

panic! is often used outside of #[track_caller] functions. In those cases, the caller_location() intrinsic will pass unchanged through all MIR passes into trans. As a fallback, the intrinsic will expand to Location { file: file!(), line: line!(), column: column!() } during trans.

    “My fault” vs “Your fault”

In a #[track_caller] function, we expect all panics to be attributed to the caller (thus the attribute name). However, sometimes the code panics not because of the caller, but because of the implementation itself. It may be important to distinguish between “my fault” (implementation error) and “your fault” (caller violating an API requirement). As an example,

    use std::collections::HashMap;
    use std::hash::Hash;
    
    fn count_slices<T: Hash + Eq>(array: &[T], window: usize) -> HashMap<&[T], usize> {
        if !(0 < window && window <= array.len()) {
            panic!("invalid window size");
            // ^ triggering this panic is "your fault"
        }
        let mut result = HashMap::new();
        for w in array.windows(window) {
            if let Some(r) = result.get_mut(w) {
                *r += 1;
            } else {
                panic!("why??");
                // ^ triggering this panic is "my fault"
                //   (yes this code is wrong and entry API should be used)
            }
        }
    
        result
    }

One simple solution is to separate the “my fault” and “your fault” panics into two macros, but since declarative macros 1.0 are insta-stable, this RFC prefers to postpone introducing any new public macros until “Macros 2.0” lands, where stability and scoping are better handled.

    For comparison, the Swift language does distinguish between the two kinds of panics semantically. The “your fault” ones are called precondition, while the “my fault” ones are called assert, though they don’t deal with caller location, and practically they are equivalent to Rust’s assert! and debug_assert!. Nevertheless, this also suggests we can still separate existing panicking macros into the “my fault” and “your fault” camps accordingly:

    • Definitely “my fault” (use actual location): debug_assert! and friends, unreachable!, unimplemented!
    • Probably “your fault” (propagate caller location): assert! and friends, panic!

    The question is, should calling unwrap(), expect() and x[y] (index()) be “my fault” or “your fault”? Let’s consider existing implementation of index() methods:

    // Vec::index
    fn index(&self, index: usize) -> &T {
        &(**self)[index]
    }
    
    // BTreeMap::index
    fn index(&self, key: &Q) -> &V {
        self.get(key).expect("no entry found for key")
    }
    
    // Wtf8::index
    fn index(&self, range: ops::RangeFrom<usize>) -> &Wtf8 {
        // is_code_point_boundary checks that the index is in [0, .len()]
        if is_code_point_boundary(self, range.start) {
            unsafe { slice_unchecked(self, range.start, self.len()) }
        } else {
            slice_error_fail(self, range.start, self.len())
        }
    }

If they all get #[track_caller], then x[y], expect() and slice_error_fail() should all report “your fault”, i.e. the caller location should be propagated downstream. This suggests that propagating the caller location by default is the more common need. It also means a “my fault” panic introduced during development may become harder to spot. That can be addressed with RUST_BACKTRACE=1, or worked around by splitting the function in two:

    use std::collections::HashMap;
    use std::hash::Hash;
    
    #[track_caller]
    fn count_slices<T: Hash + Eq>(array: &[T], window: usize) -> HashMap<&[T], usize> {
        if !(0 < window && window <= array.len()) {
            panic!("invalid window size");  // <-- your fault
        }
        (|| {
            let mut result = HashMap::new();
            for w in array.windows(window) {
                if let Some(r) = result.get_mut(w) {
                    *r += 1;
                } else {
                    panic!("why??"); // <-- my fault (caller propagation can't go into closures)
                }
            }
            result
        })()
    }

In any case, treating everything as “your fault” encourages keeping #[track_caller] functions short, which is in line with the “must have” list above. This RFC therefore keeps advocating implicit propagation of the caller location.

    Location detail control

    An unstable flag -Z location-detail is added to rustc to control how much factual detail will be emitted when using caller_location(). The user can toggle file, line and column separately, e.g. when compiling with:

    rustc -Zlocation-detail=line
    

only the line number will be real. The file and column will always be dummy values, like:

    thread 'main' panicked at 'error message', <redacted>:192:0
    

    Drawbacks

    Code bloat

Previously, all calls to unwrap() and expect() referred to the same location, so the panicking branch only needed to reuse a pointer to a single global tuple.

    After this RFC is implemented, the panicking branch will need to allocate space to store the varying caller location, so the number of instructions per unwrap()/expect() will increase.

The optimizer will lose the opportunity to consolidate all jumps to the panicking branch. Before this RFC, LLVM would optimize a.unwrap() + b.unwrap() to something like

    if (a.tag != SOME || b.tag != SOME) {
        panic(&("called `Option::unwrap()` on a `None` value", "src/libcore/option.rs", 335, 20));
    }
    a.value_of_some + b.value_of_some

    After this RFC, LLVM can only lower this to

    if (a.tag != SOME) {
        panic(&("called `Option::unwrap()` on a `None` value", "1.rs", 1, 1));
    }
    if (b.tag != SOME) {
        panic(&("called `Option::unwrap()` on a `None` value", "1.rs", 1, 14));
    }
    a.value_of_some + b.value_of_some

    One can use -Z location-detail to get the old optimization behavior.

    Narrow solution scope

    #[track_caller] is only useful in solving the “get caller location” problem. Introducing an entirely new feature just for this problem seems wasteful.

    Default function arguments is another possible solution for this problem but with much wider application.

    Confusing scoping rule

    Consts, statics and closures are separate MIR items, meaning the following marked places will not get caller locations:

    #[track_caller]
    fn foo() {
        static S: Location = Location::caller(); // will get actual location instead
        let f = || Location::caller();   // will get actual location instead
        Location::caller(); // this one will get caller location
    }

This is confusing, but the alternative of not supporting this would require two panic! macros, which is not a better solution.

    Clippy could provide a lint against using Location::caller() outside of #[track_caller].

    Rationale and alternatives

    Rationale

    This RFC tries to abide by the following restrictions:

    1. Precise caller location. Standard library functions which commonly panic will report the source location as where the user called them. The source location should never point inside the standard library. Examples of these functions include Option::unwrap and HashMap::index.

    2. Source compatibility. Users should never need to modify existing source code to benefit from the improved precision.

    3. Debug-info independence. The precise caller location can still be reported even after stripping of debug information, which is very common on released software.

4. Interface independence. The implementation of a trait should be able to decide whether to accept the caller information; it shouldn’t require the trait itself to enforce it. It should not affect the signature of the function. This is an extension of rule 2, since the Index trait is involved in HashMap::index. The stability of Index must be upheld, e.g. it should remain object-safe, and existing implementations should not be forced to accept the caller location.

    Restriction 4 “interface independence” is currently not implemented due to lack of post-monomorphized MIR pass, but implementing #[track_caller] as a language feature follows this restriction.

    Alternatives

    🚲 Name of everything 🚲

    • Is #[track_caller] an accurate description?
    • Should we move std::panic::Location into core, or just use a 3-tuple to represent the location? Note that the former is advocated in RFC 2070.
    • Is Location::caller() properly named?

    Using an ABI instead of an attribute

    pub extern "implicit-caller-location" fn my_unwrap() {
        panic!("oh no");
    }

    Compared with attributes, an ABI is a more natural way to tell the post-typechecking steps about implicit parameters, pioneered by the extern "rust-call" ABI. However, creating a new ABI will change the type of the function as well, causing the following statement to fail:

    let f: fn(Option<u32>) -> u32 = Option::unwrap;
    //~^ ERROR: [E0308]: mismatched types

Making this pass would require support for implicitly coercing an extern "implicit-caller-location" fn pointer to a normal function pointer. Also, an ABI alone is not powerful enough to implicitly insert a parameter, making it less attractive than an attribute.

    Repurposing file!(), line!(), column!()

We could change the meaning of file!(), line!() and column!() so that they are only converted to real constants after redirection (a MIR or trans pass) instead of early during macro expansion (an AST pass). Inside #[track_caller] functions, these macros would behave as this RFC’s caller_location(). The drawback is that these macros would then have different values at compile time (e.g. inside include!(file!())) and at runtime.
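
A sketch of that divergence (illustrative; under this alternative, not current Rust semantics):

#[track_caller]
fn report() -> &'static str {
    // Under this alternative, `file!()` here would be resolved late
    // (during the redirection pass) to the *caller's* file...
    file!()
}

// ...while `include!(file!())` must still expand early, during macro
// expansion, and therefore names the file that physically contains it.
// The same macro would thus produce different values in the two uses.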

    Inline MIR

    Introduced as an alternative to RFC 1669, instead of the caller_location() intrinsic, we could provide a full-fledged inline MIR macro mir! similar to the inline assembler:

    #[track_caller]
    fn unwrap(self) -> T {
        let file: &'static str;
        let line: u32;
        let column: u32;
        unsafe {
            mir! {
                StorageLive(file);
                file = const $CallerFile;
                StorageLive(line);
                line = const $CallerLine;
                StorageLive(column);
                column = const $CallerColumn;
                goto -> 'c;
            }
        }
        'c: {
            panic!("{}:{}:{}: oh no", file, line, column);
        }
    }

The problem with mir! in this context is that it kills a fly with a sledgehammer. mir! is a very generic mechanism which would require stabilizing the MIR syntax and considering its interaction with the surrounding code. Besides, #[track_caller] itself would still exist, and the magic constants $CallerFile etc. would still be magical.

    Default function arguments

    Assume this is solved by implementing RFC issue 323.

    fn unwrap(file: &'static str = file!(), line: u32 = line!(), column: u32 = column!()) -> T {
        panic!("{}:{}:{}: oh no", file, line, column);
    }

Default arguments were a serious contender for the better-caller-location problem, as this is usually how other languages solve it:

  Language             Syntax
  Swift                func unwrap(file: String = #file, line: Int = #line) -> T
  D                    T unwrap(string file = __FILE__, size_t line = __LINE__)
  C# 5+                T Unwrap([CallerFilePath] string file = "<n/a>", [CallerLineNumber] int line = 0)
  Haskell with GHC     unwrap :: (?callstack :: CallStack) => Maybe t -> t
  C++ with GCC 4.8+    T unwrap(const char* file = __builtin_FILE(), int line = __builtin_LINE())

A naive solution would violate restriction 4 “interface independence”: adding the file, line, and column arguments to index() would change its signature. The conflict could be avoided if implementations were allowed to add defaulted arguments beyond the trait’s signature:

    impl<'a, K, Q, V> Index<&'a Q> for BTreeMap<K, V>
    where
        K: Ord + Borrow<Q>,
        Q: Ord + ?Sized,
    {
        type Output = V;
    
        // This should satisfy the trait even if the trait specifies
        // `fn index(&self, idx: Idx) -> &Self::Output`
        #[inline]
        fn index(&self, key: &Q, file: &'static str = file!(), line: u32 = line!(), column: u32 = column!()) -> &V {
            self.get(key).expect("no entry found for key", file, line, column)
        }
    }

    This can be resolved if the future default argument proposal takes this into account. But again, this feature itself is going to be large and controversial.

    Semantic inlining

    Treat #[track_caller] as the same as a very forceful #[inline(always)]. This eliminates the procedural macro pass. This was the approach suggested in the first edition of this RFC, since the target functions (unwrap, expect, index) are just a few lines long. However, it experienced push-back from the community as:

    1. Inlining causes debugging to be difficult.
    2. It does not work with recursive functions.
    3. People do want to apply the attribute to long functions.
4. The expected usage of “semantic inlining” and traditional inlining differ a lot; continuing to call it inlining may confuse beginners.

    Therefore the RFC is changed to the current form, and the inlining pass is now described as just an implementation detail.

    Design-by-contract

This idea arose while investigating the difference between “my fault” and “your fault”. We could incorporate ideas from design-by-contract (DbC) by specifying that “your fault” is a kind of contract violation. Preconditions are listed as part of the function signature, e.g.

    // declaration
    extern {
        #[precondition(fd >= 0, "invalid file descriptor {}", fd)]
        fn close_fd(fd: c_int);
    }
    
    // declaration + definition
    #[precondition(option.is_some(), "Trying to unwrap None")]
    fn unwrap<T>(option: Option<T>) -> T {
        match option {
            Some(t) => t,
            None => unsafe { std::mem::unchecked_unreachable() },
        }
    }

Code that appears in the #[precondition] attribute would be copied to the call site, so that when the precondition is violated, the panic reports the caller’s location.

    Specialization should be treated like subtyping, where preconditions can be weakened:

    trait Foo {
        #[precondition(condition_1)]
        fn foo();
    }
    
    impl<T: Debug> Foo for T {
        #[precondition(condition_2a)]
        #[precondition(condition_2b)]
        default fn foo() { ... }
    }
    
    impl Foo for u32 {
        #[precondition(condition_3)]
        fn foo() { ... }
    }
    
    assert!(condition_3 || (condition_2a && condition_2b) || condition_1);
    // ^ automatically inserted when the following is called...
    <u32 as Foo>::foo();

Before Rust 1.0, there was the hoare compiler plugin, which introduced DbC using similar syntax. However, the conditions were expanded inside the function, so the assertions would not fail with the caller’s location. A proper solution would be similar to what this RFC proposes.

    Non-viable alternatives

    Many alternatives have been proposed before but failed to satisfy the restrictions laid out in the Rationale subsection, thus should not be considered viable alternatives within this RFC, at least at the time being.

    Macros

    The unwrap!() macro introduced in RFC 1669 allows the user to write unwrap!(x) instead of x.unwrap().

    A similar solution is introducing a loc!() macro that expands to concat!(file!(), ":", line!(), ":", column!()), so user writes x.expect(loc!()) instead of x.unwrap().

    There is even the better_unwrap crate that automatically rewrites all unwrap() and expect() inside a module to provide the caller location through a procedural attribute.

    All of these are non-viable since they require the user to actively change their source code, thus violating restriction 2 “source compatibility”, unless we are willing to drop the ! from macros.

    All pre-typeck rewrites are prone to false-positive failures affecting unrelated types that have an unwrap() method. Post-typeck rewrites are no different from this RFC.
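
For example, a syntactic rewrite of every unwrap() call would also hit unrelated inherent methods like this one (names invented):

struct Config { path: String }

impl Config {
    // An unrelated method that merely happens to be called `unwrap`;
    // a pre-typeck rewrite cannot tell it apart from `Option::unwrap`.
    fn unwrap(&self) -> &str {
        &self.path
    }
}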

    Backtrace

    When given debug information (DWARF section/file on Linux, *.pdb file on Windows, *.dSYM folder on macOS), the program is able to obtain the source code location for each address. This solution is often used in runtime-heavy languages like Python, Java and Go.

    For Rust, however:

    • The debug information is usually not provided in release mode.

      In particular, cargo defaults to disabling debug symbols in release mode (this default can certainly be changed). rustc itself is tested in CI and distributed in release mode, so getting a usable location in release mode is a real concern (see also RFC 1417 for why it was disabled in the official distribution in the first place).

      Even if this is generated, the debug symbols are generally not distributed to end-users, which means the error reports will only contain numerical addresses. This can be seen as a benefit, as the implementation detail won’t be exposed, but how to submit/analyze an error report would be out-of-scope for this RFC.

    • There are multiple issues preventing us from relying on debug info nowadays.

Issues 24346 (Backtrace does not include file and line number on non-Linux platforms) and 42295 (Slow backtrace on panic) are still not entirely fixed. Even once debuginfo is properly handled, if we decide not to expose the full stack trace, we may still need to reopen pull request 40264 (Ignore more frames on backtrace unwinding).

      These signal that debuginfo support is not reliable enough if we want to solve the unwrap/expect issue now.

    These drawbacks are the main reason why restriction 3 “debug-info independence” is added to the motivation.

    (A debuginfo-based stack trace proposal can be found at RFC 2154.)

    SourceContext generic parameter

    Introduced as an alternative in RFC 1669, inspired by GHC’s implicit parameter:

    fn unwrap<C: SourceContext = CallerSourceContext>(self) -> T {
        panic!("{}: oh no", C::default());
    }

    The CallerSourceContext lang item will instruct the compiler to create a new type implementing SourceContext whenever unwrap() is instantiated.

    Unfortunately this violates restriction 4 “interface independence”. This solution cannot apply to HashMap::index as this will require a change of the method signature of index() which has been stabilized. Methods applying this solution will also lose object-safety.

    The same drawback exists if we base the solution on RFC 2000 (const generics).

    Unresolved questions

• If we want to support adding #[track_caller] to trait methods, the redirection pass/query/whatever should be placed after monomorphization, not before. Currently the RFC simply prohibits applying #[track_caller] to trait methods as a future-proofing measure.

    • Diverging functions should be supported.

• The closure foo::{{closure}} should inherit most attributes applied to the function foo, in particular #[inline], #[cold], #[naked] and also the ABI. Currently a procedural macro won’t see any of these, nor is there any way to apply these attributes to a closure. Therefore, #[rustc_implicit_caller_location] currently rejects #[naked] and non-Rust ABIs, while #[inline] and #[cold] are left as no-ops. There is no semantic reason why these cannot be used, though.

    Summary

    Remove the need for explicit T: 'x annotations on structs. We will infer their presence based on the fields of the struct. In short, if the struct contains a reference, directly or indirectly, to T with lifetime 'x, then we will infer that T: 'x is a requirement:

    struct Foo<'x, T> {
      // inferred: `T: 'x`
      field: &'x T
    }  

    Explicit annotations remain as an option used to control trait object lifetime defaults, and simply for backwards compatibility.

    Motivation

    Today, when you write generic struct definitions that contain references, those structs require where-clauses of the form T: 'a:

    struct SharedRef<'a, T>
      where T: 'a // <-- currently required
    {
      data: &'a T
    }

    These clauses are called outlives requirements, and the next section (“Background”) goes into a bit more detail on what they mean semantically. The overriding goal of this RFC is to make these where T: 'a annotations unnecessary by inferring them.

    Anecdotally, these annotations are not well understood. Instead, the most common thing is to wait and add the where-clauses when the compiler requests that you do so. This is annoying, of course, but the annotations also clutter up the code, and add to the perception of Rust’s complexity.

    Experienced Rust users may have noticed that the compiler already performs a similar seeming kind of inference in other settings. In particular, in function definitions or impls, outlives requirements are rarely needed. This is due to the mechanism known as implied bounds (also explained in more detail in the next section), which allows a function (resp. impl) to infer outlives requirements based on the types of its parameters (resp. input types):

    fn foo<'a, T>(r: SharedRef<'a, T>) {
      // Gets to assume that `T: 'a` holds, because it is a requirement
      // of the parameter type `SharedRef<'a, T>`.
    }  

    This RFC proposes a mechanism for also inferring the outlives requirements on structs. This is not an extension of the implied bounds system; in general, field types of a struct are not considered “inputs” to the struct definition, and hence implied bounds do not apply. Indeed, the annotations that we are attempting to infer are used to drive the implied bounds system. Instead, to infer these outlives requirements on structs, we will use a specialized, fixed-point inference similar to variance inference.
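
To make the flavor of that inference concrete, here is a toy model of the fixed-point computation (all names and the representation are invented for illustration; this is not the compiler’s actual algorithm):

use std::collections::{BTreeMap, BTreeSet};

// Toy field model: a field either contains `&'a T` directly, or
// mentions another struct instantiated with the same `'a` and `T`.
enum Field {
    DirectRef,          // e.g. `field: &'a T`
    Uses(&'static str), // e.g. `field: Direct<'a, T>`
}

// Fixed-point pass: keep propagating `T: 'a` requirements through
// field types until nothing changes, much like variance inference.
fn infer(structs: &BTreeMap<&'static str, Vec<Field>>) -> BTreeSet<&'static str> {
    let mut requires = BTreeSet::new(); // structs that need `T: 'a`
    loop {
        let before = requires.len();
        for (name, fields) in structs {
            let needed = fields.iter().any(|f| match f {
                Field::DirectRef => true,
                Field::Uses(other) => requires.contains(other),
            });
            if needed {
                requires.insert(*name);
            }
        }
        if requires.len() == before {
            return requires;
        }
    }
}

fn main() {
    let mut structs = BTreeMap::new();
    structs.insert("Direct", vec![Field::DirectRef]);        // field: &'a T
    structs.insert("Indirect", vec![Field::Uses("Direct")]); // field: Direct<'a, T>
    let inferred = infer(&structs);
    assert!(inferred.contains("Direct") && inferred.contains("Indirect"));
}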

    There is one other, relatively obscure, place where explicit lifetime annotations are used today: trait object lifetime defaults (RFC 599). The interaction there is discussed in the Guide-Level Explanation below.

    Background: outlives requirements today

    RFC 34 established the current rules around “outlives requirements”. Specifically, in order for a reference type &'a T to be “well formed” (valid), the compiler must know that the type T “outlives” the lifetime 'a – meaning that all references contained in the type T must be valid for the lifetime 'a. So, for example, the type i32 outlives any lifetime, including 'static, since it has no references at all. (The “outlives” rules were later tweaked by RFC 1214 to be more syntactic in nature.)

    In practice, this means that in Rust, when you define a struct that contains references to a generic type, or references to other references, you need to add various where clauses for that struct type to be considered valid. For example, consider the following (currently invalid) struct SharedRef:

    struct SharedRef<'a, T> {
      data: &'a T
    }

In general, for a struct definition to be valid, its field types must be known to be well-formed, based only on the struct’s where-clauses. In this case, the field data has the type &'a T – for that to be well-formed, we must know that T: 'a holds. Since we do not know what T is, we require that a where-clause be added to the struct header to assert that T: 'a must hold:

    struct SharedRef<'a, T>
      where T: 'a // currently required...
    {
      data: &'a T // ...so that we know that this field's type is well-formed
    }

In principle, similar where clauses would be required on generic functions or impls to ensure that their parameters or inputs are well-formed. However, as you may have noticed, this is not the case. For example, the following function is valid as written:

    fn foo<'a, T>(x: &'a T) {
      ..
    }  

    This is due to Rust’s support for implied bounds – in particular, every function and impl assumes that the types of its inputs are well-formed. In this case, since foo can assume that &'a T is well-formed, it can also deduce that T: 'a must hold, and hence we do not require where-clauses asserting this fact. (Currently, implied bounds are only used for lifetime requirements; pending RFC 2089 proposes to extend this mechanism to other sorts of bounds.)

    Guide-level explanation

    This RFC does not introduce any new concepts – rather, it (mostly) removes the need to be actively aware of outlives requirements. In particular, the compiler will infer the T: 'a requirements on behalf of the programmer. Therefore, the SharedRef struct we have seen in the previous section would be accepted without any annotation:

    struct SharedRef<'a, T> {
        r: &'a T
    }

    The compiler would infer that T: 'a must hold for the type SharedRef<'a, T> to be valid. In some cases, the requirement may be inferred through several structs. So, for the struct Indirect below, we would also infer that T: 'a is required, because Indirect contains a SharedRef<'a, T>:

    struct Indirect<'a, T> {
      r: SharedRef<'a, T>
    }

    Where explicit annotations would still be required

    Explicit outlives annotations would primarily be required in cases where the lifetime and the type are combined within the value of an associated type, but not in one of the impl’s input types. For example:

    trait MakeRef<'a> {
      type Type;
    }
    
    impl<'a, T> MakeRef<'a> for Vec<T>
      where T: 'a // still required
    {
      type Type = &'a T;
    }
    

    In this case, the impl has two inputs – the lifetime 'a and the type Vec<T> (note that 'a and T are the impl parameters; the inputs come from the parameters of the trait that is being implemented). Neither of these inputs requires that T: 'a. So, when we try to specify the value of the associated type as &'a T, we still require a where clause to infer that T: 'a must hold.

In turn, if this associated type were used in a struct, where-clauses would be required. As we’ll see in the reference-level explanation, this is a consequence of the fact that we do inference without regard for associated type normalization, but it makes for a relatively simple rule – explicit where clauses are needed in the presence of impls like the one above:

    struct Foo<'a, T>
      where T: 'a // still required, not inferred from `field`
    {
      field: <Vec<T> as MakeRef<'a>>::Type
    }    

    As the algorithm is currently framed, outlives requirements written on traits must also be explicitly propagated; however, this will typically occur as part of the existing bounds:

    trait Trait<'a> where Self: 'a {
      type Type;
    }
    
    struct Foo<'a, T>
      where T: Trait<'a> // implies `T: 'a` already, so no error
    {
      r: <T as Trait<'a>>::Type // requires that `T: 'a` to be WF
    }

    Trait object lifetime defaults

    RFC 599 (later amended by RFC 1156) specified the defaulting rules for trait object types. Typically, a trait object type that appears as a parameter to a struct is given the implicit bound 'static; hence Box<Debug> defaults to Box<Debug + 'static>. References to trait objects, however, are given by default the lifetime of the reference; hence &'a Debug defaults to &'a (Debug + 'a).

Structs that contain an explicit T: 'a where-clause, however, use the given lifetime 'a as the default for trait objects. Therefore, given a struct definition like the following:

    struct Ref<'a, T> where T: 'a + ?Sized { .. }

    The type Ref<'x, Debug> defaults to Ref<'x, Debug + 'x> and not Ref<'x, Debug + 'static>. Effectively the where T: 'a declaration acts as a kind of signal that Ref acts as a “reference to T”.

    This RFC does not change these defaulting rules. In particular, these defaults are applied before where-clause inference takes place, and hence are not affected by the results. Trait object defaulting therefore requires an explicit where T: 'a declaration on the struct; in fact, such explicit declarations can be thought of as existing primarily for the purpose of informing trait object lifetime defaults, since they are typically not needed otherwise.
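
Concretely, a sketch using invented struct names (and the pre-dyn trait object syntax used elsewhere in this RFC):

use std::fmt::Debug;

struct WithBound<'a, T: 'a + ?Sized> { r: &'a T }
struct WithoutBound<T: ?Sized> { b: Box<T> }

// Because of the explicit `T: 'a` on `WithBound`, the trait object
// lifetime defaults differ:
type A<'x> = WithBound<'x, Debug>;   // defaults to `WithBound<'x, Debug + 'x>`
type B = WithoutBound<Debug>;        // defaults to `WithoutBound<Debug + 'static>`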

    Long-range errors, and why they are considered unlikely

    Initially, we avoided inferring the T: 'a annotations on struct types in part out of a fear of “long-range” error messages, where it becomes hard to see the origin of an outlives requirement. Consider for example a setup like this one:

    struct Indirect<'a, T> {
      field: Direct<'a, T>
    }
    
    struct Direct<'a, T> {
      field: &'a T
    }

    Here, both of these structs require that T: 'a, but the requirement is not written explicitly. If you have access to the full definition of Direct, it might be obvious that the requirement arises from the &'a T type, but discovering this for Indirect requires looking deeply into the definitions of all types that it references.

    In principle, such errors can occur, but there are many reasons to believe that “long-range errors” will not be a source of problems in practice:

• Implied bounds ensure that code which is given an existing Indirect or Direct value (e.g., as a parameter) can already assume that the required outlives relationship holds.
    • Code that creates an Indirect or Direct value must also create the &'a T reference found in Direct, and creating that reference would only be legal if T: 'a.

    Put another way, think back on your experience writing Rust code: how often do you get an error that is solved by writing where T: 'a or where 'a: 'b outside of a struct definition? At least in the author’s experience, such errors are quite infrequent.

    That said, long-range errors can still occur, typically around impls and associated type values, as mentioned in the previous section. For example, the following impl would not compile:

    trait MakeRef<'a> {
      type Type;
    }
    
    impl<'a, T> MakeRef<'a> for Vec<T> {
      type Type = Indirect<'a, T>;
    }

Here, we would be missing a where-clause stating that T: 'a, which is required due to the type Indirect<'a, T>, just as we saw in the previous section. In such cases, tweaking the wording of the error could help to make the cause clearer. As with auto traits, the idea would be to trace the path that led to the T: 'a requirement on the user’s behalf:

    error[E0309]: the type `T` may not live long enough
     --> src/main.rs:6:3
       |
     6 |   type Type = Indirect<'a, T>;
       |   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the type `Indirect<'a, T>` requires that `T: 'a`
       |
       = note: `Indirect<'a, T>` requires that `T: 'a` because it contains a field of type `Direct<'a, T>`
       = note: `Direct<'a, T>` requires that `T: 'a` because it contains a field of type `&'a T`
    

    Impact on semver

    Due to the implied bounds rules, it is currently the case that removing where T: 'a annotations is potentially a breaking change. After this RFC, the rule is a bit more subtle: removing an annotation is still potentially a breaking change (even if it would be inferred), due to the trait object rules; but also, adding or removing a field of type &'a T could affect the results of inference, and hence may be a breaking change. As an example, consider a struct like the following:

    struct Iter<'a, T> {
      vec: &'a Vec<T> // Implies: `T: 'a`
    }

    Now imagine a function that takes Iter as an argument:

    fn foo<'a, T>(iter: Iter<'a, T>) { .. }

    Under this RFC, this function can assume that T: 'a due to the implied bounds of its parameter type. But if Iter<'a, T> were changed to (e.g.) remove the field vec, then it may no longer require that T: 'a holds, and hence foo() would no longer have the implied bound that T: 'a holds.

    This situation is considered unlikely: typically, if a struct has a lifetime parameter (such as the Iter struct), then the fact that it contains (or may contain) a borrowed reference is rather fundamental to how it works. If that borrowed reference were to be removed entirely, then the struct’s API will likely be changing in other incompatible ways, since that implies that the struct is now taking ownership of data it used to borrow (or else has access to less data than it did before).

Note: This is not the only case where changes to private field types can cause downstream errors: introducing object types can inhibit auto traits like Send and Sync. What these have in common is that they are both entangled with Rust’s memory safety checking. It is commonly observed that parallelism is anti-encapsulation, in that, to know whether two bits of code can be run in parallel, you must know what data they access, whereas for the strongest encapsulation, you wish to hide that fact. Memory safety has a similar property: to guarantee that references are always valid, we need to know where they appear, even if they are deeply nested within a struct hierarchy. Probably the best way to mitigate these sorts of subtle semver complications is to have a tool that detects and warns about incompatible changes.

    Reference-level explanation

The intention is that the outlives inference takes place at the same time in the compiler pipeline as variance inference. In particular, this is after the point where we have been able to construct “semantic” or “internal” types from the HIR (so we don’t have to define the inference in a purely syntactic fashion). However, this is still relatively early, so we wish to avoid doing things like solving traits. Like variance inference, the new inference is an iterative algorithm that continues to infer additional requirements until a fixed point is reached.

    For each struct declared by the user, we will infer a set of implicit outlives annotations. These annotations take one of several forms:

• 'a: 'b – two lifetimes (typically parameters of the struct) are required to outlive one another
• T: 'a – a type parameter T of the struct is required to outlive the lifetime 'a, which is either a parameter of the struct or 'static
• <T as Trait<..>>::Item: 'a – the value of an associated type is required to outlive the lifetime 'a, which is either a parameter of the struct or 'static (here T represents an arbitrary type).

    We will infer a minimal set of annotations A[S] for each struct S. This set must meet the constraints derived by the following algorithm.

    First, if the struct contains a where-clause C matching the above forms, then we add the constraint that C in A[S]. So, for example, in the following struct:

    struct Foo<'a, T> where T: 'a { .. }

we would add the constraint that (T: 'a) in A[Foo].

Next, for each field f of type T_f of the struct S, we derive each outlives requirement that is needed for T_f to be well-formed and require that those be included in A[S]. This is done on the unnormalized type T_f. These rules can be derived in a fairly straightforward way from the inference rules given in RFC 1214. We won’t give an exhaustive accounting of the rules, but will just sketch the outline of the algorithm:

    • A field containing a reference type like &'a T naturally requires that T: 'a must be satisfied (here T represents “some type” and not necessarily a type parameter; for example, &'a &'b i32 would lead to the outlives requirement that 'b: 'a).
    • A reference to a struct like Foo<'a, T> may also require outlives requirements. This is determined by checking the (current) value of A[Foo], after substituting its parameters.
    • For an associated type reference like <T as BarTrait<'a>>::Type, we do not attempt normalization, but rather just check that T is well-formed.
      • This is partly looking forward to a time when, at this stage, we may not know which trait is being projected from (in the compiler as currently implemented, we already do).
      • Note that we do not infer additional requirements on traits, we simply use the values given by users.
      • Note further that where-clauses declared on impls are never relevant here.

Once inference is complete, the implicit outlives requirements inferred as part of A become, for all intents and purposes, part of the predicates on the struct from this point forward.

    Note that inference is not “complete” – i.e., it is not guaranteed to find all the outlives requirements that are ultimately required (in particular, it does not find those that arise through normalization). Furthermore, it only covers outlives requirements, and not other sorts of well-formedness rules (e.g., trait requirements like T: Eq). Therefore, after inference completes, we still check that each type is well-formed just as today, but with the inferred outlives requirements in scope.

    Example 1: A reference

    The simplest example is one where we have a reference type directly contained in the struct:

    struct Foo<'a, T> {
      bar: &'a [T]
    }

    Here, the reference type requires that [T]: 'a which in turn is true if T: 'a. Hence we will create a single constraint, that (T: 'a) in A[Foo].

    Example 2: Projections

    In some cases, the outlives requirements are not of the form T: 'a, as in this example:

    struct Foo<'a, T: Iterator> {
      bar: &'a T::Item
    }

    Here, the requirement will be that <T as Iterator>::Item: 'a.

    Example 3: Explicit where-clauses

    In some cases, we may have constraints that arise from explicit where-clauses and not from field types, as in the following example:

    struct Foo<'b, U> {
      bar: Bar<'b, U>
    }
    
    struct Bar<'a, T> where T: 'a {
      x: &'a (),
      y: T
    }

Here, Bar is declared with the where clause that T: 'a. This results in the requirement that (T: 'a) in A[Bar]. Foo, meanwhile, requires that any outlives requirements for Bar<'b, U> are satisfied, and hence we have the constraint that ('a => 'b, T => U)(A[Bar]) <= A[Foo]. The minimal solution to this is:

    • A[Foo] = (U: 'b)
    • A[Bar] = (T: 'a)

This means that we would infer an implicit outlives requirement of U: 'b for Foo; for Bar we would infer T: 'a, but that was already explicitly declared.
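
To illustrate the fixed-point iteration concretely, the following self-contained sketch reproduces this inference for Foo and Bar. All names and the string-based predicate representation are invented for illustration; they bear no relation to the compiler’s actual data structures.

use std::collections::{BTreeMap, BTreeSet};

// A predicate like `T: 'a`, represented as an opaque string for simplicity.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct Pred(&'static str);

struct StructDef {
    name: &'static str,
    explicit: Vec<Pred>,      // where-clauses written by the user
    field_demands: Vec<Pred>, // requirements derived from the WF rules for field types
    // fields whose types are other structs: the referenced struct's inferred
    // predicates must be included after substituting the actual parameters
    struct_fields: Vec<(&'static str, fn(Pred) -> Pred)>,
}

fn infer(defs: &[StructDef]) -> BTreeMap<&'static str, BTreeSet<Pred>> {
    let mut a: BTreeMap<&'static str, BTreeSet<Pred>> =
        defs.iter().map(|d| (d.name, BTreeSet::new())).collect();
    let mut changed = true;
    while changed { // iterate until a fixed point is reached
        changed = false;
        for d in defs {
            let mut required: BTreeSet<Pred> = d.explicit.iter().copied().collect();
            required.extend(d.field_demands.iter().copied());
            for (other, subst) in &d.struct_fields {
                required.extend(a[*other].iter().map(|&p| subst(p)));
            }
            let cur = a.get_mut(d.name).unwrap();
            if !required.is_subset(cur) {
                cur.extend(required);
                changed = true;
            }
        }
    }
    a
}

fn main() {
    // Bar<'a, T> where T: 'a, and Foo<'b, U> { bar: Bar<'b, U> }:
    fn rename(p: Pred) -> Pred {
        // the substitution ('a => 'b, T => U), hard-coded for this example
        if p == Pred("T: 'a") { Pred("U: 'b") } else { p }
    }
    let defs = [
        StructDef { name: "Bar", explicit: vec![Pred("T: 'a")],
                    field_demands: vec![], struct_fields: vec![] },
        StructDef { name: "Foo", explicit: vec![], field_demands: vec![],
                    struct_fields: vec![("Bar", rename as fn(Pred) -> Pred)] },
    ];
    // Prints: {"Bar": {Pred("T: 'a")}, "Foo": {Pred("U: 'b")}}
    println!("{:?}", infer(&defs));
}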

    Example 4: Normalization or lack thereof

    Let us revisit the case where the where-clause is due to an impl:

    trait MakeRef<'a> {
      type Type;
    }
    
    impl<'a, T> MakeRef<'a> for Vec<T>
      where T: 'a
    {
      type Type = &'a T;
    }
    
    struct Foo<'a, T> { // Results in an error
  foo: <Vec<T> as MakeRef<'a>>::Type
    }

    Here, for the struct Foo<'a, T>, we will in fact create no constraints for its where-clause set, and hence we will infer an empty set. This is because we encounter the field type <Vec<T> as MakeRef<'a>>::Type, and in such a case we ignore the trait reference itself and just require that Vec<T> is well-formed, which does not result in any outlives requirements as it contains no references.

    Now, when we go to check the full well-formedness rules for Foo, we will get an error – this is because, in that context, we will try to normalize the associated type reference, but we will fail in doing so because we do not have any where-clause stating that T: 'a (which the impl requires).

    Example 5: Multiple regions

    Sometimes the outlives relationship can be inferred between multiple regions, not only type parameters. Consider the following:

    struct Foo<'a,'b,T> {
        x: &'a &'b T
    }

    Here the WF rules for the type &'a &'b T require that both:

    • 'b: 'a holds, because of the outer reference; and,
    • T: 'b holds, because of the inner reference.

    Drawbacks

    The primary drawbacks were covered in depth in the guide-level explanation, which also covers why they are not considered to be major problems:

    • Long-range errors
      • can be readily mitigated by better explanations
    • Removing fields can affect semver compatibility
      • considered unlikely to occur frequently in practice
      • already true that changing field types can affect semver compatibility
      • semver-like tool could help to mitigate

    Rationale and Alternatives

    Naturally, we might choose to retain the status quo, and continue to require outlives annotations on structs. Assuming however that we wish to remove them, the primary alternative is to consider going farther than this RFC in various ways.

We might try to infer outlives requirements for impls as well, and thus eliminate the final place where T: 'a requirements are needed. However, this would introduce complications in the implementation – in order to propagate requirements from impls to structs, we must be able to do associated type normalization and hence trait solving, but we would have to do so before we know the full WF requirements for each struct. The current setup avoids this complication.

    Unresolved questions

    None.

    Summary

    Extend Rust’s borrow system to support non-lexical lifetimes – these are lifetimes that are based on the control-flow graph, rather than lexical scopes. The RFC describes in detail how to infer these new, more flexible regions, and also describes how to adjust our error messages. The RFC also describes a few other extensions to the borrow checker, the total effect of which is to eliminate many common cases where small, function-local code modifications would be required to pass the borrow check. (The appendix describes some of the remaining borrow-checker limitations that are not addressed by this RFC.)

    Motivation

    What is a lifetime?

    The basic idea of the borrow checker is that values may not be mutated or moved while they are borrowed, but how do we know whether a value is borrowed? The idea is quite simple: whenever you create a borrow, the compiler assigns the resulting reference a lifetime. This lifetime corresponds to the span of the code where the reference may be used. The compiler will infer this lifetime to be the smallest lifetime that it can have that still encompasses all the uses of the reference.

    Note that Rust uses the term lifetime in a very particular way. In everyday speech, the word lifetime can be used in two distinct – but similar – ways:

    1. The lifetime of a reference, corresponding to the span of time in which that reference is used.
    2. The lifetime of a value, corresponding to the span of time before that value gets freed (or, put another way, before the destructor for the value runs).

    This second span of time, which describes how long a value is valid, is very important. To distinguish the two, we refer to that second span of time as the value’s scope. Naturally, lifetimes and scopes are linked to one another. Specifically, if you make a reference to a value, the lifetime of that reference cannot outlive the scope of that value. Otherwise, your reference would be pointing into freed memory.

    To better see the distinction between lifetime and scope, let’s consider a simple example. In this example, the vector data is borrowed (mutably) and the resulting reference is passed to a function capitalize. Since capitalize does not return the reference back, the lifetime of this borrow will be confined to just that call. The scope of data, in contrast, is much larger, and corresponds to a suffix of the fn body, stretching from the let until the end of the enclosing scope.

    fn foo() {
        let mut data = vec!['a', 'b', 'c']; // --+ 'scope
        capitalize(&mut data[..]);          //   |
    //  ^~~~~~~~~~~~~~~~~~~~~~~~~ 'lifetime //   |
        data.push('d');                     //   |
        data.push('e');                     //   |
        data.push('f');                     //   |
    } // <---------------------------------------+
    
    fn capitalize(data: &mut [char]) {
        // do something
    }

    This example also demonstrates something else. Lifetimes in Rust today are quite a bit more flexible than scopes (if not as flexible as we might like, hence this RFC):

    • A scope generally corresponds to some block (or, more specifically, a suffix of a block that stretches from the let until the end of the enclosing block) [1].
    • A lifetime, in contrast, can also span an individual expression, as this example demonstrates. The lifetime of the borrow in the example is confined to just the call to capitalize, and doesn’t extend into the rest of the block. This is why the calls to data.push that come below are legal.

    So long as a reference is only used within one statement, today’s lifetimes are typically adequate. Problems arise however when you have a reference that spans multiple statements. In that case, the compiler requires the lifetime to be the innermost expression (which is often a block) that encloses both statements, and that is typically much bigger than is really necessary or desired. Let’s look at some example problem cases. Later on, we’ll see how non-lexical lifetimes fix these cases.

    Problem case #1: references assigned into a variable

    One common problem case is when a reference is assigned into a variable. Consider this trivial variation of the previous example, where the &mut data[..] slice is not passed directly to capitalize, but is instead stored into a local variable:

    fn bar() {
        let mut data = vec!['a', 'b', 'c'];
        let slice = &mut data[..]; // <-+ 'lifetime
        capitalize(slice);         //   |
        data.push('d'); // ERROR!  //   |
        data.push('e'); // ERROR!  //   |
        data.push('f'); // ERROR!  //   |
    } // <------------------------------+

    The way that the compiler currently works, assigning a reference into a variable means that its lifetime must be as large as the entire scope of that variable. In this case, that means the lifetime is now extended all the way until the end of the block. This in turn means that the calls to data.push are now in error, because they occur during the lifetime of slice. It’s logical, but it’s annoying.

    In this particular case, you could resolve the problem by putting slice into its own block:

    fn bar() {
        let mut data = vec!['a', 'b', 'c'];
        {
            let slice = &mut data[..]; // <-+ 'lifetime
            capitalize(slice);         //   |
        } // <------------------------------+
        data.push('d'); // OK
        data.push('e'); // OK
        data.push('f'); // OK
    }

    Since we introduced a new block, the scope of slice is now smaller, and hence the resulting lifetime is smaller. Introducing a block like this is kind of artificial and also not an entirely obvious solution.

    Problem case #2: conditional control flow

    Another common problem case is when references are used in only one given match arm (or, more generally, one control-flow path). This most commonly arises around maps. Consider this function, which, given some key, processes the value found in map[key] if it exists, or else inserts a default value:

    fn process_or_default() {
        let mut map = ...;
        let key = ...;
        match map.get_mut(&key) { // -------------+ 'lifetime
            Some(value) => process(value),     // |
            None => {                          // |
                map.insert(key, V::default()); // |
                //  ^~~~~~ ERROR.              // |
            }                                  // |
        } // <------------------------------------+
    }

    This code will not compile today. The reason is that the map is borrowed as part of the call to get_mut, and that borrow must encompass not only the call to get_mut, but also the Some branch of the match. The innermost expression that encloses both of these expressions is the match itself (as depicted above), and hence the borrow is considered to extend until the end of the match. Unfortunately, the match encloses not only the Some branch, but also the None branch, and hence when we go to insert into the map in the None branch, we get an error that the map is still borrowed.

This particular example is relatively easy to work around. In many cases, one can move the code for None out from the match like so:

    fn process_or_default1() {
        let mut map = ...;
        let key = ...;
        match map.get_mut(&key) { // -------------+ 'lifetime
            Some(value) => {                   // |
                process(value);                // |
                return;                        // |
            }                                  // |
            None => {                          // |
            }                                  // |
        } // <------------------------------------+
        map.insert(key, V::default());
    }

    When the code is adjusted this way, the call to map.insert is not part of the match, and hence it is not part of the borrow. While this works, it is unfortunate to require these sorts of manipulations, just as it was when we introduced an artificial block in the previous example.

    Problem case #3: conditional control flow across functions

    While we were able to work around problem case #2 in a relatively simple, if irritating, fashion, there are other variations of conditional control flow that cannot be so easily resolved. This is particularly true when you are returning a reference out of a function. Consider the following function, which returns the value for a key if it exists, and inserts a new value otherwise (for the purposes of this section, assume that the entry API for maps does not exist):

    fn get_default<'r,K:Hash+Eq+Copy,V:Default>(map: &'r mut HashMap<K,V>,
                                                key: K)
                                                -> &'r mut V {
        match map.get_mut(&key) { // -------------+ 'r
            Some(value) => value,              // |
            None => {                          // |
                map.insert(key, V::default()); // |
                //  ^~~~~~ ERROR               // |
                map.get_mut(&key).unwrap()     // |
            }                                  // |
        }                                      // |
    }                                          // v

    At first glance, this code appears quite similar to the code we saw before, and indeed, just as before, it will not compile. In fact, the lifetimes at play are quite different. The reason is that, in the Some branch, the value is being returned out to the caller. Since value is a reference into the map, this implies that the map will remain borrowed until some point in the caller (the point 'r, to be exact). To get a better intuition for what this lifetime parameter 'r represents, consider some hypothetical caller of get_default: the lifetime 'r then represents the span of code in which that caller will use the resulting reference:

    fn caller() {
        let mut map = HashMap::new();
        ...
        {
            let v = get_default(&mut map, key); // -+ 'r
              // +-- get_default() -----------+ //  |
              // | match map.get_mut(&key) {  | //  |
              // |   Some(value) => value,    | //  |
              // |   None => {                | //  |
              // |     ..                     | //  |
              // |   }                        | //  |
              // +----------------------------+ //  |
            process(v);                         //  |
        } // <--------------------------------------+
        ...
    }

    If we attempt the same workaround for this case that we tried in the previous example, we will find that it does not work:

    fn get_default1<'r,K:Hash+Eq+Copy,V:Default>(map: &'r mut HashMap<K,V>,
                                                 key: K)
                                                 -> &'r mut V {
        match map.get_mut(&key) { // -------------+ 'r
            Some(value) => return value,       // |
            None => { }                        // |
        }                                      // |
        map.insert(key, V::default());         // |
        //  ^~~~~~ ERROR (still)                  |
        map.get_mut(&key).unwrap()             // |
    }                                          // v

    Whereas before the lifetime of value was confined to the match, this new lifetime extends out into the caller, and therefore the borrow does not end just because we exited the match. Hence it is still in scope when we attempt to call insert after the match.

    The workaround for this problem is a bit more involved. It relies on the fact that the borrow checker uses the precise control-flow of the function to determine which borrows are in scope.

    fn get_default2<'r,K:Hash+Eq+Copy,V:Default>(map: &'r mut HashMap<K,V>,
                                                 key: K)
                                                 -> &'r mut V {
    if map.contains_key(&key) {
    // ^~~~~~~~~~~~~~~~~~~~~~ 'n
            return match map.get_mut(&key) { // + 'r
                Some(value) => value,        // |
                None => unreachable!()       // |
            };                               // v
        }
    
        // At this point, `map.get_mut` was never
        // called! (As opposed to having been called,
        // but its result no longer being in use.)
        map.insert(key, V::default()); // OK now.
        map.get_mut(&key).unwrap()
    }

    What has changed here is that we moved the call to map.get_mut inside of an if, and we have set things up so that the if body unconditionally returns. What this means is that a borrow begins at the point of get_mut, and that borrow lasts until the point 'r in the caller, but the borrow checker can see that this borrow will not have even started outside of the if. It does not consider the borrow in scope at the point where we call map.insert.

    This workaround is more troublesome than the others, because the resulting code is actually less efficient at runtime, since it must do multiple lookups.

    It’s worth noting that Rust’s hashmaps include an entry API that one could use to implement this function today. The resulting code is both nicer to read and more efficient even than the original version, since it avoids extra lookups on the “not present” path as well:

    fn get_default3<'r,K:Hash+Eq,V:Default>(map: &'r mut HashMap<K,V>,
                                            key: K)
                                            -> &'r mut V {
        map.entry(key)
           .or_insert_with(|| V::default())
    }

    Regardless, the problem exists for other data structures besides HashMap, so it would be nice if the original code passed the borrow checker, even if in practice using the entry API would be preferable. (Interestingly, the limitation of the borrow checker here was one of the motivations for developing the entry API in the first place!)

    Problem case #4: mutating &mut references

    The current borrow checker forbids reassigning an &mut variable x when the referent (*x) has been borrowed. This most commonly arises when writing a loop that progressively “walks down” a data structure. Consider this function, which converts a linked list &mut List<T> into a Vec<&mut T>:

    struct List<T> {
        value: T,
        next: Option<Box<List<T>>>,
    }
    
    fn to_refs<T>(mut list: &mut List<T>) -> Vec<&mut T> {
        let mut result = vec![];
        loop {
            result.push(&mut list.value);
            if let Some(n) = list.next.as_mut() {
                list = n;
            } else {
                return result;
            }
        }
    }

    If we attempt to compile this, we get an error (actually, we get multiple errors):

    error[E0506]: cannot assign to `list` because it is borrowed
      --> /Users/nmatsakis/tmp/x.rs:11:13
       |
    9  |         result.push(&mut list.value);
       |                          ---------- borrow of `list` occurs here
    10 |         if let Some(n) = list.next.as_mut() {
    11 |             list = n;
       |             ^^^^^^^^ assignment to borrowed `list` occurs here
    

    Specifically, what’s gone wrong is that we borrowed list.value (or, more explicitly, (*list).value). The current borrow checker enforces the rule that when you borrow a path, you cannot assign to that path or any prefix of that path. In this case, that means you cannot assign to any of the following:

    • (*list).value
    • *list
    • list

    As a result, the list = n assignment is forbidden. These rules make sense in some cases (for example, if list were of type List<T>, and not &mut List<T>, then overwriting list would also overwrite list.value), but not in the case where we cross a mutable reference.

    As described in Issue #10520, there exist various workarounds for this problem. One trick is to move the &mut reference into a temporary variable that you won’t have to modify:

    fn to_refs<T>(mut list: &mut List<T>) -> Vec<&mut T> {
        let mut result = vec![];
        loop {
            let list1 = list;
            result.push(&mut list1.value);
            if let Some(n) = list1.next.as_mut() {
                list = n;
            } else {
                return result;
            }
        }
    }

    When you frame the program this way, the borrow checker sees that (*list1).value is borrowed (not list). This does not prevent us from later assigning to list.

    Clearly this workaround is annoying. The problem here, it turns out, is not specific to non-lexical lifetimes per se. Rather, it is that the rules which the borrow checker enforces when a path is borrowed are too strict and do not account for the indirection inherent in a borrowed reference. This RFC proposes a tweak to address that.

    The rough outline of our solution

    This RFC proposes a more flexible model for lifetimes. Whereas previously lifetimes were based on the abstract syntax tree, we now propose lifetimes that are defined via the control-flow graph. More specifically, lifetimes will be derived based on the MIR used internally in the compiler.

    Intuitively, in the new proposal, the lifetime of a reference lasts only for those portions of the function in which the reference may later be used (where the reference is live, in compiler speak). This can range from a few sequential statements (as in problem case #1) to something more complex, such as covering one arm in a match but not the others (problem case #2).

    However, in order to successfully type the full range of examples that we would like, we have to go a bit further than just changing lifetimes to a portion of the control-flow graph. We also have to take location into account when doing subtyping checks. This is in contrast to how the compiler works today, where subtyping relations are “absolute”. That is, in the current compiler, the type &'a () is a subtype of the type &'b () whenever 'a outlives 'b ('a: 'b), which means that 'a corresponds to a bigger portion of the function. Under this proposal, subtyping can instead be established at a particular point P. In that case, the lifetime 'a must only outlive those portions of 'b that are reachable from P.

    The ideas in this RFC have been implemented in prototype form. This prototype includes a simplified control-flow graph that allows one to create the various kinds of region constraints that can arise and implements the region inference algorithm which then solves those constraints.

    Detailed design

    Layering the design

    We describe the design in “layers”:

    1. Initially, we will describe a basic design focused on control-flow within one function.
    2. Next, we extend the control-flow graph to better handle infinite loops.
    3. Next, we extend the design to handle dropck, and specifically the #[may_dangle] attribute introduced by RFC 1327.
    4. Next, we will extend the design to consider named lifetime parameters, like those in problem case 3.
    5. Finally, we give a brief description of the borrow checker.

    Layer 0: Definitions

    Before we can describe the design, we have to define the terms that we will be using. The RFC is defined in terms of a simplified version of MIR, eliding various details that don’t introduce fundamental complexity.

    Lvalues. A MIR “lvalue” is a path that leads to a memory location. The full MIR Lvalues are defined via a Rust enum and contain a number of knobs, most of which are not relevant for this RFC. We will present a simplified form of lvalues for now:

    LV = x       // local variable
       | LV.f    // field access
       | *LV     // deref
    

    The precedence of * is low, so *a.b.c will deref a.b.c; to deref just a, one would write (*a).b.c.

    Prefixes. We say that the prefixes of an lvalue are all the lvalues you get by stripping away fields and derefs. The prefixes of *a.b would be *a.b, a.b, and a.
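
The definition can be made concrete with a tiny sketch over the simplified grammar above (the enum is invented here for illustration):

#[derive(Clone)]
enum Lvalue {
    Var(&'static str),                // x
    Field(Box<Lvalue>, &'static str), // LV.f
    Deref(Box<Lvalue>),               // *LV
}

// The prefixes of an lvalue: the lvalue itself, plus everything obtained by
// repeatedly stripping away fields and derefs.
fn prefixes(lv: &Lvalue) -> Vec<Lvalue> {
    let mut out = vec![lv.clone()];
    match lv {
        Lvalue::Var(_) => {}
        Lvalue::Field(base, _) | Lvalue::Deref(base) => out.extend(prefixes(base)),
    }
    out
}

fn main() {
    // *a.b has the prefixes *a.b, a.b, and a.
    let lv = Lvalue::Deref(Box::new(Lvalue::Field(Box::new(Lvalue::Var("a")), "b")));
    assert_eq!(prefixes(&lv).len(), 3);
}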

    Control-flow graph. MIR is organized into a control-flow graph rather than an abstract syntax tree. It is created in the compiler by transforming the “HIR” (high-level IR). The MIR CFG consists of a set of basic blocks. Each basic block has a series of statements and a terminator. Statements that concern us in this RFC fall into three categories:

    • assignments like x = y; the RHS of such an assignment is called an rvalue. There are no compound rvalues, and hence each statement is a discrete action that executes instantaneously. For example, the Rust expression a = b + c + d would be compiled into two MIR instructions, like tmp0 = b + c; a = tmp0 + d;.
    • drop(lvalue) deallocates an lvalue, if there is a value in it; in the limit, this requires runtime checks (a pass in mir, called elaborate drops, performs this transformation).
    • StorageDead(x) deallocates the stack storage for x. These are used by LLVM to allow stack-allocated values to use the same stack slot (if their live storage ranges are disjoint). Ralf Jung’s recent blog post has more details.

    Layer 1: Control-flow within a function

    Running Example

    We will explain the design with reference to a running example, called Example 4. After presenting the design, we will apply it to the three problem cases, as well as a number of other interesting examples.

    let mut foo: T = ...;
    let mut bar: T = ...;
    let mut p: &T;
    
    p = &foo;
    // (0)
    if condition {
        print(*p);
        // (1)
        p = &bar;
        // (2)
    }
    // (3)
    print(*p);
    // (4)

    The key point of this example is that the variable foo should only be considered borrowed at points 0 and 3, but not point 1. bar, in contrast, should be considered borrowed at points 2 and 3. Neither of them need to be considered borrowed at point 4, as the reference p is not used there.

    We can convert this example into the control-flow graph that follows. Recall that a control-flow graph in MIR consists of basic blocks containing a list of discrete statements and a trailing terminator:

    // let mut foo: i32;
    // let mut bar: i32;
    // let mut p: &i32;
    
    A
    [ p = &foo     ]
    [ if condition ] ----\ (true)
           |             |
           |     B       v
           |     [ print(*p)     ]
           |     [ ...           ]
           |     [ p = &bar      ]
           |     [ ...           ]
           |     [ goto C        ]
           |             |
           +-------------/
           |
    C      v
    [ print(*p)    ]
    [ return       ]
    

    We will use a notation like Block/Index to refer to a specific statement or terminator in the control-flow graph. A/0 and B/4 refer to p = &foo and goto C, respectively.

    What is a lifetime and how does it interact with the borrow checker

    To start with, we will consider lifetimes as a set of points in the control-flow graph; later in the RFC we will extend the domain of these sets to include “skolemized” lifetimes, which correspond to named lifetime parameters declared on a function. If a lifetime contains the point P, that implies that references with that lifetime are valid on entry to P. Lifetimes appear in various places in the MIR representation:

    • The types of variables (and temporaries, etc) may contain lifetimes.
    • Every borrow expression has a designated lifetime.

    We can extend our example 4 to include explicit lifetime names. There are three lifetimes that result. We will call them 'p, 'foo, and 'bar:

    let mut foo: T = ...;
    let mut bar: T = ...;
    let mut p: &'p T;
    //      --
    p = &'foo foo;
    //   ----
    if condition {
        print(*p);
        p = &'bar bar;
        //   ----
    }
    print(*p);

    As you can see, the lifetime 'p is part of the type of the variable p. It indicates the portions of the control-flow graph where p can safely be dereferenced. The lifetimes 'foo and 'bar are different: they refer to the lifetimes for which foo and bar are borrowed, respectively.

    Lifetimes attached to a borrow expression, like 'foo and 'bar, are important to the borrow checker. Those correspond to the portions of the control-flow graph in which the borrow checker will enforce its restrictions. In this case, since both borrows are shared borrows (&), the borrow checker will prevent foo from being modified during 'foo and it will prevent bar from being modified during 'bar. If these had been mutable borrows (&mut), the borrow checker would have prevented all access to foo and bar during those lifetimes.

    There are many valid choices one could make for 'foo and 'bar. This RFC however describes an inference algorithm that aims to pick the minimal lifetimes for each borrow which could possibly work. This corresponds to imposing the fewest restrictions we can.

    In the case of example 4, therefore, we wish our algorithm to compute that 'foo is {A/1, B/0, C/0}, which notably excludes the points B/1 through B/4. 'bar should be inferred to the set {B/3, B/4, C/0}. The lifetime 'p will be the union of 'foo and 'bar, since it contains all the points where the variable p is valid.

    Lifetime inference constraints

    The inference algorithm works by analyzing the MIR and creating a series of constraints. These constraints obey the following grammar:

    // A constraint set C:
    C = true
      | C, (L1: L2) @ P    // Lifetime L1 outlives Lifetime L2 at point P
    
    // A lifetime L:
    L = 'a
      | {P}
    

    Here the terminal P represents a point in the control-flow graph, and the notation 'a refers to some named lifetime inference variable (e.g., 'p, 'foo or 'bar).

    Once the constraints are created, the inference algorithm solves the constraints. This is done via fixed-point iteration: each lifetime variable begins as an empty set and we iterate over the constraints, repeatedly growing the lifetimes until they are big enough to satisfy all constraints.

    (If you’d like to compare this to the prototype code, the file regionck.rs is responsible for creating the constraints, and infer.rs is responsible for solving them.)

    Liveness

    One key ingredient to understanding how NLL should work is understanding liveness. The term “liveness” derives from compiler analysis, but it’s fairly intuitive. We say that a variable is live if the current value that it holds may be used later. This is very important to Example 4:

    let mut foo: T = ...;
    let mut bar: T = ...;
    let mut p: &'p T = &foo;
    // `p` is live here: its value may be used on the next line.
    if condition {
        // `p` is live here: its value will be used on the next line.
        print(*p);
        // `p` is DEAD here: its value will not be used.
        p = &bar;
        // `p` is live here: its value will be used later.
    }
    // `p` is live here: its value may be used on the next line.
    print(*p);
    // `p` is DEAD here: its value will not be used.

    Here you see a variable p that is assigned in the beginning of the program, and then maybe re-assigned during the if. The key point is that p becomes dead (not live) in the span before it is reassigned. This is true even though the variable p will be used again, because the value that is in p will not be used.

Traditional compilers compute liveness for variables, but we wish to compute liveness for lifetimes. We can extend a variable-based analysis to lifetimes by saying that a lifetime L is live at a point P if there is some variable p which is live at P, and L appears in the type of p. (Later on, when we cover the dropck, we will use a more selective notion of liveness for lifetimes, in which some of the lifetimes in a variable’s type may be live while others are not.) So, in our running example, the lifetime 'p would be live at precisely the same points that p is live. The lifetimes 'foo and 'bar have no points where they are (directly) live, since they do not appear in the types of any variables.

    • However, this does not mean these lifetimes are irrelevant; as shown below, subtyping constraints introduced by subsequent analyses will eventually require 'foo and 'bar to outlive 'p.

    Liveness-based constraints for lifetimes

    The first set of constraints that we generate are derived from liveness. Specifically, if a lifetime L is live at the point P, then we will introduce a constraint like:

    (L: {P}) @ P
    

    (As we’ll see later when we cover solving constraints, this constraint effectively just inserts P into the set for L. In fact, the prototype doesn’t bother to materialize such constraints, instead just immediately inserting P into L.)

    For our running example, this means that we would introduce the following liveness constraints:

    ('p: {A/1}) @ A/1
    ('p: {B/0}) @ B/0
    ('p: {B/3}) @ B/3
    ('p: {B/4}) @ B/4
    ('p: {C/0}) @ C/0
    

    Subtyping

    Whenever references are copied from one location to another, the Rust subtyping rules require that the lifetime of the source reference outlives the lifetime of the target location. As discussed earlier, in this RFC, we extend the notion of subtyping to be location-aware, meaning that we take into account the point where the value is being copied.

    For example, at the point A/0, our running example contains a borrow expression p = &'foo foo. In this case, the borrow expression will produce a reference of type &'foo T, where T is the type of foo. This value is then assigned to p, which has the type &'p T. Therefore, we wish to require that &'foo T be a subtype of &'p T. Moreover, this relation needs to hold at the point A/1 – the successor of the point A/0 where the assignment occurs (this is because the new value of p is first visible in A/1). We write that subtyping constraint as follows:

    (&'foo T <: &'p T) @ A/1
    

    The standard Rust subtyping rules (two examples of which are given below) can then “break down” this subtyping rule into the lifetime constraints we need for inference:

    (T_a <: T_b) @ P
    ('a: 'b) @ P      // <-- a constraint for our inference algorithm
    ------------------------
    (&'a T_a <: &'b T_b) @ P
    
    (T_a <: T_b) @ P
    (T_b <: T_a) @ P  // (&mut T is invariant)
    ('a: 'b) @ P      // <-- another constraint
    ------------------------
    (&'a mut T_a <: &'b mut T_b) @ P
    

    In the case of our running example, we generate the following subtyping constraints:

    (&'foo T <: &'p T) @ A/1
    (&'bar T <: &'p T) @ B/3
    

    These can be converted into the following lifetime constraints:

    ('foo: 'p) @ A/1
    ('bar: 'p) @ B/3
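
The following sketch mechanizes this breakdown for the simplified reference types used in this section (the Ty enum and all names are invented for illustration):

#[derive(Clone, Copy, PartialEq)]
enum Mutability { Shared, Mut }

enum Ty {
    Base(&'static str),                     // e.g. T
    Ref(&'static str, Mutability, Box<Ty>), // &'a T or &'a mut T
}

// Break (a <: b) @ p down into lifetime constraints, per the rules above.
fn subtype(a: &Ty, b: &Ty, p: &'static str, out: &mut Vec<String>) {
    match (a, b) {
        (Ty::Base(x), Ty::Base(y)) if x == y => {}
        (Ty::Ref(la, ma, ta), Ty::Ref(lb, mb, tb)) if ma == mb => {
            out.push(format!("({}: {}) @ {}", la, lb, p)); // ('a: 'b) @ P
            subtype(ta, tb, p, out);
            if *ma == Mutability::Mut {
                subtype(tb, ta, p, out); // &mut T is invariant in T
            }
        }
        _ => panic!("type error"),
    }
}

fn main() {
    // (&'foo T <: &'p T) @ A/1, from the running example:
    let src = Ty::Ref("'foo", Mutability::Shared, Box::new(Ty::Base("T")));
    let dst = Ty::Ref("'p", Mutability::Shared, Box::new(Ty::Base("T")));
    let mut out = Vec::new();
    subtype(&src, &dst, "A/1", &mut out);
    println!("{:?}", out); // ["('foo: 'p) @ A/1"]
}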
    

    Reborrow constraints

    There is one final source of constraints. It frequently happens that we have a borrow expression that “reborrows” the referent of an existing reference:

    let x: &'x i32 = ...;
    let y: &'y i32 = &*x;
    

    In such cases, there is a connection between the lifetime 'y of the borrow and the lifetime 'x of the original reference. In particular, 'x must outlive 'y ('x: 'y). In simple cases like this, the relationship is the same regardless of whether the original reference x is a shared (&) or mutable (&mut) reference. However, in more complex cases that involve multiple dereferences, the treatment is different.

Supporting prefixes. To define the reborrow constraints, we first introduce the idea of supporting prefixes – this definition will be useful in a few places. The supporting prefixes for an lvalue are formed by stripping away fields and derefs, except that we stop when we reach the deref of a shared reference. Intuitively, shared references are different because they are Copy – and hence one could always copy the shared reference into a temporary and get an equivalent path. Here are some examples of supporting prefixes:

    let r: (&(i32, i64), (f32, f64));
    
    // The path (*r.0).1 has type `i64` and supporting prefixes:
    // - (*r.0).1
    // - *r.0
    
    // The path r.1.0 has type `f32` and supporting prefixes:
    // - r.1.0
    // - r.1
    // - r
    
    let m: (&mut (i32, i64), (f32, f64));
    
    // The path (*m.0).1 has type `i64` and supporting prefixes:
    // - (*m.0).1
    // - *m.0
    // - m.0
    // - m
    

    Reborrow constraints. Consider the case where we have a borrow (shared or mutable) of some lvalue lv_b for the lifetime 'b:

    lv_l = &'b lv_b      // or:
    lv_l = &'b mut lv_b
    

    In that case, we compute the supporting prefixes of lv_b, and find every deref lvalue *lv in the set where lv is a reference with lifetime 'a. We then add a constraint ('a: 'b) @ P, where P is the point following the borrow (that’s the point where the borrow takes effect).
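
As a rough sketch (hypothetical types; in the compiler, the mutability and lifetime of each dereferenced reference would be read off the lvalue’s type), the rule might look like this:

#[derive(Clone, Copy, PartialEq)]
enum Mutability { Shared, Mut }

#[derive(Clone)]
enum Lvalue {
    Var(&'static str),
    Field(Box<Lvalue>, &'static str),
    // a deref, recording the mutability and lifetime of the reference
    // that is being dereferenced
    Deref(Box<Lvalue>, Mutability, &'static str),
}

// Supporting prefixes: strip fields and derefs, but stop at the deref of a
// shared reference; only derefs of &mut references are traversed.
fn supporting_prefixes(lv: &Lvalue) -> Vec<Lvalue> {
    let mut out = vec![lv.clone()];
    match lv {
        Lvalue::Var(_) => {}
        Lvalue::Field(base, _) => out.extend(supporting_prefixes(base)),
        Lvalue::Deref(base, m, _) => {
            if *m == Mutability::Mut {
                out.extend(supporting_prefixes(base));
            }
        }
    }
    out
}

// For `lv_l = &'b lv_b` taking effect at point `p`: emit ('a: 'b) @ p for
// every deref of a reference with lifetime 'a among the supporting prefixes.
fn reborrow_constraints(lv_b: &Lvalue, b: &str, p: &str) -> Vec<String> {
    supporting_prefixes(lv_b)
        .into_iter()
        .filter_map(|lv| match lv {
            Lvalue::Deref(_, _, a) => Some(format!("({}: {}) @ {}", a, b, p)),
            _ => None,
        })
        .collect()
}

fn main() {
    // **q from Example 3 below, where q: &'q mut &'p mut Foo:
    let q = Lvalue::Var("q");
    let star_q = Lvalue::Deref(Box::new(q), Mutability::Mut, "'q");
    let star_star_q = Lvalue::Deref(Box::new(star_q), Mutability::Mut, "'p");
    // Prints ["('p: 'r) @ P", "('q: 'r) @ P"]
    println!("{:?}", reborrow_constraints(&star_star_q, "'r", "P"));
}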

    Let’s look at some examples. In each case, we will link to the corresponding test from the prototype implementation.

    Example 1. To see why this rule is needed, let’s first consider a simple example involving a single reference:

    let mut foo: i32     = 22;
    let r_a: &'a mut i32 = &'a mut foo;
    let r_b: &'b mut i32 = &'b mut *r_a;
    ...
    use(r_b);

    In this case, the supporting prefixes of *r_a are *r_a and r_a (because r_a is a mutable reference, we recurse). Only one of those, *r_a, is a deref lvalue, and the reference r_a being dereferenced has the lifetime 'a. We would add the constraint that 'a: 'b, thus ensuring that foo is considered borrowed so long as r_b is in use. Without this constraint, the lifetime 'a would end after the second borrow, and hence foo would be considered unborrowed, even though *r_b could still be used to access foo.

    Example 2. Consider now a case with a double indirection:

    let mut foo: i32     = 22;
    let mut r_a: &'a i32 = &'a foo;
    let r_b: &'b &'a i32 = &'b r_a;
    let r_c: &'c i32     = &'c **r_b;
    // What is considered borrowed here?
    use(r_c);

Just as before, it is important that, so long as r_c is in use, foo is considered borrowed. However, what about the variable r_a: should it be considered borrowed? The answer is no: once r_c is initialized, the value of r_a is no longer important, and it would be fine to (for example) overwrite r_a with a new value, even as foo is still considered borrowed. This result falls out from our reborrowing rules: the supporting prefixes of **r_b are just **r_b itself. We do not add any more prefixes because this path is already a dereference of *r_b, and *r_b has (shared reference) type &'a i32. Therefore, we would add one reborrow constraint: that 'a: 'c. This constraint ensures that as long as r_c is in use, the borrow of foo remains in force, but the borrow of r_a (which has the lifetime 'b) can expire.

    Example 3. The previous example showed how a borrow of a shared reference can expire once it has been dereferenced. With mutable references, however, this is not safe. Consider the following example:

let mut foo = Foo { ... };
let mut p: &'p mut Foo = &mut foo;
    let q: &'q mut &'p mut Foo = &mut p;
    let r: &'r mut Foo = &mut **q;
    use(*p); // <-- This line should result in an ERROR
    use(r);

    The key point here is that we create a reference r by reborrowing **q; r is then later used in the final line of the program. This use of r must extend the lifetime of the borrows used to create both p and q. Otherwise, one could access (and mutate) the same memory through both *r and *p. (In fact, the real rustc did in its early days have a soundness bug much like this one.)

    Because dereferencing a mutable reference does not stop the supporting prefixes from being enumerated, the supporting prefixes of **q are **q, *q, and q. Therefore, we add two reborrow constraints: 'q: 'r and 'p: 'r, and hence both borrows are indeed considered in scope at the line in question.

    As an alternate way of looking at the previous example, consider it like this. To create the mutable reference p, we get a “lock” on foo (that lasts so long as p is in use). We then take a lock on the mutable reference p to create q; this lock must last for as long as q is in use. When we create r by borrowing **q, that is the last direct use of q – so you might think we can release the lock on p, since q is no longer in (direct) use. However, that would be unsound, since then r and *p could both be used to access the same memory. The key is to recognize that r represents an indirect use of q (and q in turn is an indirect use of p), and hence so long as r is in use, p and q must also be considered “in use” (and hence their “locks” still enforced).

    Solving constraints

    Once the constraints are created, the inference algorithm solves the constraints. This is done via fixed-point iteration: each lifetime variable begins as an empty set and we iterate over the constraints, repeatedly growing the lifetimes until they are big enough to satisfy all constraints.

    The meaning of a constraint like ('a: 'b) @ P is that, starting from the point P, the lifetime 'a must include all points in 'b that are reachable from the point P. The implementation does a depth-first search starting from P; the search stops if we exit the lifetime 'b. Otherwise, for each point we find, we add it to 'a.

    In our example, the full set of constraints is:

    ('foo: 'p) @ A/1
    ('bar: 'p) @ B/3
    ('p: {A/1}) @ A/1
    ('p: {B/0}) @ B/0
    ('p: {B/3}) @ B/3
    ('p: {B/4}) @ B/4
    ('p: {C/0}) @ C/0
    

    Solving these constraints results in the following lifetimes, which are precisely the answers we expected:

    'p   = {A/1, B/0, B/3, B/4, C/0}
    'foo = {A/1, B/0, C/0}
    'bar = {B/3, B/4, C/0}
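
For concreteness, here is a minimal, self-contained sketch of this solving procedure, applied to the running example. The point and variable numbering is invented for illustration; this is not the prototype’s actual code.

use std::collections::{BTreeSet, HashMap};

type Point = usize;
type Region = BTreeSet<Point>;

// A constraint ('sup: 'sub) @ point, with lifetime variables as indices.
struct Constraint { sup: usize, sub: usize, point: Point }

fn solve(
    succ: &HashMap<Point, Vec<Point>>, // CFG successor edges
    liveness: &[(usize, Point)],       // (lifetime variable, live point) pairs
    constraints: &[Constraint],
    num_vars: usize,
) -> Vec<Region> {
    let mut regions = vec![Region::new(); num_vars];
    // Liveness constraints (L: {P}) @ P just insert P into L directly.
    for &(var, p) in liveness {
        regions[var].insert(p);
    }
    let mut changed = true;
    while changed { // grow the regions until a fixed point is reached
        changed = false;
        for c in constraints {
            // Depth-first search from c.point: add to `sup` every point of
            // `sub` reachable from c.point, stopping when we exit `sub`.
            let sub = regions[c.sub].clone();
            let mut stack = vec![c.point];
            let mut seen = BTreeSet::new();
            while let Some(p) = stack.pop() {
                if !sub.contains(&p) || !seen.insert(p) {
                    continue;
                }
                changed |= regions[c.sup].insert(p);
                stack.extend(succ.get(&p).into_iter().flatten().copied());
            }
        }
    }
    regions
}

fn main() {
    // Points: A/0=0, A/1=1, B/0=2, B/1=3, B/2=4, B/3=5, B/4=6, C/0=7, C/1=8.
    // Lifetime variables: 'p=0, 'foo=1, 'bar=2.
    let succ: HashMap<Point, Vec<Point>> = [
        (0, vec![1]), (1, vec![2, 7]), (2, vec![3]), (3, vec![4]),
        (4, vec![5]), (5, vec![6]), (6, vec![7]), (7, vec![8]),
    ].into_iter().collect();
    let liveness = [(0, 1), (0, 2), (0, 5), (0, 6), (0, 7)];
    let constraints = [
        Constraint { sup: 1, sub: 0, point: 1 }, // ('foo: 'p) @ A/1
        Constraint { sup: 2, sub: 0, point: 5 }, // ('bar: 'p) @ B/3
    ];
    let r = solve(&succ, &liveness, &constraints, 3);
    // 'p = {1, 2, 5, 6, 7}; 'foo = {1, 2, 7}; 'bar = {5, 6, 7}, as in the text.
    println!("'p = {:?}, 'foo = {:?}, 'bar = {:?}", r[0], r[1], r[2]);
}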
    

    Intuition for why this algorithm is correct

    For the algorithm to be correct, there is a critical invariant that we must maintain. Consider some path H that is borrowed with lifetime L at a point P to create a reference R; this reference R (or some copy/move of it) is then later dereferenced at some point Q.

    We must ensure that the reference has not been invalidated: this means that the memory which was borrowed must not have been freed by the time we reach Q. If the reference R is a shared reference (&T), then the memory must also not have been written (modulo UnsafeCell). If the reference R is a mutable reference (&mut T), then the memory must not have been accessed at all, except through the reference R. To guarantee these properties, we must prevent actions that might affect the borrowed memory for all of the points between P (the borrow) and Q (the use).

    This means that L must at least include all the points between P and Q. There are two cases to consider. First, the case where the access at point Q occurs through the same reference R that was created by the borrow:

    R = &H; // point P
    ...
    use(R); // point Q
    

    In this case, the variable R will be live on all the points between P and Q. The liveness-based rules suffice for this case: specifically, because the type of R includes the lifetime L, we know that L must include all the points between P and Q, since R is live there.

    The second case is when the memory referenced by R is accessed, but through an alias (or move):

    R = &H;  // point P
    R2 = R;  // last use of R, point A
    ...
    use(R2); // point Q
    

    In this case, the liveness rules alone do not suffice. The problem is that the R2 = R assignment may well be the last use of R, and so the variable R is dead at this point. However, the value in R will still be dereferenced later (through R2), and hence we want the lifetime L to include those points. This is where the subtyping constraints come into play: the type of R2 includes a lifetime L2, and the assignment R2 = R will establish an outlives constraint (L: L2) @ A between L and L2. Moreover, this new variable R2 must be live between the assignment and the ultimate use (that is, along the path A…Q). Putting these two facts together, we see that L will ultimately include the points from P to A (because of the liveness of R) and the points from A to Q (because the subtyping requirement propagates the liveness of R2).

    Note that it is possible for these lifetimes to have gaps. This can occur when the same variable is used and overwritten multiple times:

    let R: &L i32;
    let R2: &L2 i32;
    
    R = &H1; // point P1
    R2 = R;  // point A1
    use(R2); // point Q1
    ...
    R2 = &H2; // point P2
    use(R2);  // point Q2
    

In this example, the liveness constraints on R2 will ensure that L2 (the lifetime in its type) includes Q1 and Q2 (because R2 is live at those two points), but not the “…” nor the points P1 or P2. Note that the subtyping relationship (L: L2) @ A1 here ensures that L also includes Q1, but doesn’t require that L includes Q2 (even though L2 has point Q2). This is because the value in R2 at Q2 cannot have come from the assignment at A1; if it could have, then either R2 would have to be live between A1 and Q2, or else there would be a subtyping constraint.

    Other examples

    Let us work through some more examples. We begin with problem cases #1 and #2 (problem case #3 will be covered after we cover named lifetimes in a later section).

    Problem case #1.

    Translated into MIR, the example will look roughly as follows:

    let mut data: Vec<i32>;
    let slice: &'slice mut i32;
    START {
        data = ...;
        slice = &'borrow mut data;
        capitalize(slice);
        data.push('d');
        data.push('e');
        data.push('f');
    }

    The constraints generated will be as follows:

    ('slice: {START/2}) @ START/2
    ('borrow: 'slice) @ START/2
    

    Both 'slice and 'borrow will therefore be inferred to START/2, and hence the accesses to data in START/3 and the following statements are permitted.

    Problem case #2.

    Translated into MIR, the example will look roughly as follows (some irrelevant details are elided). Note that the match statement is translated into a SWITCH, which tests the variant, and a “downcast”, which lets us extract the contents out from the Some variant (this operation is specific to MIR and has no Rust equivalent, other than as part of a match).

    let map: HashMap<K,V>;
    let key: K;
    let tmp0: &'tmp0 mut HashMap<K,V>;
    let tmp1: &K;
    let tmp2: Option<&'tmp2 mut V>;
    let value: &'value mut V;
    
    START {
    /*0*/ map = ...;
    /*1*/ key = ...;
    /*2*/ tmp0 = &'map mut map;
    /*3*/ tmp1 = &key;
    /*4*/ tmp2 = HashMap::get_mut(tmp0, tmp1);
    /*5*/ SWITCH tmp2 { None => NONE, Some => SOME }
    }
    
    NONE {
    /*0*/ ...
    /*1*/ goto EXIT;
    }
    
    SOME {
    /*0*/ value = tmp2.downcast<Some>.0;
    /*1*/ process(value);
    /*2*/ goto EXIT;
    }
    
    EXIT {
    }
    

    The following liveness constraints are generated:

    ('tmp0: {START/3}) @ START/3
    ('tmp0: {START/4}) @ START/4
    ('tmp2: {SOME/0}) @ SOME/0
    ('value: {SOME/1}) @ SOME/1
    

    The following subtyping-based constraints are generated:

    ('map: 'tmp0) @ START/3
    ('tmp0: 'tmp2) @ START/5
    ('tmp2: 'value) @ SOME/1
    

    Ultimately, the lifetime we are most interested in is 'map, which indicates the duration for which map is borrowed. If we solve the constraints above, we will get:

    'map == {START/3, START/4, SOME/0, SOME/1}
    'tmp0 == {START/3, START/4, SOME/0, SOME/1}
    'tmp2 == {SOME/0, SOME/1}
    'value == {SOME/1}
    

    These results indicate that map can be mutated in the None arm; map could also be mutated in the Some arm, but only after process() is called (i.e., starting at SOME/2). This is the desired result.

    Example 4, invariant

It’s worth looking at a variant of our running example (“Example 4”). This is the same pattern as before, but instead of using &'a T references, we use Foo<'a> references, which are invariant with respect to 'a. This means that the 'a lifetime in a Foo<'a> value cannot be approximated (i.e., you can’t make it shorter, as you can with a normal reference). Usually invariance arises because of mutability (e.g., Foo<'a> might have a field of type Cell<&'a ()>). The key point here is that invariance actually makes no difference at all to the outcome. This is true because of location-based subtyping.

    let mut foo: T = ...;
    let mut bar: T = ...;
let mut p: Foo<'p>;
    
    p = Foo::new(&foo);
    if condition {
        print(*p);
        p = Foo::new(&bar);
    }
    print(*p);

    Effectively, we wind up with the same constraints as before, but where we only had 'foo: 'p/'bar: 'p constraints before (due to subtyping), we now also have 'p: 'foo and 'p: 'bar constraints:

    ('foo: 'p) @ A/1
    ('p: 'foo) @ A/1
    ('bar: 'p) @ B/3
    ('p: 'bar) @ B/3
    ('p: {A/1}) @ A/1
    ('p: {B/0}) @ B/0
    ('p: {B/3}) @ B/3
    ('p: {B/4}) @ B/4
    ('p: {C/0}) @ C/0
    

    The key point is that the new constraints don’t affect the final answer: the new constraints were already satisfied with the older answer.

    vec-push-ref

    In previous iterations of this proposal, the location-aware subtyping rules were replaced with transformations such as SSA form. The vec-push-ref example demonstrates the value of location-aware subtyping in contrast to these approaches.

    let foo: i32;
    let vec: Vec<&'vec i32>;
    let p: &'p i32;
    
    foo = ...;
    vec = Vec::new();
    p = &'foo foo;
    if true {
        vec.push(p);
    } else {
        // Key point: `foo` not borrowed here.
        use(vec);
    }

    This can be converted to control-flow graph form:

    block START {
        vec = Vec::new();
        p = &'foo foo;
        goto B C;
    }
    
    block B {
        vec.push(p);
        goto EXIT;
    }
    
    block C {
        // Key point: `foo` not borrowed here
        use(vec);
        goto EXIT;
    }
    
    block EXIT {
    }
    

    Here the relations from liveness are:

    ('vec: {START/1}) @ START/1
    ('vec: {START/2}) @ START/2
    ('vec: {B/0}) @ B/0
    ('vec: {C/0}) @ C/0
    ('p: {START/2}) @ START/2
    ('p: {B/0}) @ B/0
    

    Meanwhile, the borrow of foo and the call to vec.push(p) establish these subtyping relations:

    ('p: 'vec) @ B/1
    ('foo: 'p) @ START/2
    

    The solution is:

    'vec = {START/1, START/2, B/0, C/0}
    'p = {START/2, B/0}
    'foo = {START/2, B/0}
    

    What makes this example interesting is that the lifetime 'vec must include both halves of the if – because it is used in both branches – but 'vec only becomes “entangled” with the lifetime 'p on one path. Thus even though 'p has to outlive 'vec, 'p never winds up including the “else” branch thanks to location-aware subtyping.

    Layer 2: Avoiding infinite loops

    The previous design was described in terms of the “pure” MIR control-flow graph. However, using the raw graph has some undesirable properties around infinite loops. In such cases, the graph has no exit, which undermines the traditional definition of reverse analyses like liveness. To address this, when we build the control-flow graph for our functions, we will augment it with additional edges – in particular, for every infinite loop (loop { }), we will add false “unwind” edges. This ensures that the control-flow graph has a final exit node (the successor of the RETURN and RESUME nodes) that postdominates all other nodes in the graph.

    If we did not add such edges, the analysis would also allow a number of surprising programs to type-check. For example, it would be possible to borrow local variables with 'static lifetime, so long as the function never returned:

    fn main() {
        let x: usize;
        let y: &'static usize = &x;
        loop { }
    }

    This would work because (as covered in detail under the borrow check section) the StorageDead(x) instruction would never be reachable, and hence any lifetime of borrow would be acceptable. This further leads to other surprising programs that still type-check, such as this example which uses an (incorrect, but declared as unsafe) API for spawning threads:

    let scope = Scope::new();
    let mut foo = 22;
    
    unsafe {
        // dtor joins the thread
        let _guard = scope.spawn(&mut foo);
        loop {
            foo += 1;
        }
        // drop of `_guard` joins the thread
    }

    Without the unwind edges, this code would pass the borrowck, since the drop of _guard (and StorageDead instruction) is not reachable, and hence _guard is not considered live (after all, its destructor will indeed never run). However, this would permit the foo variable to be modified both during the infinite loop and by the thread launched by scope.spawn(), which was given access to an &mut foo reference (albeit one with a theoretically short lifetime).

    With the false unwind edge, the compiler essentially always assumes that a destructor may run, since control might theoretically unwind out of any scope. This extends the &mut foo borrow given to scope.spawn() to cover the body of the loop, resulting in a borrowck error.

    Layer 3: Accommodating dropck

    MIR includes an action that corresponds to “dropping” a variable:

    DROP(variable)
    

    Note that while MIR supports general drops of any lvalue, at the point where this analysis is running, we are always dropping entire variables at a time. This operation executes the destructor for variable, effectively “de-initializing” the memory in which the value resides (if the variable – or parts of the variable – have already been dropped, then drop has no effect; this is not relevant to the current analysis).

    Interestingly, in many cases dropping a value does not require that the lifetimes in the dropped value be valid. After all, dropping a reference of type &'a T or &'a mut T is defined as a no-op, so it does not matter if the reference points at valid memory. In cases like this, we say that the lifetime 'a may dangle. This is inspired by the C term “dangling pointer” which means a pointer to freed or invalid memory.

    However, if that same reference is stored in the field of a struct which implements the Drop trait, then the struct may, during its destructor, access the referenced value, so it’s very important that the reference be valid in that case. Put another way, if you have a value v of type Foo<'a> that implements Drop, then 'a typically cannot dangle when v is dropped (just as 'a would not be allowed to dangle for any other operation).

    More generally, RFC 1327 defined specific rules for which lifetimes in a type may dangle during drop and which may not. We integrate those rules into our liveness analysis as follows: the MIR instruction DROP(variable) is not treated like other MIR instructions when it comes to liveness. In a sense, conceptually we run two distinct liveness analyses (in practice, the prototype uses two bits per variable):

    1. The first, which we’ve already seen, indicates when a variable’s current value may be used in the future. This corresponds to “non-drop” uses of the variable in the MIR. Whenever a variable is live by this definition, all of the lifetimes in its type are live.
    2. The second, which we are adding now, indicates when a variable’s current value may be dropped in the future. This corresponds to “drop” uses of the variable in the MIR. Whenever a variable is live in this sense, all of the lifetimes in its type except those marked as may-dangle are live.
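
    For reference, RFC 1327’s escape hatch looks roughly like this (a sketch using the unstable dropck_eyepatch feature that implements it; Wrapper is an invented example type):

    #![feature(dropck_eyepatch)] // nightly-only

    struct Wrapper<'a> {
        field: &'a u32,
    }

    // Asserts that `'a` may dangle when a `Wrapper` is dropped: the
    // destructor promises (unsafely) not to access the borrowed data.
    unsafe impl<#[may_dangle] 'a> Drop for Wrapper<'a> {
        fn drop(&mut self) {
            // must not read `*self.field` here
        }
    }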

    Permitting lifetimes to dangle during drop is very important! In fact, it is essential to even the most basic non-lexical lifetime examples, such as Problem Case #1. After all, if we translate Problem Case #1 into MIR, we see that the reference slice will wind up being dropped at the end of the block:

    let mut data: Vec<char>;
    let slice: &'slice mut [char];
    START {
        ...
        slice = &'borrow mut data;
        capitalize(slice);
        data.push('d');
        data.push('e');
        data.push('f');
        DROP(slice);
        DROP(data);
    }

    This poses no problem for our analysis, however, because 'slice “may dangle” during the drop, and hence is not considered live.

    Layer 4: Named lifetimes

    Until now, we’ve only considered lifetimes that are confined to the extent of a function. Often, we want to reason about lifetimes that begin or end after the current function has ended. More subtly, we sometimes want lifetimes that begin and end in the current function, but which may (along some paths) extend into the caller. Consider Problem Case #3 (the corresponding test case in the prototype is the get-default test):

    fn get_default<'r,K,V:Default>(map: &'r mut HashMap<K,V>,
                                   key: K)
                                   -> &'r mut V {
        match map.get_mut(&key) { // -------------+ 'r
            Some(value) => value,              // |
            None => {                          // |
                map.insert(key, V::default()); // |
                //  ^~~~~~ ERROR               // |
                map.get_mut(&key).unwrap()     // |
            }                                  // |
        }                                      // |
    }                                          // v

    When we translate this into MIR, we get something like the following (this is “pseudo-MIR”):

    block START {
      m1 = &'m1 mut *map;  // temporary created for `map.get_mut()` call
      v = Map::get_mut(m1, &key);
      switch v { SOME NONE };
    }
    
    block SOME {
      return = v.as<Some>.0; // assign to return value slot
      goto END;
    }
    
    block NONE {
      Map::insert(&mut *map, key, ...);
      m2 = &'m2 mut *map;  // temporary created for `map.get_mut()` call
      v = Map::get_mut(m2, &key);
      return = ... // "unwrap" of `v`
      goto END;
    }
    
    block END {
      return;
    }
    

    The key to this example is that the first borrow of map, with the lifetime 'm1, must extend to the end of 'r, but only if we branch to SOME. Otherwise, it should end once we enter the NONE block.

    To accommodate cases like this, we will extend the notion of a region so that it includes not only points in the control-flow graph, but also includes a (possibly empty) set of “end regions” for various named lifetimes. We denote these as end('r) for some named region 'r. The region end('r) can be understood semantically as referring to some portion of the caller’s control-flow graph (actually, they could extend beyond the end of the caller, into the caller’s caller, and so forth, but that doesn’t concern us). This new region might then be denoted as the following (in pseudocode form):

    struct Region {
      points: Set<Point>,
      end_regions: Set<NamedLifetime>,
    }

    In this case, when a type mentions a named lifetime, such as 'r, it can be represented by a region that includes:

    • the entire CFG,
    • and, the end region for that named lifetime (end('r)).

    Furthermore, we can elaborate the set to include end('x) for every named lifetime 'x such that 'r: 'x. This is because, if 'r: 'x, then we know that 'r cannot end until 'x has already ended.
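
    For example, under this representation, the region for a named lifetime 'r declared with the bound 'r: 'x might be denoted as follows (pseudocode, reusing the Region struct above):

    Region {
      points: { /* every point in the CFG */ },
      end_regions: { end('r), end('x) },
    }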

    Finally, we must adjust our definition of subtyping to accommodate this amended definition of a region, which we do as follows. When we have an outlives relation

    'b: 'a @ P
    

    where the end point of the CFG is reachable from P without leaving 'a, the existing inference algorithm would simply add the end-point to 'b and stop. The new algorithm would also add any end regions that are included in 'a to 'b at that time. (Expressed less operationally, 'b only outlives 'a if it also includes the end-regions that 'a includes, presuming that the end point of the CFG is reachable from P). The reason that we require the end point of the CFG to be reachable is because otherwise the data never escapes the current function, and hence end('r) is not reachable (since end('r) only covers the code in callers that executes after the return).
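
    Expressed as pseudocode, the amended rule for enforcing an outlives relation might look like this (a sketch of the prose above, not the prototype’s actual code):

    // enforce ('b: 'a) @ P
    for each point Q in 'a reachable from P without leaving 'a:
        add Q to 'b.points
    if the exit node of the CFG was reached:
        add every end region in 'a.end_regions to 'b.end_regions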

    NB: This part of the prototype is partially implemented. Issue #12 describes the current status and links to the in-progress PRs.

    Layer 5: How the borrow check works

    For the most part, the focus of this RFC is on the structure of lifetimes, but it’s worth talking a bit about how to integrate these non-lexical lifetimes into the borrow checker. In particular, along the way, we’d like to fix two shortcomings of the borrow checker:

    First, support nested method calls like vec.push(vec.len()). Here, the plan is to continue with the mut2 borrow solution proposed in RFC 2025. This RFC does not (yet) propose one of the type-based solutions described in RFC 2025, such as “borrowing for the future” or Ref2. The reasons why are discussed in the Alternatives section. For simplicity, this description of the borrow checker ignores RFC 2025. The extensions described here are fairly orthogonal to the changes proposed in RFC 2025, which in effect cause the start of a borrow to be delayed.

    Second, permit variables containing mutable references to be modified, even if their referent is borrowed. This refers to the “Problem Case #4” described in the introduction; we wish to accept the original program.

    Borrow checker phase 1: computing loans in scope

    The first phase of the borrow checker computes, at each point in the CFG, the set of in-scope loans. A “loan” is represented as a tuple ('a, shared|uniq|mut, lvalue) indicating:

    1. the lifetime 'a for which the value was borrowed;
    2. whether this was a shared, unique, or mutable loan;
      • “unique” loans are exactly like mutable loans, but they do not permit mutation of their referents. They are used only in closure desugarings and are not part of Rust’s surface syntax.
    3. the lvalue that was borrowed (e.g., x or (*x).foo).

    The set of in-scope loans at each point is found via a fixed-point dataflow computation. We create a loan tuple from each borrow rvalue in the MIR (that is, every assignment statement like tmp = &'a b.c.d), giving each tuple a unique index i. We can then represent the set of loans that are in scope at a particular point using a bit-set and do a standard forward data-flow propagation.

    For a statement at point P in the graph, we define the “transfer function” – that is, which loans it brings into or out of scope – as follows:

    • any loans whose region does not include P are killed;
    • if this is a borrow statement, the corresponding loan is generated;
    • if this is an assignment lv = <rvalue>, then any loan for a path of which lv is a prefix is killed.

    The last point bears some elaboration. This rule is what allows us to support cases like the one in Problem Case #4:

    let list: &mut List<T> = ...;
    let v = &mut (*list).value;
    list = ...; // <-- assignment

    At the point of the marked assignment, the loan of (*list).value is in-scope, but it does not have to be considered in-scope afterwards. This is because the variable list now holds a fresh value, and that new value has not yet been borrowed (or else we could not have produced it). Specifically, whenever we see an assignment lv = <rvalue> in MIR, we can clear all loans where the borrowed path lv_loan has lv as a prefix. (In our example, the assignment is to list, and the loan path (*list).value has list as a prefix.)

    NB. In this phase, when there is an assignment, we always clear all loans that applied to the overwritten path; however, in some cases the assignment itself may be illegal due to those very loans. In our example, this would be the case if the type of list had been List<T> and not &mut List<T>. In such cases, errors will be reported by the next portion of the borrowck, described in the next section.
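
    To make the transfer function concrete, here is a minimal, self-contained sketch in Rust (all of the names here – Point, Path, Loan, transfer – are invented for illustration and do not reflect the actual compiler implementation):

    use std::collections::BTreeSet;

    type Point = (usize, usize); // (block, statement index)

    struct Path(Vec<String>); // e.g. ["list"] or ["*list", "value"]

    impl Path {
        fn has_prefix(&self, prefix: &Path) -> bool {
            self.0.starts_with(&prefix.0)
        }
    }

    struct Loan {
        region: BTreeSet<Point>, // points where the loan's lifetime holds
        path: Path,              // the borrowed lvalue
    }

    // Transfer function for a single statement at point `p`, implementing
    // the three rules listed above. `in_scope` holds indices into `loans`.
    // The assignment-kill runs before the gen so that a borrow statement's
    // own loan is not killed by its own assignment.
    fn transfer(
        in_scope: &mut BTreeSet<usize>,
        loans: &[Loan],
        p: Point,
        assigned_path: Option<&Path>,  // Some(lv) if the statement is `lv = <rvalue>`
        generated_loan: Option<usize>, // Some(i) if the rvalue is the i-th borrow
    ) {
        // kill any loan whose region does not include `p`
        in_scope.retain(|&i| loans[i].region.contains(&p));
        // an assignment to `lv` kills loans of any path that `lv` prefixes
        if let Some(lv) = assigned_path {
            in_scope.retain(|&i| !loans[i].path.has_prefix(lv));
        }
        // a borrow statement generates its corresponding loan
        if let Some(i) = generated_loan {
            in_scope.insert(i);
        }
    }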

    Borrow checker phase 2: reporting errors

    At this point, we have computed which loans are in scope at each point. Next, we traverse the MIR and identify actions that are illegal given the loans in scope. Rather than go through every kind of MIR statement, we can break things down into two kinds of actions that can be performed:

    • Accessing an lvalue, which we categorize along two axes (shallow vs deep, read vs write)
    • Dropping an lvalue

    For each of these kinds of actions, we will specify below the rules that determine when they are legal, given the set of loans L in scope at the start of the action. The second phase of the borrow check therefore consists of iterating over each statement in the MIR and checking, given the in-scope loans, whether the actions it performs are legal. Translating MIR statements into actions is mostly straightforward:

    • A StorageDead statement counts as a shallow write.
    • An assignment statement LV = RV is a shallow write to LV;
    • and, within the rvalue RV:
      • Each lvalue operand is either a deep read or a deep write action, depending on whether or not the type of the lvalue implements Copy.
        • Note that moves count as “deep writes”.
      • A shared borrow &LV counts as a deep read.
      • A mutable borrow &mut LV counts as deep write.

    There are a few interesting cases to keep in mind:

    • MIR models discriminants more precisely. They should be thought of as a distinct field when it comes to borrows.
    • In the compiler today, Box is still “built-in” to MIR. This RFC ignores that possibility and instead acts as though borrowed references (& and &mut) and raw pointers (*const and *mut) were the only sorts of pointers. It should be straightforward to extend the text here to cover Box, though some questions arise around the handling of drop (see the section on drops for details).

    Accessing an lvalue LV. When accessing an lvalue LV, there are two axes to consider:

    • The access can be SHALLOW or DEEP:
      • A shallow access means that the immediate fields reached at LV are accessed, but references or pointers found within are not dereferenced. Right now, the only access that is shallow is an assignment like x = ..., which would be a shallow write of x.
      • A deep access means that all data reachable through a given lvalue may be invalidated or accessed by this action.
    • The access can be a READ or WRITE:
      • A read means that the existing data may be read, but will not be changed.
      • A write means that the data may be mutated to new values or otherwise invalidated (for example, it could be de-initialized, as in a move operation).

    “Deep” accesses are often deep because they create and release an alias, in which case the “deep” qualifier reflects what might happen through that alias. For example, if you have let x = &mut y, that is considered a deep write of y: even though the actual borrow doesn’t do anything at all by itself, it creates a mutable alias x that can be used to mutate anything reachable from y. A move let x = y is similar: it writes to the shallow content of y, but then – via the new name x – we can access all other content accessible through y.

    The pseudocode for deciding when an access is legal looks like this:

    fn access_legal(lvalue, is_shallow, is_read) {
        let relevant_borrows = select_relevant_borrows(lvalue, is_shallow);
    
        for borrow in relevant_borrows {
            // shared borrows like `&x` still permit reads from `x` (but not writes)
            if is_read && borrow.is_read { continue; }
            
            // otherwise, report an error, because we have an access
            // that conflicts with an in-scope borrow
            report_error();
        }
    }
    

    As you can see, it works in two steps. First, we enumerate a set of in-scope borrows that are relevant to lvalue – this set is affected by whether this is a “shallow” or “deep” action, as will be described shortly. Then, for each such borrow, we check if it conflicts with the action (i.e., if at least one of them is potentially writing), and, if so, we report an error.

    For shallow accesses to the path lvalue, we consider borrows relevant if they meet one of the following criteria:

    • there is a loan for the path lvalue;
      • so: writing a path like a.b.c is illegal if a.b.c is borrowed
    • there is a loan for some prefix of the path lvalue;
      • so: writing a path like a.b.c is illegal if a or a.b is borrowed
    • lvalue is a shallow prefix of the loan path
      • shallow prefixes are found by stripping away fields, but stop at any dereference
      • so: writing a path like a is illegal if a.b is borrowed
      • but: writing a is legal if *a is borrowed, whether a is a shared or mutable reference

    For deep accesses to the path lvalue, we consider borrows relevant if they meet one of the following criteria:

    • there is a loan for the path lvalue;
      • so: reading a path like a.b.c is illegal if a.b.c is mutably borrowed
    • there is a loan for some prefix of the path lvalue;
      • so: reading a path like a.b.c is illegal if a or a.b is mutably borrowed
    • lvalue is a supporting prefix of the loan path
      • supporting prefixes were defined earlier
      • so: reading a path like a is illegal if a.b is mutably borrowed, but – in contrast with shallow accesses – reading a is also illegal if *a is mutably borrowed

    Dropping an lvalue LV. Dropping an lvalue can be treated as a DEEP WRITE, like a move, but this is overly conservative. The rules here are under active development, see #40.

    How We Teach This

    Terminology

    In this RFC, I’ve opted to continue using the term “lifetime” to refer to the portion of the program in which a reference is in active use (or, alternatively, to the “duration of a borrow”). As the intro to the RFC makes clear, this terminology somewhat conflicts with an alternative usage, in which lifetime refers to the dynamic extent of a value (what we call the “scope”). I think that – if we were starting over – it might have been preferable to find an alternative term that is more specific. However, it would be rather difficult to try and change the term “lifetime” at this point, and hence this RFC does not attempt to do so. To avoid confusion, however, it seems best if the error messages resulting from the region and borrow check avoid the term lifetime where possible, or use qualification to make the meaning more clear.

    Leveraging intuition: framing errors in terms of points

    Part of the reason that Rust currently uses lexical scopes to determine lifetimes is that it was thought that they would be simpler for users to reason about. Time and experience have not borne this hypothesis out: for many users, the fact that borrows are “artificially” extended to the end of the block is more surprising than not. Furthermore, most users have a pretty intuitive understanding of control flow (which makes sense: you have to, in order to understand what your program will do).

    We therefore propose to leverage this intuition when explaining borrow and lifetime errors. To the extent possible, we will try to explain all errors in terms of three points:

    • The point where the borrow occurred (B).
    • The point where the resulting reference is used (U).
    • An intervening point that might have invalidated the reference (A).

    We should select three points such that B can reach A and A can reach U. In general, the approach is to describe the errors in “narrative” form:

    • First, the borrow occurs.
    • Next, the action occurs, invalidating the reference.
    • Finally, the next use occurs, after the reference has been invalidated.

    This approach is similar to what we do today, but we often neglect to mention this third point, where the next use occurs. Note that the “point of error” remains the second action – that is, the error, conceptually, is to perform an invalidating action in between two uses of the reference (rather than, say, to use the reference after an invalidating action). This actually reflects the definition of undefined behavior more accurately (that is, performing an illegal write is what causes undefined behavior, but the write is illegal because of the latter use).

    To see the difference, consider this erroneous program:

    fn main() {
        let mut i = 3;
        let x = &i;
        i += 1;
        println!("{}", x);
    }

    Currently, we emit the following error:

    error[E0506]: cannot assign to `i` because it is borrowed
     --> <anon>:4:5
       |
     3 |     let x = &i;
       |              - borrow of `i` occurs here
     4 |     i += 1;
       |     ^^^^^^ assignment to borrowed `i` occurs here
    

    Here, the points B and A are highlighted, but not the point of use U. Moreover, the “blame” is placed on the assignment. Under this RFC, we would display the error as follows:

    error[E0506]: cannot write to `i` while borrowed
     --> <anon>:4:5
       |
     3 |     let x = &i;
       |              - (shared) borrow of `i` occurs here
     4 |     i += 1;
       |     ^^^^^^ write to `i` occurs here, while borrow is still active
     5 |     println!("{}", x);
       |                    - borrow is later used here
    

    Another example, this time using a match:

    fn main() {
        let mut x = Some(3);
        match &mut x {
            Some(i) => {
                x = None;
                *i += 1;
            }
            None => {
                x = Some(0); // OK
            }
        }
    }

    The error might be:

    error[E0506]: cannot write to `x` while borrowed
     --> <anon>:4:5
       |
     3 |     match &mut x {
       |           ------ (mutable) borrow of `x` occurs here
     4 |         Some(i) => {
     5 |              x = None;
       |              ^^^^^^^^ write to `x` occurs here, while borrow is still active
     6 |              *i += 1;
       |              -- borrow is later used here
       |
    

    (Note that the assignment in the None arm is not an error, since the borrow is never used again.)

    Some special cases

    There are some cases where the three points are not all visible in the user syntax; these cases may need some careful treatment.

    Drop as last use

    There are times when the last use of a variable will in fact be its destructor. Consider an example like this:

    struct Foo<'a> { field: &'a u32 }
    impl<'a> Drop for Foo<'a> { .. }
    
    fn main() {
        let mut x = 22;
        let y = Foo { field: &x };
        x += 1;
    }

    This code would be legal, but for the destructor on y, which will implicitly execute at the end of the enclosing scope. The error message might be shown as follows:

    error[E0506]: cannot write to `x` while borrowed
     --> <anon>:4:5
       |
     6 |     let y = Foo { field: &x };
       |                          -- borrow of `x` occurs here
     7 |     x += 1;
       |     ^ write to `x` occurs here, while borrow is still active
     8 | }
       | - borrow is later used here, when `y` is dropped
    

    Method calls

    One example would be method calls:

    fn main() {
        let mut x = vec![1];
        x.push(x.pop().unwrap());
    }

    We propose the following error for this sort of scenario:

    error[E0506]: cannot write to `x` while borrowed
     --> <anon>:4:5
       |
     3 |     x.push(x.pop().unwrap());
       |     - ---- ^^^^^^^^^^^^^^^^
       |     | |    write to `x` occurs here, while borrow is still in active use
       |     | borrow is later used here, during the call
       |     `x` borrowed here
    

    If you are not using a method, the error would look slightly different, but be similar in concept:

    error[E0506]: cannot assign to `x` because it is borrowed
     --> <anon>:4:5
       |
     3 |     Vec::push(&mut x, x.pop().unwrap());
       |     --------- ------  ^^^^^^^^^^^^^^^^
       |     |         |       write to `x` occurs here, while borrow is still in active use
       |     |         `x` borrowed here
       |     borrow is later used here, during the call
    

    We can detect this scenario in MIR readily enough by checking when the point of use turns out to be a “call” terminator. We’ll have to tweak the spans to get everything to look correct, but that is easy enough.

    Closures

    As today, when the initial borrow is part of constructing a closure, we wish to highlight not only the point where the closure is constructed, but the point within the closure where the variable in question is used.

    Borrowing a variable for longer than its scope

    Consider this example:

    let p;
    {
        let x = 3;
        p = &x;
    }
    println!("{}", p);

    In this example, the reference p refers to x with a lifetime that exceeds the scope of x. In short, that portion of the stack will be popped with p still in active use. In today’s compiler, this is detected during the borrow checker by a special check that computes the “maximal scope” of the path being borrowed (x, here). This makes sense in the existing system since lifetimes and scopes are expressed in the same units (portions of the AST). In the newer, non-lexical formulation, this error would be detected somewhat differently. As described earlier, we would see that a StorageDead instruction frees the slot for x while p is still in use. We can thus present the error in the same “three-point style”:

    error[E0506]: variable goes out of scope while still borrowed
     --> <anon>:4:5
       |
     3 |     p = &x;
       |          - `x` borrowed here
     4 | }
       | ^ `x` goes out of scope here, while borrow is still in active use
     5 | println!("{}", p);
       |                - borrow used here, after invalidation
    

    Errors during inference

    The remaining set of lifetime-related errors come about primarily due to the interaction with function signatures. For example:

    impl Foo {
        fn foo(&self, y: &u8) -> &u8 {
            y
        }
    }

    We already have work-in-progress on presenting these sorts of errors in a better way (see issue 42516 for numerous examples and details), all of which should be applicable here. In short, the name of the game is to identify patterns and suggest changes to improve the function signature to match the body (or at least diagnose the problem more clearly).

    Whenever possible, we should leverage points in the control-flow and try to explain errors in “narrative” form.

    Drawbacks

    There are very few drawbacks to this proposal. The primary one is that the rules for the system become more complex. However, this permits us to accept a larger number of programs, and so we expect that using Rust will feel simpler. Moreover, experience has shown that – for many users – the current scheme of tying reference lifetimes to lexical scoping is confusing and surprising.

    Alternatives

    Alternative formulations of NLL

    During the runup to this RFC, a number of alternate schemes and approaches to describing NLL were tried and discarded.

    RFC 396. RFC 396 defined lifetimes to be a “prefix” of the dominator tree – roughly speaking, a single-entry, multiple-exit region of the control-flow graph. Unlike our system, this definition did not permit gaps or holes in a lifetime. Ensuring continuous lifetimes was meant to guarantee soundness; in this RFC, we use the liveness constraints to achieve a similar effect. This more flexible setup allows us to handle cases like Problem Case #3, which RFC 396 would not have accepted. RFC 396 also did not cover dropck and a number of other complications.

    SSA or SSI transformation. Rather than incorporating the “current location” into the subtype check, we also considered formulations that first applied an SSA transformation to the input program, and then gave each of those variables a distinct type. This does allow some examples to type-check that wouldn’t otherwise, but it is not flexible enough for the vec-push-ref example covered earlier.

    Using SSA also introduces other complications. Among other things, Rust permits variables and temporaries to be borrowed and mutated indirectly (e.g., via &mut). If we were to apply SSA to MIR in a naive fashion, then, it would ignore these assignments when creating numberings. For example:

    let mut x = 1;      // x0, has value 1
    let mut p = &mut x; // p0
    *p += 1;
    use(x);             // uses `x0`, but it now has value 2

    Here, the value of x0 changed due to a write from p. Thus this is not a true SSA form. Normally, SSA transformations achieve this by making local variables like x and p be pointers into stack slots, and then lifting those stack slots into locals when safe. MIR was intentionally not done using SSA form precisely to avoid the need for such contortions (we can leave that to the optimizing backend).

    Type per program point. Going further than SSA, one can accommodate vec-push-ref through a scheme that gives each variable a distinct type at each point in the CFG (similar to what Ericson2314 describes in the stateful MIR for Rust) and applies transformations to the lifetimes on every edge. During the rustc design sprint, the compiler team also enumerated such a design. The author believes this RFC to be a roughly equivalent analysis, but with an alternative, more familiar formulation that still uses one type per variable (rather than one type per variable per point).

    There are several advantages to the design enumerated here. For one thing, it involves far fewer inference variables (if each variable has many types, each of those types needs distinct inference variables at each point) and far fewer constraints (we don’t need constraints just for connecting the type of a variable between distinct points). It is also a more natural fit for the surface language, in which variables have a single type.

    Different “lifetime roles”

    In the discussion about nested method calls (RFC 2025, and the discussions that led up to it), there were various proposals that were aimed at accepting the naive desugaring of a call like vec.push(vec.len()):

    let tmp0 = &mut vec;
    let tmp1 = vec.len(); // does a shared borrow of vec
    Vec::push(tmp0, tmp1);

    The alternatives to RFC 2025 were focused on augmenting the type of references to have distinct “roles” – the most prominent such proposal was Ref2<'r, 'w>, in which mutable references change to have two distinct lifetimes, a “read” lifetime ('r) and a “write” lifetime ('w), where read encompasses the entire span of the reference, but write only contains those points where writes are occurring. This RFC does not attempt to change the approach to nested method calls, rather continuing with the RFC 2025 approach (which affects only the borrowck handling). However, if we did wish to adopt a Ref2-style approach in the future, it could be done backwards compatibly, but it would require modifying (for example) the liveness requirements. For example, currently, if a variable x is live at some point P, then all lifetimes in the type of x must contain P – but in the Ref2 approach, only the read lifetime would have to contain P. This implies that lifetimes are treated differently depending on their “role”. It seems like a good idea to isolate such a change into a distinct RFC.

    Unresolved questions

    None at present.

    Appendix: What this proposal will not fix

    It is worth discussing a few kinds of borrow check errors that the current RFC will not eliminate. These are generally errors that cross procedural boundaries in some form or another.

    Closure desugaring. The first kind of error has to do with the closure desugaring. Right now, closures always capture local variables, even if the closure only uses some sub-path of the variable internally:

    let get_len = || self.vec.len(); // borrows `self`, not `self.vec`
    self.vec2.push(...); // error: self is borrowed

    This was discussed on an internals thread. It is possible to fix this by making the closure desugaring smarter.

    Disjoint fields across functions. Another kind of error is when you have one method that only uses a field a and another that only uses some field b; right now, you can’t express that, and hence these two methods cannot be used “in parallel” with one another:

    impl Foo {
        fn get_a(&self) -> &A { &self.a }
        fn inc_b(&mut self) { self.b.value += 1; }
        fn bar(&mut self) {
            let a = self.get_a();
            self.inc_b(); // Error: self is already borrowed
            use(a);
        }
    }

    The fix for this is to refactor so as to expose the fact that the methods operate on disjoint data. For example, one can factor out the methods into methods on the fields themselves:

    fn bar(&mut self) {
        let a = self.a.get();
        self.b.inc();
        use(a);
    }

    This way, when looking at bar() alone, we see borrows of self.a and self.b, rather than two borrows of self. Another technique is to introduce “free functions” (e.g., get(&self.a) and inc(&mut self.b)) that expose more clearly which fields are operated upon, or to inline the method bodies. This is a non-trivial bit of design and is out of scope for this RFC. See this comment on an internals thread for further thoughts.

    Self-referential structs. The final limitation we are not fixing yet is the inability to have “self-referential structs”. That is, you cannot have a struct that stores, within itself, an arena and pointers into that arena, and then move that struct around. This comes up in a number of settings. There are various workarounds: sometimes you can use a vector with indices, for example, or the owning_ref crate. The latter, when combined with associated type constructors, might be an adequate solution for some use cases, actually (it’s basically a way of modeling “existential lifetimes” in library code). For the case of futures especially, the ?Move RFC proposes another lightweight and interesting approach.

    Endnotes

    1. Scopes always correspond to blocks with one exception: the scope of a temporary value is sometimes the enclosing statement.

    Summary

    Allow unnamed fields of struct and union type, contained within an outer struct or union; the fields they contain appear directly within the containing structure, with the use of union and struct determining which fields have non-overlapping storage (making them usable at the same time). This allows grouping and laying out fields in arbitrary ways, to match C data structures used in FFI. The C11 standard allows this, and C compilers have allowed it for decades as an extension. This proposal allows Rust to represent such types using the same names as the C structures, without interposing artificial field names that will confuse users of well-established interfaces from existing platforms.

    Motivation

    Numerous C interfaces follow a common pattern, consisting of a struct containing discriminants and common fields, and an unnamed union of fields specific to certain values of the discriminants. To group together fields used together as part of the same variant, these interfaces also often use unnamed struct types.

    Thus, struct defines a set of fields that can appear at the same time, and union defines a set of mutually exclusive overlapping fields.

    This pattern appears throughout many C APIs. The Windows and POSIX APIs both use this pattern extensively. However, Rust currently can’t represent this pattern in a straightforward way. While Rust supports structs and unions, every such struct and union must have a field name. When creating a binding to such an interface, whether manually or using a binding generator, the binding must invent an artificial field name that does not appear in the original interface.

    This RFC proposes a minimal mechanism to support such interfaces in Rust. This feature exists primarily to support ergonomic FFI interfaces that match the layout of data structures for the native platform; this RFC intentionally limits itself to the repr(C) structure representation, and does not provide support for using this feature in Rust data structures using repr(Rust). As precedent, Rust’s support for variadic argument lists only permits its use on extern "C" functions.

    Guide-level explanation

    This explanation should appear after the definition of union, and after an explanation of the rationale for union versus enum in Rust.

    Please note that most Rust code will want to use an enum to define types that contain a discriminant and various disjoint fields. The unnamed field mechanism here exists primarily for compatibility with interfaces defined by non-Rust languages, such as C. Types declared with this mechanism require unsafe code to access.

    A struct defines a set of fields all available at the same time, with storage available for each. A union defines (in an unsafe, unchecked manner) a set of mutually exclusive fields, with overlapping storage. Some types and interfaces may require nesting such groupings. For instance, a struct may contain a set of common fields and a union of fields needed for different variations of the structure; conversely, a union may contain a struct grouping together fields needed simultaneously.

    Such groupings, however, do not always have associated types and names. A structure may contain groupings of fields where the fields have meaningful names, but the groupings of fields do not. In this case, the structure can contain unnamed fields of struct or union type, to group the fields together, and determine which fields overlap.

    As an example, when defining a struct, you may have a set of fields that will never be used at the same time, so you could overlap the storage of those fields. This pattern often occurs within C APIs, when defining an interface similar to a Rust enum. You could do so by declaring a separate union type and a field of that type. With the unnamed fields mechanism, you can also define an unnamed grouping of overlapping fields inline within the struct, using the union keyword:

    #[repr(C)]
    struct S {
        a: u32,
        _: union {
            b: u32,
            c: f32,
        },
        d: u64,
    }

    The underscore _ indicates the absence of a field name; the fields within the unnamed union will appear directly within the containing structure. Given a struct s of this type, code can access s.a, s.d, and either s.b or s.c. Accesses to a and d can occur in safe code; accesses to b and c require unsafe code, and b and c overlap, requiring care to access only the field whose contents make sense at the time. As with any union, borrows of any union field borrow the entire union, so code cannot borrow s.b and s.c simultaneously if any of the borrows uses &mut.
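
    For illustration (assuming this RFC’s feature), access to the fields of s might look like this:

    let s = S { a: 1, b: 2, d: 3 };
    let a = s.a;            // safe: `a` does not lie within a union
    let b = unsafe { s.b }; // overlapping union field: requires unsafe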

    Conversely, sometimes when defining a union, you may want to group multiple fields together and make them available simultaneously, with non-overlapping storage. You could do so by defining a separate struct, and placing an instance of that struct within the union. With the unnamed fields mechanism, you can also define an unnamed grouping of non-overlapping fields inline within the union, using the struct keyword:

    #[repr(C)]
    union U {
        a: u32,
        _: struct {
            b: u16,
            c: f16,
        },
        d: f32,
    }

    Given a union u of this type, code can access u.a, or u.d, or both u.b and u.c. Since all of these fields can potentially overlap with others, accesses to any of them require unsafe code; however, b and c do not overlap with each other. Code can borrow u.b and u.c simultaneously, but cannot borrow any other fields at the same time.

    Structs can also contain unnamed structs, and unions can contain unnamed unions.

    Unnamed fields can contain other unnamed fields. For example:

    #[repr(C)]
    struct S {
        a: u32,
        _: union {
            b: u32,
            _: struct {
                c: u16,
                d: f16,
            },
            e: f32,
        },
        f: u64,
    }

    This structure contains six fields: a, b, c, d, e, and f. Safe code can access fields a and f, at any time, since those fields do not lie within a union and do not overlap with any other field. Unsafe code can access the remaining fields. This definition effectively acts as the overlap of the following three structures:

    // variant 1
    #[repr(C)]
    struct S {
        a: u32,
        b: u32,
        f: u64,
    }
    
    // variant 2
    #[repr(C)]
    struct S {
        a: u32,
        c: u16,
        d: f16,
        f: u64,
    }
    
    // variant 3
    #[repr(C)]
    struct S {
        a: u32,
        e: f32,
        f: u64,
    }

    Unnamed fields with named types

    An unnamed field may also use a named struct or union type. For instance:

    #[repr(C)]
    union U {
        x: i64,
        y: f64,
    }
    
    #[repr(C)]
    struct S {
        _: U,
        z: usize,
    }

    Given these declarations, S would contain fields x, y, and z, with x and y overlapping. Such a declaration behaves in every way like the equivalent declaration with an unnamed type declared within S, except that this version of the declaration also defines a named union type U.
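
    For comparison, the equivalent declaration with the union written inline (which would not define the named type U) would be:

    #[repr(C)]
    struct S {
        _: union {
            x: i64,
            y: f64,
        },
        z: usize,
    }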

    This syntax makes it possible to give a name to the intermediate type, while still leaving the field unnamed. While C11 does not directly support inlining of separately defined structures, compilers do support it as an extension, and this addition allows the translation of such code.

    This syntax allows for the common definition of sets of fields inlined into several structures, such as a common header.

    This syntax would also support an obvious translation of inline-declared structures with names, by moving the declaration out-of-line; a macro could easily perform such a translation.

    Note that the intermediate type name in the declaration must resolve to a concrete type, and cannot involve a generic type parameter of the containing structure.

    Mental model

    In the memory layout of a structure, the alternating uses of struct { ... } and union { ... } change the “direction” that fields are being laid out: if you think of memory addresses as going vertically, struct lays out fields vertically, in sequence, and union lays out fields horizontally, overlapping with each other. The following definition:

    #[repr(C)]
    struct S {
        a: u32,
        _: union {
            b: u32,
            _: struct {
                c: u16,
                d: f16,
            },
            e: f32,
        },
        f: u64,
    }

    corresponds to the following structure layout in memory:

    +-----------+ 0
    |     a     |
    +-----------+ 4
    | b | c | e |
    |   +---+   | 6
    |   | d |   |
    +-----------+ 8
    |     f     |
    +-----------+ 16
    

    The top-level struct lays out a, the unnamed union, and f, in sequential order. The unnamed union lays out b, the unnamed struct, and e, in parallel. The unnamed struct lays out c and d in sequential order.

    Instantiation

    Given the following declaration:

    #[repr(C)]
    struct S {
        a: u32,
        _: union {
            b: u32,
            _: struct {
                c: u16,
                d: f16,
            },
            e: f32,
        },
        f: u64,
    }

    All of the following will instantiate a value of type S:

    • S { a: 1, b: 2, f: 3 }
    • S { a: 1, c: 2, d: 3.0, f: 4 }
    • S { a: 1, e: 2.0, f: 3 }

    Pattern matching

    Code can pattern match on a structure containing unnamed fields as though all the fields appeared at the top level. For instance, the following code matches a discriminant and extracts the corresponding field.

    #[repr(C)]
    struct S {
        a: u32,
        _: union {
            b: u32,
            _: struct {
                c: u16,
                d: f16,
            },
            e: f32,
        },
        f: u64,
    }
    
    unsafe fn func(s: S) {
        match s {
            S { a: 0, b, f } => println!("b: {}, f: {}", b, f),
            S { a: 1, c, d, f } => println!("c: {}, d: {}, f: {}", c, d, f),
            S { a: 2, e, f } => println!("e: {}, f: {}", e, f),
            S { a, f, .. } => println!("a: {} (unknown), f: {}", a, f),
        }
    }

    If a match goes through one or more union fields (named or unnamed), it requires unsafe code; a match that goes through only struct fields can occur in safe code.

    Checks for exhaustiveness work identically to matches on structures with named fields. For instance, if the above match omitted the last case, it would receive a warning for a non-exhaustive match.

    A pattern must include a .. if it does not match all fields, other than union fields for which it matches another branch of the union. Failing to do so will produce error E0027 (pattern does not mention field). For example:

    • Omitting the f from any of the first three cases would require adding ..
    • Omitting b from the first case, or e from the third case, would require adding ..
    • Omitting either c or d from the second case would require adding ..

    Effectively, the pattern acts as if it groups all matches of the fields within an unnamed struct or union into a sub-pattern that matches those fields out of the unnamed struct or union, and then produces errors accordingly if a sub-pattern matching an unnamed struct doesn’t mention all fields of that struct, or if a pattern doesn’t mention any fields in an unnamed union.

    Representation

    This feature exists to support the layout of native platform data structures. Structures using the default repr(Rust) layout cannot use this feature, and the compiler should produce an error when attempting to do so.

    When using this mechanism to define a C interface, always use the repr(C) attribute to match C’s data structure layout. For convenience, repr(C) applied to the top-level structure will automatically apply to every unnamed struct within that declaration, since unnamed fields only permit repr(C). This only applies to repr(C), not to any other attribute.

    Such a structure defined with repr(C) will use a representation identical to the same structure with all unnamed fields transformed to equivalent named fields of a struct or union type with the same fields.
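
    For example, the first struct S in the guide-level explanation would have the same representation as the following transformed declaration, which compiles in today’s Rust (the names SUnion, SDesugared, and bc are invented for illustration):

    #[repr(C)]
    union SUnion {
        b: u32,
        c: f32,
    }

    #[repr(C)]
    struct SDesugared {
        a: u32,
        bc: SUnion,
        d: u64,
    }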

    However, applying repr(packed) (or any other attribute) to the top-level data structure does not automatically apply it to all the contained structures. To apply repr(packed) to an unnamed field, place the attribute before the field declaration:

    #[repr(C)]
    union S {
        a: u32,
        #[repr(packed)]
        _: struct {
            b: u8,
            c: u16,
        },
        _: struct {
            d: u8,
            e: f16,
        },
    }

    In this declaration, the first unnamed struct uses repr(packed), while the second does not.

    Unnamed fields with named types use the representation attributes attached to the named type. The named type must use repr(C).

    Derive

    A struct or union containing unnamed fields may derive Copy, Clone, or both, if all the fields it contains (including within unnamed fields) also implement Copy.

    A struct containing unnamed fields may derive Clone if every field contained directly in the struct implements Clone, and every field contained within an unnamed union (directly or indirectly) implements Copy.
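
    For example, the following derives would be accepted, since u32 and f32 both implement Copy (illustrative, assuming this RFC’s feature):

    #[derive(Copy, Clone)]
    #[repr(C)]
    struct S {
        a: u32,
        _: union {
            b: u32,
            c: f32,
        },
    }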

    Ambiguous field names

    You cannot use this feature to define multiple fields with the same name. For instance, the following definition will produce an error:

    #[repr(C)]
    struct S {
        a: u32,
        _: union {
            a: u32,
            b: f32,
        },
    }

    The error will identify the duplicate a fields as the sources of the error.

    Generics and type parameters

    You can use this feature with a struct or union that has generic type parameters:

    #[repr(C)]
    struct S<T> {
        a: u32,
        _: union {
            b: T,
            c: u64,
        }
    }

    You may also use a generic struct or union parameterized by a type as the named type of an unnamed field, since the compiler can know all the resulting field names at declaration time without knowing the generic type:

    #[repr(C)]
    struct S<T> {
        a: u32,
        _: U<T>,
        _: U2<u64>,
    }

    However, you cannot use a type parameter itself as the named type of an unnamed field:

    #[repr(C)]
    struct S<T> {
        a: u32,
        _: T, // error
    }

    This avoids situations in which the compiler must delay producing an error on a field name conflict between T and S (or on the use of a non-struct, non-union type for T) until it knows a specific type for T.

    Reference-level explanation

    Parsing

    Within a struct or union’s fields, in place of a field name and value, allow _: struct { fields } or _: union { fields }, where fields allows everything allowed within a struct or union declaration, respectively.

    Additionally, allow _ as the name of a field whose type refers to a struct or union. All of the fields of that struct or union must be visible to the current module.

    The name _ cannot currently appear as a field name, so this will not introduce any compatibility issues with existing code. The keyword struct cannot appear as a field type, making it entirely unambiguous. The contextual keyword union could theoretically appear as a type name, but an open brace cannot appear immediately after a field type, allowing disambiguation via a single token of context (union {).

    Layout and Alignment

    The layout and alignment of a struct or union containing unnamed fields must match the C ABI for the equivalent structure. In particular, it should have the same layout that it would if each unnamed field had a separately declared type and a named field of that type, rather than as if the fields appeared directly within the containing struct or union. This may, in particular, affect alignment.

    Simultaneous Borrows

    An unnamed struct within a union should behave the same with respect to borrows as a named and typed struct within a union, allowing borrows of multiple fields from within the struct, while not permitting borrows of other fields in the union.

    Visibility

    Each field within an unnamed struct or union may have an attached visibility. An unnamed field itself does not have its own visibility; all of its fields appear directly within the containing structure, and their own visibilities apply.

    Documentation

    Public fields within an unnamed struct or union should appear in the rustdoc documentation of the outer structure, along with any doc comment or attribute attached to those fields. The rendering should include all unnamed fields that contain (at any level of nesting) a public field, and should include the // some fields omitted note within any struct or union that has non-public fields, including unnamed fields.

    Any unnamed field that contains only non-public fields should be omitted entirely, rather than included with its fields omitted. Omitting an unnamed field should trigger the // some fields omitted note.

    Drawbacks

    This introduces additional complexity in structure definitions. Strictly speaking, C interfaces do not require this mechanism; any such interface could define named struct or union types, and define named fields of that type. This RFC provides a usability improvement for such interfaces.

    Rationale and Alternatives

    Not implementing this feature at all

    Choosing not to implement this feature would force binding generators (and the authors of manual bindings) to invent new names for these groupings of fields. Users would need to look up the names for those groupings, and would not be able to rely on documentation for the underlying interface. Furthermore, binding generators would not have any basis on which to generate a meaningful name.

    Not implementable as a macro

    We cannot implement this feature as a macro, because it affects the names used to reference the fields contained within an unnamed field. A macro could extract and define types for the unnamed fields, but that macro would have to give a name to those unnamed fields, and accesses would have to include the intermediate name.

    Leaving out the _: in unnamed fields

    Rather than declaring unnamed fields with an _, as in _: union { fields } and _: struct { fields }, we could omit the field name entirely, and write union { fields } and struct { fields } directly. This would more closely match the C syntax. However, this does not provide as natural an extension to support references to named structures.

    Allowing type parameters

    We could allow the type parameters of generic types as the named type of an unnamed field. This could allow creative flexibility in API design, such as having a generic type that adds a field alongside the fields of the type it contains. However, this could also lead to much more complex errors that do not arise until the point that code references the generic type. Prohibiting the use of type parameters in this way will not impact common uses of this feature.

    Field aliases

    Rather than introducing unnamed fields, we could introduce a mechanism to define field aliases for a type, such that for struct S, s.b desugars to s.b_or_c.b. However, such a mechanism does not seem any simpler than unnamed fields, and would not align as well with the potential future introduction of full anonymous structure types. Furthermore, such a mechanism would need to allow hiding the underlying paths for portability; for example, the siginfo_t type on POSIX platforms allows portable access to certain named fields, but different platforms overlap those fields differently using unnamed unions. Finally, such a mechanism would make it harder to create bindings for this common pattern in C interfaces.

    Alternate syntax

    Several alternative syntaxes could exist to designate the equivalent of struct and union. Such syntaxes would declare the same underlying types. However, inventing a novel syntax for this mechanism would make it less familiar both to Rust users accustomed to structs and unions as well as to C users accustomed to unnamed struct and union fields.

    Arbitrary field positioning

    We could introduce a mechanism to declare arbitrarily positioned fields, such as attributes declaring the offset of each field. The same mechanism was also proposed in response to the original union RFC. However, as in that case, using struct and union syntax has the advantage of allowing the compiler to implement the appropriate positioning and alignment of fields.

    General anonymous types

    In addition to introducing just this narrow mechanism for defining unnamed fields, we could introduce a fully general mechanism for anonymous struct and union types that can appear anywhere a type can appear, including in function arguments and return values, named structure fields, or local variables. Such an anonymous type mechanism would not replace the need for unnamed fields, however, and vice versa. Furthermore, anonymous types would interact extensively with far more aspects of Rust. Such a mechanism should appear in a subsequent RFC.

    This mechanism intentionally does not provide any means to reference an unnamed field as a whole, or its type. That intentional limitation avoids allowing such unnamed types to propagate.

    Unresolved questions

    This proposal does not support anonymous struct and union types that can appear anywhere a type can appear, such as in the type of an arbitrary named field or variable. Doing so would further simplify some C interfaces, as well as native Rust constructs.

    However, such a change would also cascade into numerous other changes, such as anonymous struct and union literals. Unlike this proposal, anonymous aggregate types for named fields have a reasonable alternative, namely creating and using separate types; binding generators could use that mechanism, and a macro could allow declaring those types inline next to the fields that use them.

    Furthermore, during the pre-RFC process, that portion of the proposal proved more controversial. And such a proposal would have a much more expansive impact on the language as a whole, by introducing a new construct that works anywhere a type can appear. Thus, this proposal provides the minimum change necessary to enable bindings to these types of C interfaces.

    C structures can still include other constructs that Rust does not currently represent, including bitfields, and variable-length arrays at the end of a structure. Future RFCs may wish to introduce support for those constructs as well. However, I do not believe it makes sense to require a solution for every problem of interfacing with C simultaneously, nor to gate a solution for one common issue on solutions for others.

    Summary

    This RFC proposes a temporary solution to the problem of letting tools use attributes. We outline a (partial) long-term solution and propose a step towards that solution for tools which are part of the Rust distribution.

    The long-term solution is that a crate can use attributes for a specific tool by using some explicit (but unspecified) opt-in mechanism. The tool name then becomes the root of a module hierarchy for attribute naming. E.g., by opting-in to a tool named my_tool, a crate can use #[my_tool::foo] and #[my_tool::bar(42)], etc.

    This RFC is a special case of the long-term solution: any tool distributed with Rust creates a scope for attributes (without any opt-in). So any crate can use #[rustdoc::hidden] or #[rustfmt::skip].

    E.g.,

    #[rustfmt::skip]
    fn foo() {}
    

    This would be allowed by the compiler but ignored. When Rustfmt is run on the crate, it will read the attribute and skip formatting foo (note that we make no provision for reading the attribute or doing anything with it; that is all up to the tool).

    This RFC proposes a second mechanism for scoping lints for tools. Similar to attributes, we propose a subset of a hypothetical long-term solution.

    This RFC supersedes #1755.

    Motivation

    Attributes are a useful, general-purpose mechanism for annotating code with metadata. They are used in the language (e.g., repr), for macros (e.g., derive, and for user-supplied attribute-like macros), and by tools (e.g., rustfmt_skip which instructs Rustfmt not to format an item). Attributes could also be used by compiler plugins such as lints.

    Currently, custom attributes (i.e., those not known to the compiler, e.g., rustfmt_skip) are unstable. There is a future compatibility hazard with custom attributes: if we add #[foo] to the language, then any users using a foo custom attribute will suffer breakage.

    There is a potential problem with the interaction between custom attributes and attribute-like macros. Given an attribute, the compiler cannot tell if the attribute is intended to be a macro invocation or an attribute that might only be used by a tool (either outside or inside the compiler). Currently, the compiler tries to find a macro and if it cannot, ignores the attribute (giving a stability error if not on nightly or the custom_attribute feature is not enabled). However, if the user intended the attribute to be a macro, silently ignoring the missing macro error is not the right thing to do. The compiler needs to know whether an attribute is intended to be a macro or not.

    Given the above constraints, an opt-in solution is attractive. However, any such solution ends up being closely related to mechanisms for importing crates (extern crate) and macro naming. These features are being re-examined or are unstable and so now is a bad time to fully specify a long-term solution.

    We do wish to make progress on allowing tools to use attributes. For example, Rustfmt is mostly ready to move towards stabilisation, but requires some kind of skip attribute. So we are proposing a solution that should work well with any reasonable long-term solution and addresses the needs of some important tools today.

    Similarly, tools (e.g., Clippy) may want to use their own lints without the compiler warning about unused lints. E.g., we want a user to be able to write #![allow(clippy::some_lint)] in their crate without warning.

    Guide-level explanation

    Attributes

    This section assumes that attributes (e.g., #[test]) have already been taught.

    You can use attributes in your crate to pass information to tools. For now, this facility is limited to the tools we include with the Rust distribution.

    The names of these attributes are a path starting with the name of a tool, and then one or more identifiers, e.g., #[tool_name::foo] or #[tool_name::bar::baz::qux(argument)]. Such paths hide any attribute-like macros with the same name and location.

    For example, using #[rustfmt::skip] indicates that an item (such as a function) should not be formatted by Rustfmt:

    #[rustfmt::skip]
    fn foo() { this_will_be_kept_as_is_by_rustfmt(); }
    
    fn bar() { this_will_be_reformatted }
    
    mod baz {
        #![rustfmt::skip]
        // Rustfmt will skip this whole module.
    }
    

    Lints

    This section assumes lints have already been taught.

    Lints can be defined hierarchically as a path, as well as just a single name. For example, nonstandard_style::non_snake_case_functions and nonstandard_style::uppercase_variables. Note this RFC is not proposing changing any existing lints, just extending the current lint naming system. Lint names cannot be imported using use.

    Lints can be enforced by tools other than the compiler. For example, Clippy provides a large suite of lints to catch common mistakes and improve your Rust code. Lints for tools are prefixed with the tool name, e.g., clippy::box_vec.
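
    For example (the lint names here are the illustrative ones used above, not a guarantee of what will ship):

    // A hierarchically named compiler lint:
    #[allow(nonstandard_style::non_snake_case_functions)]
    fn mixedCaseName() {}

    // A tool lint, prefixed with the tool's name:
    #[allow(clippy::box_vec)]
    fn process(items: Box<Vec<u32>>) { /* ... */ }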

    Reference-level explanation

    Long-term solution

    There will be some opt-in mechanism for crates to declare that they want to allow use of a tool’s attributes. This might be in the source text (an attribute as in #1755 or new syntax, e.g., extern attribute foo;) or passed to rustc as a command line flag (e.g., --extern-attr foo). The exact mechanism is deliberately unspecified.

    After opting-in to foo, a crate can use foo as the base of a path in any attribute in the crate, e.g., allowing #[foo::bar] to be used (but not #[foo]). This mechanism follows the normal macro hygiene rules. Depending on the opt-in mechanism, a tool might be able to specify to the compiler which paths are valid, e.g., allow #[foo::bar] but disallow #[foo::baz]. I would hope that we’d be able to reuse most of the macro naming feature (see #1561) here (i.e., this won’t be a whole new specification, we’ll just allow a new way to base paths).

    Unscoped attributes will be reserved for the language and can’t be used by tools.

    During macro expansion, when faced with an attribute, the compiler first tries to find a macro using the macro name resolution rules. The compiler then checks if the attribute matches any of the declared or built-in attributes. If this fails, then it reports a macro not found error. The compiler may suggest corrections for mis-typed attributes (declared or built-in).

    A similar opt-in mechanism will exist for lints.

    Proposed for immediate implementation

    There is an attribute path white list of the names of tools shipped with the Rust distribution. Any crate can use an attribute path starting with those names and the attribute will not trigger the custom attribute lint or require a macro feature gate.

    E.g., #[rustdoc::foo] will be permitted in stable Rust code; #[rustdoc] will still be treated as a custom attribute.

    The initial list of allowed prefixes is rustc, rustdoc, and rls (but see note below on activation). As tools are added to the distribution, they will be allowed as path prefixes in attributes. We expect to add rustfmt and clippy in the near future. Note that whether one of these names can be used does not depend on whether the relevant component is installed on the user’s system; this is a simple, universal white list.

    Given the earlier rules on name resolution, these attributes would shadow any attribute macro with the same name. This is not problematic because a macro would have to be in a module starting with a tool name (e.g., rustdoc::foo), naming macros in such a way is currently unstable, and this can be worked around by using an import (use).

    Tool-scoped attributes should be preserved by the compiler for as long as possible through compilation. This allows tools which plug into the compiler (like Clippy) to observe these attributes on items during type checking, etc.

    Likewise, white-listed tools may be used as a prefix for lints. So for example, rustfmt::foo and clippy::bar are both valid lint names, from the compiler’s perspective.

    Activation and unused attributes/lints

    For each name on the whitelist, it is indicated if the name is active for attributes or lints. A name is only activated if required. So for example, rustdoc will not be activated at all until it takes advantage of this feature. I expect clippy will be activated for both lints and attributes, and rustfmt only for attributes.

    A tool that has an active name must check for unused lints/attributes. For example, if rustfmt becomes active for attributes, and only recognises rustfmt::skip, it must produce a warning if a user uses rustfmt::foo in their code.

    These two requirements together mean that we do not lose checking of unused attributes/lints in any circumstance and we can move to having the compiler check for unused attributes/lints as part of a possible long-term solution without introducing new warnings or errors.

    Forward and backward compatibility

    Since custom attributes are feature gated and scoped attributes are part of the unstable macros 2.0 work, there is no backwards compatibility issue.

    Tools that want to move to these newly stable attributes (e.g., from rustfmt_skip to rustfmt::skip) will have to manage the change themselves.

    Although the mechanism for opt-in for the long-term solution is unspecified, the actual usage of tool attributes seems pretty clear. Therefore we can be reasonably confident that this proposal is forward-compatible in its syntax, etc.

    For the white-listed tools, will their names be implicitly imported in the long-term solution? One could imagine either leaving them implicit (similar to the libraries prelude) or using warning cycles or an edition to move them to explicit opt-in.

    Drawbacks

    The proposed scheme does not allow tools or macros to use custom top-level attributes (I consider this a feature, not a bug, but others may differ).

    Some tools are clearly given special treatment.

    We permit some useless attributes without warning from the compiler (e.g., #[rustfmt::foo], assuming Rustfmt does nothing with foo). However, tools should warn or error on such attributes.

    We are not planning any infrastructure to help tools use these attributes. That seems fine for now; I imagine a long-term solution should include some library or API for this.

    There is no interaction with imports or other parts of the module system.

    Alternatives

    We could continue to force tools to rely on cfg_attr - this is very unergonomic, e.g., #[cfg_attr(rustfmt, rustfmt_skip)].

    We could allow all scoped attributes without checks. This feels like it introduces too much scope for error.

    Unresolved questions

    Are there other tools that should be included on the whitelist (#[test] perhaps)?

    Should we try to move some top-level attributes that are compiler-specific (rather than language-specific) to use #[rustc::]? (E.g., crate_type).

    How should the compiler expose path lints to lint plugins/lint tools?

    RFC 2126 may change how paths are written, the paths used in attributes in this RFC should be adjusted accordingly.

    Summary

    Introduce a new dyn Trait syntax for trait objects using a contextual dyn keyword, and deprecate “bare trait” syntax for trait objects. In a future edition, dyn will become a proper keyword and a lint against bare trait syntax will become deny-by-default.

    Motivation

    In a nutshell

    The current syntax is often ambiguous and confusing, even to veterans, and favors a feature that is not more frequently used than its alternatives, is sometimes slower, and often cannot be used at all when its alternatives can. By itself, that’s not enough to make a breaking change to syntax that’s already been stabilized. Now that we have editions, it won’t have to be a breaking change, but it will still cause significant churn. However, impl Trait is going to require a significant shift in idioms and teaching materials all on its own, and “dyn Trait vs impl Trait” is much nicer for teaching and ergonomics than “bare trait vs impl Trait”, so this author believes it is worthwhile to change trait object syntax too.

    Motivation is the key issue for this RFC, so let’s expand on some of those claims:

    The current syntax is often ambiguous and confusing

    Because it makes traits and trait objects appear indistinguishable. Some specific examples of this:

    • This author has seen multiple people write impl SomeTrait for AnotherTrait when they wanted impl<T> SomeTrait for T where T: AnotherTrait.
    • impl MyTrait {} is valid syntax, which can easily be mistaken for adding default impls of methods or adding extension methods or some other useful operation on the trait itself. In reality, it adds inherent methods to the trait object.
    • Function types and function traits only differ in the capitalization of one letter. This leads to function pointers &fn ... and function trait objects &Fn ... differing only in one letter, making it very easy to mistake one for the other.

    Making one of these mistakes typically leads to an error about the trait not implementing Sized, which is at best misleading and unhelpful. It may be possible to produce better error messages today, but the compiler can only do so much when most of this “obviously wrong” syntax is technically legal.
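
    For instance, a sketch of the first mistake, with invented traits; this compiles today, but does something quite different from what was likely intended:

    trait Shape { fn area(&self) -> f64; }
    trait Solid {}

    // Implements Shape for the *trait object type* `Solid` (tomorrow
    // spelled `dyn Solid`), not for every type implementing Solid:
    impl Shape for Solid {
        fn area(&self) -> f64 { 0.0 }
    }

    // What was probably intended:
    // impl<T: Solid> Shape for T { fn area(&self) -> f64 { ... } }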

    favors a feature that is not more frequently used than its alternatives

    When you want to store multiple types within a single value or a single container of values, an enum is often a better choice than a trait object.

    When you want to return a type implementing a trait without writing out the type’s name (either because it can’t be written, or because it’s too unergonomic to write), you should typically use impl Trait (once it stabilizes).

    When you want a function to accept any type of value that implements a certain trait, you should typically use generics.

    There are many cases where trait objects are the best solution, but they’re not more common than all of the above. Usually trait objects become the best solution when you want to do two or more of the things listed above, e.g. you have an API that accepts values of types defined by external code, and it has to deal with more than one of those types at a time.

    favors a feature that … is sometimes slower

    Trait objects typically require allocating memory and doing virtual dispatch at runtime. They also prevent the compiler from knowing the concrete type of a value, which may inhibit other optimizations. Sometimes these costs are unnoticeable in practice, or even optimized away entirely, but sometimes they have a significant impact on performance.

    enums and impl Trait simply don’t have these costs. It’s strange that the more concise syntax gives you a feature that is often slower and rarely faster than its alternatives.

    favors a feature that … often cannot be used at all when its alternatives can

    Many traits simply can’t have trait objects at all, because they don’t meet the object safety rules.

    In contrast, impl Trait and generics work with any trait. It’s strange that the more concise syntax gives you the feature that’s least likely to compile.

    impl Trait is going to require a significant shift in idioms and teaching materials all on its own

    Today, when you want to return a type implementing a trait without writing out the type’s name, you typically Box a trait object and accept the potential runtime cost. This includes most functions that return closures, iterators, futures, or combinations thereof. Most of those functions should switch to impl Trait once that syntax stabilizes and becomes the preferred idiomatic way of doing this, including many public API methods.

    The way we teach the trait system will also have to change to describe impl Trait alongside all the existing ways of using traits via generics and trait objects, and explain when impl Trait is preferable to those and other options like enums. Moreover, the way we teach closures, iterators and futures will likely need to mention why impl Trait is useful for those types and use impl Trait in many examples, as well as when impl Trait isn’t enough and you do need dyn Trait after all.

    Ideally, introducing dyn Trait won’t create much additional churn on top of impl Trait, since these idiom shifts and documentation rewrites can account for both of those changes together.

    “dyn Trait vs impl Trait” is much nicer for teaching and ergonomics than “bare trait vs impl Trait”

    There’s a natural parallel between the impl/dyn keywords and static/dynamic dispatch that we’ll likely mention in The Book. Having a keyword for both kinds of dispatch correctly implies that both are important and choosing between the two is often non-trivial, while today’s syntax may give the incorrect impression that trait objects are the default and impl Trait is a more niche feature.

    After impl Trait stabilizes, it will become more common to accidentally write a trait object without realizing it by forgetting the impl keyword. This often leads to unhelpful and cryptic errors about your trait not implementing Sized. With a switch to dyn Trait, these errors could become as simple and self-evident as “expected a type, found a trait, did you mean to write impl Trait?”.

    Explanation

    The functionality of dyn Trait is identical to today’s trait object syntax.

    Box<Trait> becomes Box<dyn Trait>.

    &Trait and &mut Trait become &dyn Trait and &mut dyn Trait.
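
    For example, with an invented trait:

    trait Draw { fn draw(&self); }

    // Today:
    fn render(shapes: &[Box<Draw>]) {
        for shape in shapes { shape.draw(); }
    }

    // With this RFC:
    fn render(shapes: &[Box<dyn Draw>]) {
        for shape in shapes { shape.draw(); }
    }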

    Migration

    On the current edition:

    • The dyn keyword will be added, and will be a contextual keyword
    • A lint against bare trait syntax will be added

    In the next edition:

    • dyn becomes a real keyword, uses of it as an identifier become hard errors
    • The bare trait syntax lint is raised to deny-by-default

    This follows the policy laid out in the editions RFC, where a hard error is “only available when the deprecation is expected to hit a relatively small percentage of code.” Adding the dyn keyword is unlikely to affect much code, but removing bare trait syntax will clearly affect a lot of code, so only the latter change is implemented as a deny-by-default lint.

    Drawbacks

    • Yet another (temporarily contextual) keyword.

    • Code that uses trait objects becomes slightly more verbose.

    • &dyn Trait might give the impression that &dyn is a third type of reference alongside & and &mut.

    • In general, favoring generics over trait objects makes Rust code take longer to compile, and this change may encourage more of that.

    Rationale and Alternatives

    We could use a different keyword such as obj or virtual. There wasn’t very much discussion of these options on the original RFC thread, since the motivation was a far bigger concern than the proposed syntax, so it wouldn’t be fair to say there’s a consensus for or against any particular keyword.

    This author believes that dyn is a better choice because the notion of “dynamic” typing is familiar to a wide variety of programmers and unlikely to mislead them. obj is likely to incorrectly imply an “object” in the OOP sense, which is very different from a trait object. virtual is a term that may be unfamiliar to programmers whose preferred languages don’t have a virtual keyword or don’t even expose the notion of virtual/dynamic dispatch to the programmer, and the languages that do have a virtual keyword usually use it to mean “this method can be overridden”, not “this value uses dynamic dispatch”.

    We could also use a more radical syntax for trait objects. Object<Trait> was suggested on the original RFC thread but didn’t gain much traction, presumably because it adds more “noise” than a keyword and is arguably misleading.

    Finally, we could repurpose bare trait syntax for something other than trait objects. It’s been frequently suggested in the past that impl Trait would be a far better candidate for bare trait syntax than trait objects. Even this RFC’s motivation section indirectly argues for this, e.g. impl Trait does work with all traits and does not carry a runtime cost, unlike trait objects. However, this RFC does not propose repurposing bare trait syntax yet, only deprecating and removing it. This author believes dyn Trait is worth adding even if we never repurpose bare trait, and repurposing it has some significant downsides that dyn Trait does not (such as creating the possibility of code that compiles in two different editions with radically different semantics). This author believes the repurposing debate should come later, probably after impl Trait and dyn Trait have been stabilized.

    Unresolved questions

    • How common are trait objects in real code? There were some requests for hard data on this in the original RFC thread, but none was ever provided.

    • Does introducing this contextual keyword create any parsing ambiguities?

    • Should we try to write out how The Book would teach impl Trait vs dyn Trait in the future?

    ⚠ Update 4 years later ⚠

    Much of this RFC was stabilized, including the wildcard lifetime and elision in impls.

    However, the team decided to un-accept the parts of this RFC related to using lifetimes without a separate definition.

    Summary

    Eliminate the need for separately binding lifetime parameters in fn definitions and impl headers, so that instead of writing:

    fn two_args<'b>(arg1: &Foo, arg2: &'b Bar) -> &'b Baz
    fn two_lifetimes<'a, 'b>(arg1: &'a Foo, arg2: &'b Bar) -> &'a Quux<'b>
    
    fn nested_lifetime<'inner>(arg: &&'inner Foo) -> &'inner Bar
    fn outer_lifetime<'outer>(arg: &'outer &Foo) -> &'outer Bar

    you can write:

    fn two_args(arg1: &Foo, arg2: &'b Bar) -> &'b Baz
    fn two_lifetimes(arg1: &'a Foo, arg2: &'b Bar) -> &'a Quux<'b>
    
    fn nested_lifetime(arg: &&'inner Foo) -> &'inner Bar
    fn outer_lifetime(arg: &'outer &Foo) -> &'outer Bar

    Lint against leaving off lifetime parameters in structs (like Ref or Iter), instead nudging people to write the wildcard lifetime '_ explicitly in this case (but leveraging the other improvements to make it ergonomic to do so).

    The changes, in summary, are:

    • A signature is taken to bind any lifetimes it mentions that are not already bound.
    • A style lint checks that lifetimes bound in impl headers are multiple characters long, to reduce potential confusion with lifetimes bound within functions. (There are some additional, less important lints proposed as well.)
    • You can write '_ to explicitly elide a lifetime, and it is deprecated to entirely leave off lifetime arguments for non-& types

    This RFC does not introduce any breaking changes.

    Motivation

    Today’s system of lifetime elision has a kind of “cliff”. In cases where elision applies (because the necessary lifetimes are clear from the signature), you don’t need to write anything:

    fn one_arg(arg: &Foo) -> &Baz

    But the moment that lifetimes need to be disambiguated, you suddenly have to introduce a named lifetime parameter and refer to it throughout, which generally requires changing three parts of the signature:

    fn two_args<'a, 'b: 'a>(arg1: &'a Foo, arg2: &'b Bar) -> &'a Baz<'b>

    These concerns are just papercuts for advanced Rust users, but they also present a cliff in the learning curve, one affecting the most novel and difficult-to-learn part of Rust. In particular, when first explaining borrowing, we can say that & means “borrowed” and that borrowed values coming out of a function must come from borrowed values in its input:

    fn accessor(&self) -> &Foo

    It’s then not too surprising that when there are multiple input borrows, you need to disambiguate which one you’re borrowing from. But to learn how to do so, you must learn not only lifetimes, but also the system of lifetime parameterization and the subtle way you use it to tie lifetimes together. In the next section, I’ll show how this RFC provides a gentler learning curve around lifetimes and disambiguation.

    Another point of confusion for newcomers and old hands alike is the fact that you can leave off lifetime parameters for types:

    struct Iter<'a> { ... }
    
    impl SomeType {
        // Iter here implicitly takes the lifetime from &self
        fn iter(&self) -> Iter { ... }
    }

    As detailed in the ergonomics initiative blog post, this bit of lifetime elision is considered a mistake: it makes it difficult to see at a glance that borrowing is occurring, especially if you’re unfamiliar with the types involved. (The & types, by contrast, are universally known to involve borrowing.) This RFC proposes some steps to rectify this situation without regressing ergonomics significantly.

    In short, this RFC seeks to improve the lifetime story for existing and new users by simultaneously improving clarity and ergonomics. In practice it should reduce the total occurrences of <, > and 'a in signatures, while increasing the overall clarity and explicitness of the lifetime system.

    Guide-level explanation

    Note: this is a sketch of what it might look like to teach someone lifetimes given this RFC.

    Introducing references and borrowing

    Assume that ownership has already been introduced, but not yet borrowing.

    While ownership is important in Rust, it’s not very expressive or convenient by itself; it’s quite common to want to “lend” a value to a function you’re calling, without permanently relinquishing ownership of it.

    Rust provides support for this kind of temporary lending through references &T, which signify a temporarily borrowed value of type T. So, for example, you can write:

    fn print_vec(vec: &Vec<i32>) {
        for i in vec {
            println!("{}", i);
        }
    }

    and you designate lending by writing an & on the caller side:

    print_vec(&my_vec)

    This borrow of my_vec lasts only for the duration of the print_vec call.

    Imagine more explanation here…

    Functions that return borrowed data

    So far we’ve only seen functions that consume borrowed data; what about producing it?

    In general, borrowed data is always borrowed from something. And that thing must always be available for longer than the borrow is. When a function returns, its stack frame is destroyed, which means that any borrowed data it returns must come from outside of its stack frame.

    The most typical case is producing new borrowed data from already-borrowed data. For example, consider a “getter” method:

    struct MyStruct {
        field1: Foo,
        field2: Bar,
    }
    
    impl MyStruct {
        fn get_field1(&self) -> &Foo {
            &self.field1
        }
    }

    Here we’re making what looks like a “fresh” borrow, but it’s actually “derived” from the existing borrow of self, and hence fine to return back to our caller; the actual MyStruct value must live outside our stack frame anyway.

    Pinpointing borrows with lifetimes

    For Rust to guarantee safety, it needs to track the lifetime of each loan, which says for what portion of code the loan is valid.

    In particular, each & type also has an associated lifetime—but you can usually leave it off. The reason is that a lot of code works like the getter example above, where you’re returning borrowed data which could only have come from the borrowed data you took in. Thus, in get_field1 the lifetime for &self and for &Foo are assumed to be the same.

    Rust is conservative about leaving lifetimes off, though: if there’s any ambiguity, you need to explicitly state the relationships between the loans. So for example, the following function signature is not accepted:

    fn select(data: &Data, params: &Params) -> &Item;

    Rust cannot tell how long the resulting borrow of Item is valid for; it can’t deduce its lifetime. Instead, you need to connect it to one or both of the input borrows:

    fn select(data: &'data Data, params: &Params) -> &'data Item;
    fn select(data: &'both Data, params: &'both Params) -> &'both Item;

    This notation lets you name the lifetime associated with a borrow and refer to it later:

    • In the first variant, we name the Data borrow lifetime 'data, and make clear that the returned Item borrow is valid for the same lifetime.

    • In the second variant, we give both input lifetimes the same name 'both, which is a way of asking the compiler to determine their “intersection” (i.e. the period for which both of the loans are active); we then say the returned Item borrow is valid for that period (which means it may incorporate data from both of the input borrows).

    structs and lifetimes

    Sometimes you need to build data types that contain borrowed data. Since those types can then be used in many contexts, you can’t say in advance what the lifetime of those borrows will be. Instead, you must take it as a parameter:

    struct VecIter<'vec, T> {
        vec: &'vec Vec<T>,
        index: usize,
    }

    Here we’re defining a type for iterating over a vector, without requiring ownership of that vector. To do so, we store a borrow of the vector. But because our new VecIter struct contains borrowed data, it needs to surface that fact, and the lifetime connected with it. It does so by taking an explicit 'vec parameter for the relevant lifetime, and using it within.

    When using this struct, you can apply explicitly-named lifetimes as usual:

    impl<T> Vec<T> {
        fn iter(&'vec self) -> VecIter<'vec, T> { ... }
    }

    However, in cases like this example, we would normally be able to leave off the lifetime with &, since there’s only one source of data we could be borrowing from. We can do something similar with structs:

    impl<T> Vec<T> {
        fn iter(&self) -> VecIter<'_, T> { ... }
    }

    The '_ marker makes clear to the reader that borrowing is happening, which might not otherwise be clear.

    impl blocks and lifetimes

    When writing an impl block for a structure that takes a lifetime parameter, you can give that parameter a name, which you should strive to make meaningful:

    impl<T> VecIter<'vec, T> { ... }

    This name can then be referred to in the body:

    impl<T> VecIter<'vec, T> {
        fn foo(&self) -> &'vec T { ... }
        fn bar(&self, arg: &'a Bar) -> &'a Bar { ... }
    }

    If the type’s lifetime is not relevant, you can leave it off using '_:

    impl<T> VecIter<'_, T> { ... }

    Reference-level explanation

    Note: these changes are designed to not require a new edition. They do expand our naming style lint, however.

    Lifetimes in impl headers

    When writing an impl header, you can mention lifetimes without binding them in the generics list. Any lifetime that is not already in scope (which, today, means any lifetime whatsoever) is treated as being bound as a parameter of the impl.

    Thus, where today you would write:

    impl<'a> Iterator for MyIter<'a> { ... }
    impl<'a, 'b> SomeTrait<'a> for SomeType<'a, 'b> { ... }

    tomorrow you would write:

    impl Iterator for MyIter<'iter> { ... }
    impl SomeTrait<'tcx, 'gcx> for SomeType<'tcx, 'gcx> { ... }

    If any lifetime names are explicitly bound, they all must be.

    This change goes hand-in-hand with a convention that lifetimes introduced in impl headers (and perhaps someday, modules) should be multiple characters, i.e. “meaningful” names, to reduce the chance of collision with typical 'a usage in functions.

    Lifetimes in fn signatures

    When writing a fn declaration, if a lifetime appears that is not already in scope, it is taken to be a new binding, i.e. treated as a parameter to the function.

    Thus, where today you would write:

    fn elided(&self) -> &str
    fn two_args<'b>(arg1: &Foo, arg2: &'b Bar) -> &'b Baz
    fn two_lifetimes<'a, 'b: 'a>(arg1: &'a Foo, arg2: &'b Bar) -> &'a Quux<'b>
    
    impl<'a> MyStruct<'a> {
        fn foo(&self) -> &'a str
        fn bar<'b>(&self, arg: &'b str) -> &'b str
    }
    
    fn take_fn_simple(f: fn(&Foo) -> &Bar)
    fn take_fn<'a>(x: &'a u32, y: for<'b> fn(&'a u32, &'b u32, &'b u32))

    tomorrow you would write:

    fn elided(&self) -> &str
    fn two_args(arg1: &Foo, arg2: &'arg2 Bar) -> &'arg2 Baz
    fn two_lifetimes(arg1: &'arg1 Foo, arg2: &'arg2 Bar) -> &'arg1 Quux<'arg2>
    
    impl MyStruct<'A> {
        fn foo(&self) -> &'A str
        fn bar(&self, arg: &'b str) -> &'b str
    }
    
    fn take_fn_simple(f: fn(&Foo) -> &Bar)
    fn take_fn(x: &'a u32, y: for<'b> fn(&'a u32, &'b u32, &'b u32))

    If any lifetime names are explicitly bound, they all must be.

    For higher-ranked types (including cases like Fn syntax), elision works as it does today. However, it is an error to mention a lifetime in a higher-ranked type that hasn’t been explicitly bound (either at the outer fn definition, or within an explicit for<>). These cases are extremely rare, and making them an error keeps our options open for providing an interpretation later on.

    Similarly, if a fn definition is nested inside another fn definition, it is an error to mention lifetimes from that outer definition (without binding them explicitly). This is again intended for future-proofing and clarity, and is an edge case.
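
    Illustrative sketches of both error cases:

    // Error: `'c` is neither bound at the outer fn nor by the `for<>`:
    fn take_closure(f: for<'b> fn(&'b u32, &'c u32)) { ... }

    // Error: `'a` is bound by `outer`; the nested fn cannot mention it
    // without binding it explicitly.
    fn outer(x: &'a u32) {
        fn inner(y: &'a u32) { ... }
    }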

    The wildcard lifetime

    When referring to a type (other than &/&mut) that requires lifetime arguments, it is deprecated to leave off those parameters.

    Instead, you can write a '_ for the parameters, rather than giving a lifetime name, which will have identical behavior to leaving them off today.

    Thus, where today you would write:

    fn foo(&self) -> Ref<SomeType>
    fn iter(&self) -> Iter<T>

    tomorrow you would write:

    fn foo(&self) -> Ref<'_, SomeType>
    fn iter(&self) -> Iter<'_, T>

    Additional lints

    Beyond the change to the style lint for impl header lifetimes, two more lints are provided:

    • One deny-by-default lint against fn definitions in which an unbound lifetime occurs exactly once; an example is sketched below. Such lifetimes can always be replaced by '_ (or for &, elided altogether), and giving an explicit name is confusing at best, and indicates a typo at worst.

    • An expansion of Clippy’s lints so that they warn when a signature contains other unnecessary elements, e.g. when it could be using elision or could leave off lifetimes from its generics list.
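
    For instance, the single-use lint from the first bullet would fire on:

    // Denied: `'a` occurs exactly once, so the name ties nothing together.
    fn f(x: &'a u32) -> u32 { ... }

    // Preferred: elide the lifetime entirely (or write `&'_ u32`).
    fn f(x: &u32) -> u32 { ... }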

    Drawbacks

    The style lint for impl headers could introduce some amount of churn. This could be mitigated by only applying that lint for lifetimes not bound in the generics list.

    The fact that lifetime parameters are not bound in an out-of-band way is somewhat unusual and might be confusing—but then, so are lifetime parameters! Putting the bindings out of band buys us very little, as argued in the next section.

    It’s possible that the inconsistency with type parameters, which must always be bound explicitly, will be confusing. In particular, lifetime parameters for struct definitions appear side-by-side with parameter lists, but elsewhere are bound differently. However, users are virtually certain to encounter type generics prior to explicit lifetime generics, and if they try to follow the same style – by binding lifetime parameters explicitly – that will work just fine (but may be linted in Clippy as unnecessary).

    Requiring a '_ rather than being able to leave off lifetimes altogether may be a slight decrease in ergonomics in some cases. In particular, SomeType<'_> is pretty sigil-heavy.

    Cases where you could write fn foo<'a, 'b: 'a>(...) now need the 'b: 'a to be given in a where clause, which might be slightly more verbose. These are relatively rare, though, due to our type well-formedness rule.
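
    For example:

    // Today:
    fn two_args<'a, 'b: 'a>(arg1: &'a Foo, arg2: &'b Bar) -> &'a Baz<'b>

    // Under this RFC, the bound moves to a where clause:
    fn two_args(arg1: &'a Foo, arg2: &'b Bar) -> &'a Baz<'b> where 'b: 'a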

    Otherwise, it’s a bit hard to see drawbacks here: nothing is made less explicit or harder to determine, since the binding structure continues to be completely unambiguous; ergonomics and, arguably, learnability both improve. And signatures become less noisy and easier to read.

    Rationale and Alternatives

    Core rationale

    The key insight of the proposed design is that out-of-band bindings for lifetime parameters is buying us very little today:

    • For free functions, it’s completely unnecessary; the only lifetime “in scope” is 'static, so everything else must be a parameter.
    • For functions within impl blocks, it is solely serving the purpose of distinguishing between lifetimes bound by the impl header and those bound by the fn.

    While this might change if we ever allow modules to be parameterized by lifetimes, it won’t change in any essential way: the point is that there are generally going to be very few in-scope lifetimes when writing a function signature. So the premise is that we can use naming conventions to distinguish between the impl header (or eventual module headers) and fn bindings.

    Alternatively, we could instead distinguish these cases at the use-site, for example by writing outer('a) or some such to refer to the impl block bindings.

    Possible extension or alternative: “backreferences”

    A different approach would be referring to elided lifetimes through their parameter name, like so:

    fn scramble(&self, arg: &Foo) -> &'self Bar

    The idea is that each parameter that involves a single, elided lifetime will be understood to bind a lifetime using that parameter’s name.

    Earlier iterations of this RFC combined these “backreferences” with the rest of the proposal, but this was deemed too confusing and error-prone, and in particular harmed readability by requiring you to scan both lifetime mentions and parameter names.

    We could consider only allowing “backreferences” (i.e. references to argument names), and otherwise keeping binding as-is. However, this has a few downsides:

    • It doesn’t help with impl headers
    • It doesn’t entirely eliminate the need for lifetimes in generics lists for fn definitions, meaning that there’s still another step of learning to reach fully expressive lifetimes.
    • As @rpjohnst argued, backreferences can end up reinforcing an importantly-wrong mental model, namely that you’re borrowing from an argument, rather than from its (already-borrowed) contents. By contrast, requiring you to write the lifetime reinforces the opposite idea: that borrowing has already occurred, and that what you’re tying together is that existing lifetime.
    • On a similar note, using backreferences to tie multiple arguments together is often nonsensical, since there’s no sense in which one argument is the “primary definer” of the lifetime.

    Alternatives

    We could consider using this as an opportunity to eliminate ' altogether, by tying these improvements to a new way of providing lifetimes, e.g. &ref(x) T.

    The internals thread on this topic covers a wide array of syntactic options for leaving off a struct lifetime (which is '_ in this RFC), including: _, &, ref. The choice of '_ was driven by two factors: it’s short, and it’s self-explanatory, given our use of wildcards elsewhere. On the other hand, the syntax is pretty clunky.

    As mentioned above, we could consider alternatives to the case distinction in lifetime variables, instead using something like outer('a) to refer to lifetimes from an impl header.

    Unresolved questions

    • How to treat examples like fn f() -> &'a str { "static string" }.

    Summary

    Add minimal support for fallible allocations to the standard collection APIs. This is done in two ways:

    • For users with unwinding, an oom=panic configuration is added to make global allocators panic on oom.
    • For users without unwinding, a try_reserve() -> Result<(), CollectionAllocErr> method is added.

    The former is sufficient for unwinding users, but the latter is insufficient for the others (although it is a decent 80/20 solution). Completing the no-unwinding story is left for future work.

    Motivation

    Many collection methods may decide to allocate (push, insert, extend, entry, reserve, with_capacity, …) and those allocations may fail. Early on in Rust’s history we made a policy decision not to expose this fact at the API level, preferring to abort. This is because most developers aren’t prepared to handle it, or interested. Handling allocation failure haphazardly is likely to lead to many never-tested code paths and therefore bugs. We call this approach infallible collection allocation, because the developer model is that allocations just don’t fail.

    Unfortunately, this stance is unsustainable in several of the contexts Rust is designed for. This RFC seeks to establish a basic fallible collection allocation API, which allows our users to handle allocation failures where desirable. This RFC does not attempt to perfectly address all use cases, but does intend to establish the goals and constraints of those use cases, and sketches a path forward for addressing them all.

    There are 4 user profiles we will be considering in this RFC:

    • embedded: task-oriented, robust, pool-based, no unwinding
    • gecko: semi-task-oriented, best-effort, global, no unwinding
    • server: task-oriented, semi-robust, global, unwinding
    • runtime: whole-system, robust, global, no unwinding

    User Profile: Embedded

    Embedded devs are primarily well-aligned with Rust’s current strategy. First and foremost, embedded devs just try to not dynamically allocate. Memory should ideally all be allocated at startup. In cases where this isn’t practical, simply aborting the process is often the next-best choice. Robust embedded systems need to be able to recover from a crash anyway, and aborting is completely fool-proof.

    However, sometimes the embedded system needs to process some user-defined tasks with unpredictable allocations, and completely crashing on OOM would be inappropriate. In those cases handling allocation failure is the right solution. In the case of a failure, the entire task usually reports a failure and is torn down. To make this robust, all allocations for a task are usually isolated to a single pool that can easily be torn down. This ensures nothing leaks, and helps avoid fragmentation. The important thing to note is that the embedded developers are ready and willing to take control of all allocations to do this properly.

    Some embedded systems do use unwinding, but this is very rare, so it cannot be assumed.

    It seems they would be happy to have some system to prevent infallible allocations from ever being used.

    User Profile: Gecko

    Gecko is also primarily well-aligned with Rust’s current strategy. For the most part, they liberally allocate and are happy to crash on OOM. This is especially palatable now that Firefox is multiprocess. However, as a quality-of-implementation matter, they occasionally make some subroutines fallible. For instance, it would be unfortunate if a single giant image prevented a page from loading. Similarly, running out of memory while processing a style sheet isn’t significantly different from failing to download it.

    However in contrast to the embedded case, this isn’t done in a particularly principled way. Some parts might be fallible, some might be infallible. Nothing is pooled to isolate tasks. It’s just a best-effort affair.

    Gecko is built without unwinding.

    It seems they would be happy to have some system to prevent infallible allocations from ever being used.

    Gecko’s need for this API as soon as possible will result in it temporarily forking several of the std collections, which is the primary impetus for this RFC.

    User Profile: Server

    This represents a commodity server which handles tasks using threads or futures.

    Similar to the embedded case, handling allocation failure at the granularity of tasks is ideal for quality-of-implementation purposes. However, unlike embedded development, it isn’t considered practical (in terms of cost) to properly take control of everything and ensure allocation failure is handled robustly.

    Here unwinding is available, and seems to be the preferred solution, as it maximizes the chances of allocation failures bubbling out of whatever libraries are used. This is unlikely to be totally robust, but that’s ok.

    With unwinding there isn’t any apparent use for an infallible allocation checker.

    User Profile: Runtime

    A garbage-collected runtime (such as SpiderMonkey or the Microsoft CLR), is generally expected to avoid crashing due to out-of-memory conditions. Different strategies and allocators are used for different situations here. Most notably, there are allocations on the GC heap for the running script, and allocations on the global heap for the actual runtime’s own processing (e.g. performing a JIT compilation).

    Allocations on the GC heap aren’t particularly interesting for our purposes, as these need to have a special format for tracing, and management by the runtime. A runtime probably wouldn’t ever want to build a native Vec backed by the GC heap, but a Vec might contain GC’d pointers that the runtime must trace. Thankfully, this is unrelated to the process of allocating the Vec itself.

    When performing a GC, allocating data structures may enable faster or more responsive strategies, but the system must be ready to fall back to a less memory-intensive solution in the case of allocation failure. In the limit, very small allocations in critical sections may be infallible.

    When performing a JIT, running out of memory can generally be gracefully handled by failing the compilation and remaining in a less-optimized mode (such as the interpreter). For the most part fallible allocation is used here. However SpiderMonkey occasionally uses an interesting mix of fallible and infallible allocations to avoid threading errors through some particularly complex subroutines. Essentially, a chunk of memory is reserved that is supposed to be statically guaranteed to be sufficient for the subroutine to complete its task, and all allocations in the subroutine are subsequently treated as infallible. In debug builds, running out of memory will trigger an abort. In release builds they will first try to just get more memory and proceed, but abort if this fails.
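
    A hedged sketch of that pattern, with entirely invented names:

    fn jit_compile(pool: &mut Pool) -> Result<Code, Oom> {
        // One fallible, up-front reservation, sized so that the subroutine
        // below is statically guaranteed to fit within it:
        pool.try_reserve(WORST_CASE_BYTES)?;

        // Past this point allocations are treated as infallible: debug
        // builds abort if the reservation is exceeded; release builds try
        // to acquire more memory first and abort only if that fails.
        Ok(compile_with_infallible_alloc(pool))
    }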

    Although the language the runtime hosts may use unwinding/exceptions for OOM conditions when the GC heap runs out of space, the runtime itself generally doesn’t use unwinding to handle its own allocation failures.

    Due to mixed fallible/infallible allocation use, tools which prevent the use of infallible allocation may not be appropriate.

    The Runtime dev profile seems to closely reflect that of Database dev (which wasn’t seriously researched for this RFC). A database is in some sense just a runtime for its query language (e.g. SQL), with similar reliability constraints.

    Aside: many devs in this space have a testing feature which can repeatedly run test cases with OOMs injected at the allocator level. This doesn’t really affect our constraints, but it’s something to keep in mind to address the “many untested paths” issue.

    Additional Background: How Collections Handle Allocation Now

    All of our collections consider there to be two interesting cases (sketched in code below):

    • The capacity got too big (>isize::MAX), which is handled by panic!("capacity overflow")
    • The allocator returned an err (even Unsupported), which is handled by calling allocator.oom()
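
    A rough sketch of that logical dispatch (not the actual liballoc code; as noted below, the real code differs on 64-bit):

    fn alloc_size<T>(new_cap: usize) -> usize {
        new_cap
            .checked_mul(std::mem::size_of::<T>())
            .filter(|&bytes| bytes <= std::isize::MAX as usize)
            // Case 1: the capacity got too big.
            .unwrap_or_else(|| panic!("capacity overflow"))
        // Case 2: the returned size is handed to the allocator; if the
        // allocator reports an error, the collection calls `allocator.oom()`,
        // which aborts the process.
    }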

    To make matters more complex, on 64-bit platforms we don’t check the isize::MAX condition directly, instead relying on the allocator to deterministically fail on any request that far exceeds a quantity the page table can even support (no 64-bit system we support uses all 64 bits of the pointer, even with new-fangled 5-level page tables). This means that 64-bit platforms behave slightly differently on catastrophically large allocations (abort instead of panic).

    These behaviours were purposefully designed, but probably not particularly well-motivated, as discussed here. Some of these details are documented, although not correctly or in sufficient detail. For instance Vec::reserve only mentions panicking when overflowing usize, which is accurate for 64-bit but not 32-bit or 16-bit. Oddly no mention of out-of-memory conditions or aborts can be found anywhere in Vec’s documentation.

    To make matters more complex, the (unstable) heap::Alloc trait currently documents that any oom impl can panic or abort, so collection users need to assume that can happen anyway. This is intended insofar as it was considered desirable for local allocators, but is considered an oversight in the global case. This is because Alloc is mostly designed around local allocators.

    This is enough of a mess (which to be clear can be significantly blamed on the author) that the author expects no one is relying on the specific behaviours here, and they could be changed pretty liberally. That said, the primary version of this proposal doesn’t attempt to change any of these behaviours. It’s certainly a plausible alternative, though.

    Additional Background: Allocation Failure in C(++)

    There are two ways that collection allocation failure is handled in C(++): with error return values, and with unwinding (C++ only). The C++ standard library (STL) only provides fallible allocations through exceptions, but the broader ecosystem also uses return values. For example, mozilla’s own standard library (MFBT) only uses return values.

    Unfortunately, attempting to handle allocation failure in C(++) has been a historical source of critical vulnerabilities. For instance, if reallocating an array fails but isn’t noticed, the user of the array can end up thinking it has more space than it actually does and writing past the end of the allocation.

    The return-value-based approach is problematic because neither language has good facilities for mandating that a result is actually checked. There are two notable cases here: when the result of the allocation is some kind of error code (e.g. a bool), or the result is a pointer into the allocation (or a specific pointer indicating failure).

    In the error code case, neither language provides a native facility to mandate that error codes must be checked. However compiler-specific attributes like GCC’s warn_unused_result can be used here. Unfortunately nothing mandates that the error code is used correctly. In the pointer case, blindly dereferencing is considered a valid use, fooling basic lints.

    Unwinding is better than error codes in this regard, because completely ignoring an exception aborts the process. The author’s understanding is that problems arise from the complicated exception-safety rules C++ collections have.

    Both of these concerns are partially mitigated in Rust. For return values, Result and bool have proper on-by-default must-use checks. However again nothing mandates they are used properly. In the pointer case, we can however prevent you from ever getting the pointer if the Result is an Err. For unwinding, it’s much harder to run afoul of exception-safety in Rust, especially since copy/move can’t be overloaded. However unsafe code may have trouble.
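
    For instance, a sketch using the try_reserve API this RFC goes on to propose:

    fn append(v: &mut Vec<u8>, data: &[u8]) -> Result<(), CollectionAllocErr> {
        // Silently dropping this Result would trip the on-by-default
        // must_use warning; here the error is propagated instead.
        v.try_reserve(data.len())?;
        v.extend_from_slice(data);
        Ok(())
    }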

    Additional Background: Overcommit and Killers

    Some operating systems can be configured to pretend there’s more memory than there actually is. Generally this is the result of pretending to allocate physical pages of memory, but only actually doing so when the page is accessed. For instance, forking a process is supposed to create two separate copies of the process’s memory, but this can be avoided by simply marking all the pages as copy on write and having the processes share the same physical memory. The first process to mutate the shared page triggers a page fault, which the OS handles by properly allocating a new physical page for it. Similarly, to postpone zeroing fresh pages of memory, the OS may use a copy-on-write zero page.

    The result of this is that allocation failure may happen when memory is first accessed and not when it’s actually requested. If this happens, someone needs to give up their memory, which can mean the OS killing your process (or another random one!).

    This strategy is used on many *nix variants/descendants, including Android, iOS, MacOS, and Ubuntu.

    Some developers will try to use this as an argument for never trying to handle allocation failure. This RFC does not consider this to be a reasonable stance. First and foremost: Windows doesn’t do it. So anything that’s used a lot on Windows (e.g. Firefox) can reasonably try to handle allocation failure there. Similarly, overcommit can be disabled completely or partially on many OSes. For instance the default for Linux is to actually fail on allocations that are “obviously” too large to handle.

    Additional Background: Recovering From Allocation Failure Without Data Loss

    The most common collection interfaces in Rust expect you to move data into them, and may fail to allocate in the middle of processing this data. As a basic example, push consumes a T. To avoid data loss, this T should be returned, so a fallible push would need a signature like:

    /// Inserts the given item at the end of the Vec.
    ///
    /// If allocating space fails, the item is returned.
    fn push(&mut self, item: T) -> Result<(), (T, Error)>;
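
    A hypothetical caller of such an API (the handler name is invented):

    match vec.push(item) {
        Ok(()) => {}
        Err((item, _err)) => {
            // The allocation failed, but we got `item` back: no data lost.
            stash_for_later(item);
        }
    }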

    More difficult is an API like extend, which in general cannot predict allocation size and so must continually reallocate while processing. It also cannot know if it needs space for an element until it’s been yielded by the iterator. As such extend might have a signature like:

    /// Inserts all the items in the given iterator at the end of the Vec.
    ///
    /// If allocating space fails, the collection will contain all the elements
    /// that it managed to insert until the failure. The result will contain
    /// the iterator, having been run up until the failure point. If the iterator
    /// has been run at all, the last element yielded will also be returned.
    fn extend<I: IntoIterator<Item=T>>(&mut self, iter: I)
        -> Result<(), (I::IntoIter, Option<T>, Error)>;

    Note that this API only even works because Iterator’s signature currently guarantees that the yielded elements outlive the iterator. This would not be the case if we ever moved to support so-called “streaming iterators”, which yield elements that point into themselves.
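
    For reference, a streaming iterator might look something like the following sketch, written with generic associated types (which do not currently exist). Because next borrows from the iterator itself, a yielded element could not be handed back out of extend:

    // Sketch: a "streaming" iterator whose items borrow from the iterator.
    trait StreamingIterator {
        type Item<'a> where Self: 'a;
        fn next(&mut self) -> Option<Self::Item<'_>>;
    }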

    Guide-level explanation

    Due to the diversity of requirements between our user profiles, there isn’t any one-size-fits-all solution. This RFC proposes two solutions which require minimal work for maximal impact:

    • For server users, an oom=panic configuration, in the same vein as panic=abort.
    • For everyone else, add try_reserve and try_reserve_exact as standard collection APIs.

    oom=panic

    Applying this configuration in a Cargo.toml would change the behaviour of the global allocator’s oom() function, which currently aborts, to instead panic. As discussed in the Server user profile, this would allow OOM to be handled at task boundaries with minimal effort for server developers, and no effort from library maintainers.

    If using a thread-per-task model, OOMs will be naturally caught at the thread boundary. If using a different model, tasks can be isolated using the panic::catch_unwind or Future::catch_unwind APIs.

    We expose a flag, rather than changing the default, because we maintain that by default Rust programmers should not be trying to recover from allocation failures.

    For instance, a project which desires to work this way would add this to their Cargo.toml:

    [profile]
    oom = "panic"
    

    And then in their application, do something like this:

    fn main() {
        set_up_event_queue();
        loop {
            let mut event = get_next_event();
            // AssertUnwindSafe: the closure captures `event` by mutable reference.
            let result = ::std::panic::catch_unwind(::std::panic::AssertUnwindSafe(|| {
                process_event(&mut event)
            }));
    
            if let Err(err) = result {
                if let Some(message) = err.downcast_ref::<&str>() {
                    eprintln!("Task crashed: {}", message);
                } else if let Some(message) = err.downcast_ref::<String>() {
                    eprintln!("Task crashed: {}", message);
                } else {
                    eprintln!("Task crashed (unknown cause)");
                }
    
                // Handle failure...
            }
        }
    }

    try_reserve

    try_reserve and try_reserve_exact would be added to HashMap, Vec, String, and VecDeque. These would have the exact same APIs as their infallible counterparts, except that OOM would be exposed as an error case, rather than a call to Alloc::oom(). They would have the following signatures:

    /// Tries to reserve capacity for at least `additional` more elements to be inserted
    /// in the given `Vec<T>`. The collection may reserve more space to avoid
    /// frequent reallocations. After calling `try_reserve`, capacity will be
    /// greater than or equal to `self.len() + additional`. Does nothing if
    /// capacity is already sufficient.
    ///
    /// # Errors
    ///
    /// If the capacity overflows, or the allocator reports a failure, then an error
    /// is returned. The Vec is unmodified if this occurs.
    pub fn try_reserve(&mut self, additional: usize) -> Result<(), CollectionAllocErr>;

    /// Ditto, but with `reserve_exact`'s behaviour
    pub fn try_reserve_exact(&mut self, additional: usize) -> Result<(), CollectionAllocErr>;
    
    /// Augments `AllocErr` with a CapacityOverflow variant.
    pub enum CollectionAllocErr {
        /// Error due to the computed capacity exceeding the collection's maximum
        /// (usually `isize::MAX` bytes).
        CapacityOverflow,
        /// Error due to the allocator (see the `AllocErr` type's docs).
        AllocErr(AllocErr),
    }
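
    A typical usage pattern, sketched here under the signatures above, is to reserve fallibly up front and then insert infallibly, since the space is then known to exist:

    fn append_data(vec: &mut Vec<u8>, data: &[u8]) -> Result<(), CollectionAllocErr> {
        vec.try_reserve(data.len())?;  // the only fallible step
        vec.extend_from_slice(data);   // cannot allocate: capacity is reserved
        Ok(())
    }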

    We propose only these methods because they represent a minimal building block that third parties can develop fallible allocation APIs on top of. For instance, here are some basic implementations:

    impl<T> FallibleVecExt<T> for Vec<T> {
        fn try_push(&mut self, val: T) -> Result<(), (T, CollectionAllocErr)> {
            if let Err(err) = self.try_reserve(1) { return Err((val, err)) }
            self.push(val);
            Ok(())
        }

        fn try_extend_exact<I>(&mut self, iter: I) -> Result<(), (I::IntoIter, CollectionAllocErr)>
            where I: IntoIterator,
                  I::IntoIter: ExactSizeIterator<Item=T>, // note this!
        {
            let iter = iter.into_iter();

            if let Err(err) = self.try_reserve(iter.len()) { return Err((iter, err)) }

            self.extend(iter);
            Ok(())
        }
    }

    Note that iterator-consuming implementations are limited to ExactSizeIterator, as this lets us perfectly predict how much space we need. In practice this shouldn’t be much of a constraint, as most uses of these APIs just feed arrays into arrays or maps into maps. Only things like filter produce unpredictable iterator sizes.
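
    For completeness, calling code built on the sketched extension trait might look like the following (FallibleVecExt is hypothetical, not part of this proposal):

    let mut v: Vec<u32> = Vec::new();

    match v.try_push(42) {
        Ok(()) => {}
        Err((val, err)) => eprintln!("couldn't insert {}: {:?}", val, err),
    }

    // `0..10` is an ExactSizeIterator, so the needed space is known up front:
    if v.try_extend_exact(0..10).is_err() {
        eprintln!("allocation failed");
    }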

    Reference-level explanation

    oom=panic

    Disclaimer: the author is not super familiar with all the mechanics here, so this is a sketch that hopefully someone who’s worked on these details can help flesh out.

    We add a -C oom=abort|panic flag to rustc, which changes the impl of __rust_oom that’s linked in to either panic or abort. It’s possible that this should just change the value of an extern static bool in libcore (liballoc?) that __rust_oom impls are expected to check?

    Unlike the panic=abort flag, this shouldn’t make your crate incompatible with crates with a different choice. Only a subset of target types should be able to set this, e.g. it’s a bin-level decision?

    Cargo would also add an oom=abort|panic profile configuration, to set the rustc flag. Its value should be ignored in dependencies?

    try_reserve

    An implementation of try_reserve for Vec can be found here.

    The guide-level explanation otherwise covers all the interesting details.

    Drawbacks

    There doesn’t seem to be any drawback to adding support for oom=panic.

    try_reserve’s only serious drawback is that it isn’t a complete solution, and it may not idiomatically match future “complete” solutions to the problem.

    Rationale and Alternatives

    Always panic on OOM

    We probably shouldn’t mandate this in the actual Alloc trait, but certainly we could change how our global Alloc impls behave. This RFC doesn’t propose this for two reasons.

    The first is basically on the grounds of “not rocking the boat”. Notably unsafe code might be relying on global OOM not unwinding for exception safety reasons. The author expects such code could very easily be changed to be exception-safe if we decided to do this.

    The second is that the author still considers it legitimately correct to discourage handling OOM by default, for the reasons stated in earlier sections.

    Eliminate the CapacityOverflow distinction

    Collections could potentially just create an AllocErr::Unsupported("capacity overflow") and feed it to their allocator. Presumably this wouldn’t do something bad to the allocator? Then the oom=abort flag could be used to completely control whether allocation failure is a panic or abort (for participating allocators).

    Again this is avoided simply to leave things “as they are”. In this case it would be a change to a legitimately documented API behaviour (panic on overflow of usize), but again that documentation isn’t even totally accurate.

    Eliminate the 64-bit difference

    This difference literally exists to save a single perfectly-predictable compare-and-branch on 64-bit platforms when allocating collections, which is probably insignificant considering how expensive the success path is. The difference would also be a bit exacerbated by exposing the CapacityOverflow variant.

    Again, not proposed to avoid rocking the boat.

    CollectionAllocErr

    There were a few different possible designs for CollectionAllocErr:

    • Just make it AllocErr
    • Remove the payload from the AllocErr variant
    • Just make it a () (so try_reserve basically returns a bool)

    AllocErr already has an Unsupported(&'static str) variant to capture any miscellaneous allocation problems, so CapacityOverflow could plausibly just be stuffed in there. We opted to keep it separate to most accurately reflect the way collections think about these problems today: CapacityOverflow goes to panic and AllocErr goes to oom(). It’s possible end users simply don’t care, in much the same way that collections don’t actually care if an AllocErr is Exhausted or Unsupported.

    It’s also possible we should suppress the AllocErr details to “hide” how collections are interpreting the requests they receive. This just didn’t seem that important, and has the possibility to get in the way of someone using their own local allocator.

    The most extreme version of this would be to just say “there was an error” without any information. The only real argument for this is bloat: the current Rust compiler doesn’t handle Result payloads very efficiently. This should presumably be fixed eventually, since Results are pretty important?

    We simply opted for the version that had maximum information, on the off-chance this was useful.

    Future Work: Infallible Allocation Effect System (w/ Portability Lints)

    Several of our users have expressed desire for some kind of system to prevent a function from ever infallibly allocating. This is ultimately an effect system.

    One possible way to implement this would be to use the portability lint system. In particular, the “subsetting” portability lints that were proposed as future work in RFC-1868.

    This system is supposed to handle things like “I don’t have float support” or “I don’t have AtomicU64”. “I don’t have infallible allocation support” is much the same idea. This could be scoped to modules or functions.
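
    As a purely hypothetical sketch (neither the lint name nor the attribute syntax below is being proposed here), such a lint might be used like this:

    // Hypothetical: deny infallible allocation in this module, in the spirit
    // of the subsetting portability lints from RFC 1868.
    #![deny(not_portable(infallible_alloc))]

    fn store(vec: &mut Vec<u8>, byte: u8) {
        vec.push(byte);             // would be linted: may allocate infallibly
        let _ = vec.try_reserve(1); // fine: failure surfaces as a Result
    }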

    Future Work: Complete Result APIs

    Although this RFC handles the “wants to unwind” case pretty cleanly and completely, it leaves the no-unwind world with an imperfect solution. In particular, it’s completely useless for collections which have unpredictable allocations, like BTreeMap. This proposal punts on the problem because solving it will be a big change which will likely make a bunch of people mad no matter what.

    The author would prefer that we don’t spend much time focusing on these solutions, but will document them here just for informational purposes. Also for these purposes we will only be discussing the push method on Vec, since any solution for that generalizes cleanly to everything else.

    Broadly speaking, there are two schools of thought here: fallible operations should just be methods, or fallible operations should be distinguished at the type level. Basically, should you be able to write vec.push(x); vec.try_push(y), or will you somehow obtain a special kind of Vec on which vec.push(x) returns a Result?

    It should be noted that this appears to be a source of massive disagreement. Even within the Gecko codebase there are supporters of both approaches, and so it actually supports both. This is probably not a situation we should strive to emulate.

    There are a few motivations for a type-level distinction:

    • If it’s done through a default generic parameter, then code can be written generically over doing something fallibly or infallibly
    • If it’s done through a default generic parameter, it potentially enables code reuse in implementations
    • It can allow you to enforce that all operations on a Vec are performed fallibly
    • It can make usage more ergonomic (no need for try_ in front of everything)

    The first doesn’t appear to actually do much semantically. Code that’s generic over fallibility is literally the exact same as code that only uses the fallible APIs, at which point you might as well just toss an expect at the end if you want to crash on OOM. The only difference seems to be the performance difference between propagating Results vs immediately unwinding/aborting. This can certainly be significant in code that’s doing a lot of allocations, but it’s not really clear how much this matters. Especially if Result-based codegen improves (which there’s a lot of room for).

    The second is interesting, but mostly affects collection implementors. Making users deal with additional generic parameters to make implementations easier doesn’t seem very compelling.

    Also these two benefits must be weighed against the cost of default generic parameters: they don’t work very well (and may never?), and most people won’t bother to support them so using a non-default just makes you incompatible with a bunch of the ecosystem.

    The third is a bit more compelling, but has a few issues. First, it doesn’t actually enforce that a function handles all allocation failures. One can create a fresh Vec, Box, or just call into a routine that allocates like slice::sort() and types won’t do anything to prevent this. Second, it’s a fairly common pattern to fallibly reserve space, and then infallibly insert data. For instance, code like the following can be found in many places in Gecko’s codebase:

    fn process(&mut self, data: &[Item]) -> Result<Vec<Processed>, CollectionAllocErr> {
        let mut vec = FallibleVec::new();
        vec.reserve(data.len())?;

        for x in data {
            let p = process(x);
            vec.push(p).unwrap();  // Wait, is this fallible or not?
        }

        Ok(vec)
    }

    Mandating all operations be fallible can be confusing in that case (and has similar inefficiencies to the ones discussed in the previous point). Although admittedly this is a lot better in Rust with must-be-unwrapped-Results. In Gecko, “unwrapping” is often just blindly dereferencing a pointer, which is Undefined Behaviour if the allocation actually fails.

    The fourth is certainly nice-to-have, but probably not a high enough priority to create an entire separate Vec type.

    All of the type-based solutions also suffer from a fairly serious problem: they can’t implement many core traits in the fallible state. For instance, Extend::extend and Display::to_string require allocation and don’t support fallibility.

    With all that said, these are the proposed solutions:

    Method-Based

    Fairly straightforward, but with a bunch of duplicated code. Probably we would either end up implementing push in terms of try_push (which would be inefficient but easy), or with macros.

    impl<T> Vec<T> {
        fn try_push(&mut self, elem: T) -> Result<(), (T, CollectionAllocErr)> {
            if self.len() == self.capacity() {
                if let Err(e) = self.try_reserve(1) {
                    return Err((elem, e));
                }
            }
    
            // ... do actual push normally ...
            Ok(())
        }
    }

    Generic (on Vec)

    This is a sketch; the author didn’t want to put in the effort to fully crack this puzzle.

    The most notable thing is that it relies on generic associated types, which don’t actually exist yet, and probably won’t be stable until ~late 2018 (optimistically).

    trait Fallibility {
        type Result<T, E>;
        fn ok<T, E>(val: T) -> Self::Result<T, E>;
        fn err<T, E>(val: E, details: CollectionAllocErr) -> Self::Result<T, E>;
        // ... probably some other stuff here...?
    }
    
    struct Fallible;
    struct Infallible;
    
    impl Fallibility for Fallible {
        type Result<T, E> = Result<T, (E, CollectionAllocErr)>;
        fn ok<T, E>(val: T) -> Self::Result<T, E> {
            Ok(val)
        }
        fn err<T, E>(val: E, details: CollectionAllocErr) -> Self::Result<T, E> {
            Err((val, details))
        }
    }
    
    impl Fallibility for Infallible {
        type Result<T, E> = T;
        fn ok<T, E>(val: T) -> Self::Result<T, E> {
            val
        }
        fn err<T, E>(val: E, details: CollectionAllocErr) -> Self::Result<T, E> {
            unreachable!() // ??? maybe ???
        }
    }
    
    struct Vec<T, ..., F: Fallibility=Infallible> { ... }
    
    impl<T, ..., F: Fallibility> Vec<T, ..., F> {
        fn push(&mut self, elem: T) -> F::Result<(), T> {
            if self.len() == self.capacity() {
                let result = self.reserve(1);
                // ??? How do I match on this in generic code ???
                // (can't use Carrier since we need to add `elem` payload?)
                if result.is_err() {
                    // Have to move elem into closure,
                    // so can only map_err conditionally
                    return result.map_err(move |err| (elem, err));
                }
            }
    
            // ... do actual push normally ...
        }
    }

    Generic (on Alloc)

    Same basic idea as the previous design, but the Fallibility trait is folded into the Alloc trait. Then one would use FallibleHeap or InfallibleHeap, or maybe Infallible<Heap>? This forces anyone who wants to support generic allocators to support generic fallibility. It would require a complete redesign of the allocator API, blocking it on generic associated types.

    FallibleVec

    Just make a completely separate type. Includes an into_fallible(self)/into_infallible(self) conversion which is free, since there’s no actual representation change. Makes it possible to change “phases” between fallible and infallible for different parts of the program, if that’s valuable. Implementation-wise, basically identical to the method approach, but we also need to duplicate non-allocating methods just to mirror the API.

    Alternatively we could make FallibleVec<'a, T> and as_fallible(&mut self), which is a temporary view like Iterator/Entry. This is probably a bit more consistent with how we do this sort of thing. This also makes “temporary” fallibility easier, but at the cost of the ability to permanently become fallible:

    vec.as_fallible().push(x)?
    
    // vs
    
    let vec = vec.into_fallible();
    vec.push(x)?
    let vec = vec.into_infallible();
    
    // but this actually works:
    
    return vec.into_fallible()

    Unresolved questions

    • How exactly should oom=panic be implemented in the compiler?
    • How exactly should oom=panic behave for dependencies?

    Summary

    Add the method Option::filter<P>(self, predicate: P) -> Self to the standard library. This method makes it possible to easily throw away a Some value depending on a given predicate. The call opt.filter(p) is equivalent to opt.into_iter().filter(p).next().

    assert_eq!(Some(3).filter(|_| true), Some(3));
    assert_eq!(Some(3).filter(|_| false), None);
    assert_eq!(None.filter(|_| true), None);

    Motivation

    The Option type has plenty of methods, every single one intended to help the user write short code for dealing with this ubiquitous type. If we did not care about convenience when dealing with Option, the type would not have nearly as many methods.

    Just like those other methods, filter() is useful in certain situations. While it is not nearly as important as map(), it is very handy in many cases. The feedback on the corresponding rfcs-issue clearly shows that many people have encountered a situation in which filter() would have been helpful.

    Consider this tiny example:

    let api_key = std::env::var("APIKEY").ok()
        .filter(|key| key.starts_with("api"));

    Here is another example showing tree traversal with a queue:

    let mut queue = VecDeque::new();
    queue.push_back(tree.root());
    
    // We want to visit all nodes in breadth-first-search order, but stop
    // immediately once we find a leaf node.
    while let Some(node) = queue.pop_front().filter(|node| !node.is_leaf()) {
        queue.extend(node.children());
    }

    Additionally, adding filter() would make the interfaces of Option and Iterator more consistent. Both types already share a handful of methods with identical names and functionality, most importantly map(). Adding another such method would make the whole interface feel more consistent.

    In the following example the programmer can easily swap nth() and filter() statements, if they decide they want to allow the -j parameter at any position.

    let num_threads = std::env::args()
        .nth(1)
        .filter(|arg| arg.starts_with("-j"))
        .and_then(|arg| arg[2..].parse().ok());
    

    filter() can be especially useful for integration into existing method chains. Here is a slightly more complicated example, taken from an existing, real web app’s session management. Note that each line introduces a new reason to reject the session.

    // Check if there is a session-cookie
    let session = cookies.get(SESSION_COOKIE_NAME)
        // Try to decode the cookie's value as hexadecimal string
        .and_then(|cookie| hex::decode(cookie.value()).ok())
        // Make sure the session id has the correct length
        .filter(|session_id| session_id.len() == SESSION_ID_LEN)
        // Try to find the session with the given ID in the database
        .and_then(|session_id| db.find_session_by_id(session_id));

    All these examples would be harder to read without filter(). There are two main ways to achieve something equivalent to filter(p) (compared in the sketch after this list):

    • opt.into_iter().filter(p).next(): notably longer and the next() feels semantically wrong.
    • opt.and_then(|v| if p(&v) { Some(v) } else { None }): notably longer and a questionable single-line if-else.
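
    For concreteness, a minimal sketch showing that the three forms agree:

    let opt = Some(4);

    let a = opt.filter(|&v| v % 2 == 0);
    let b = opt.into_iter().filter(|&v| v % 2 == 0).next();
    let c = opt.and_then(|v| if v % 2 == 0 { Some(v) } else { None });

    assert_eq!(a, Some(4));
    assert_eq!(a, b);
    assert_eq!(b, c);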

    Guide-level explanation

    A possible documentation of the method:

    fn filter<P>(self, predicate: P) -> Self
        where P: FnOnce(&T) -> bool

    Returns None if the option is None, otherwise calls predicate with the wrapped value and returns:

    • Some(t) if predicate returns true (where t is the wrapped value), and
    • None if predicate returns false.

    This function works similarly to Iterator::filter(). You can imagine the Option<T> being an iterator over zero or one elements. filter() lets you decide which elements to keep.

    Examples

    fn is_even(n: &i32) -> bool {
        n % 2 == 0
    }
    
    assert_eq!(None.filter(is_even), None);
    assert_eq!(Some(3).filter(is_even), None);
    assert_eq!(Some(4).filter(is_even), Some(4));

    Reference-level explanation

    It is hopefully sufficiently clear how filter() is supposed to work from the explanations above. Here is one example implementation:

    impl<T> Option<T> {
        pub fn filter<P>(self, predicate: P) -> Self
            where P: FnOnce(&T) -> bool
        {
            match self {
                Some(x) => {
                    if predicate(&x) {
                        Some(x)
                    } else {
                        None
                    }
                }
                None => None,
            }
        }
    }

    Drawbacks

    It increases the size of the standard library by a tiny bit.

    Rationale and Alternatives

    • Don’t do anything.

    Unresolved questions

    Maybe filter() wouldn’t be used a lot.

    The feature proposed in this RFC is already implemented in the option-filter crate. This crate hasn’t been used a lot (only around 1500 downloads at the time of writing this). Thus, it makes sense to ask whether people would actually use the filter() method. However, there are many other reasons for not using this crate:

    • The programmer doesn’t know about the crate

    • The programmer knows about the crate, but doesn’t want to have too many tiny dependencies in their project

    • The programmer knows about the crate, but they decided it’s too much work to use the crate.

      A simple calculation: using the crate would require around 80 new characters (option-filter = "*" + extern crate option_filter; + use option_filter::OptionFilterExt;) in at least 2, probably 3, files. On the other hand, using the .and_then() workaround shown above would only need 39 more characters than filter() and wouldn’t require opening other files.

    According to the assessment of this RFC’s author, the mentioned crate goes unused for reasons independent of filter()’s usefulness.

    Reading the comments and looking at the feedback in this thread, it’s clear that there are at least some people openly requesting this feature. And to give a specific example: this RFC’s author wanted to use filter() a whole lot more often than he used some of the other methods of Option (like map_or_else() and ok_or_else()).

    This RFC was previously approved, but part of it was later withdrawn

    The crate visibility specifier was previously implemented, but later removed. For details see the summary comment.

    Summary

    This RFC seeks to clarify and streamline Rust’s story around paths and visibility for modules and crates. That story will look as follows:

    • Absolute paths should begin with a crate name, where the keyword crate refers to the current crate (other forms are linted, see below)
    • extern crate is no longer necessary, and is linted (see below); dependencies are available at the root unless shadowed.
    • The crate keyword also acts as a visibility modifier, equivalent to today’s pub(crate). Consequently, uses of bare pub on items that are not actually publicly exported are linted, suggesting crate visibility instead.
    • A foo.rs and foo/ subdirectory may coexist; mod.rs is no longer needed when placing submodules in a subdirectory.

    These changes do not require a new edition. The new features are purely additive. They can ship with allow-by-default lints, which can gradually be moved to warn-by-default and deny-by-default over time, as better tooling is developed and more code has actively made the switch.

    This RFC incorporates some text written by @withoutboats and @cramertj, who have both been involved in the long-running discussions on this topic.

    Motivation

    A major theme of this year’s roadmap is improving the learning curve and ergonomics of the core language. That’s based on overwhelming feedback that the single biggest barrier to Rust adoption is its learning curve.

    One part of Rust that has long been a source of friction for some is its module system. There are two related perspectives for improvement here: learnability and productivity:

    • Modules are not a place that Rust was trying to innovate at 1.0, but they are nevertheless often reported as one of the major stumbling blocks to learning Rust. We should fix that.

    • Even for seasoned Rustaceans, the module system has some deficiencies, as we’ll dig into below. Ideally, we can solve these problems while also making modules easier to learn.

    The core problems

    This RFC does not attempt to comprehensively solve the problems that have been raised with today’s module system. The focus is instead on high-impact problems with noninvasive solutions.

    Defining versus bringing into scope

    A persistent point of confusion is the relationship between defining an item and bringing an item into scope. First, let’s look at the rules as they exist today:

    • When you refer to items within definitions (e.g. a fn signature or body), those items must be in scope (unless you use a leading :: or super).

    • Defining an item “mounts” its name within the current crate’s module hierarchy, making it available through absolute paths.

    • All items defined within a module are also in scope throughout that module. This includes use statements, which actually define (i.e. mount) items within the current module.

    • Additional names are brought into scope through things like function parameters or generics.

    There’s a beautiful uniformity and sparseness in these rules that makes them appealing. And they turn out to be reasonably intuitive for items whose full definition is given within the module (e.g. struct definitions).

    The struggle tends to instead be with items like extern crate and mod foo; which “bring in” other crates or files. This RFC focuses on the former, so let’s explore that in more detail.

    When you write extern crate futures in your crate root, there are two consequences per the above rules:

    • The external crate futures is “mounted” at the root absolute path.
    • The external crate futures is brought into scope for the top-level module.

    When writing code at crate root, you’re able to freely refer to futures to start paths in both use statements and in references to items:

    extern crate futures;
    
    use futures::Future;
    
    fn my_poll() -> futures::Poll { ... }

    These consequences make it easy to build an incorrect mental model, in which extern crate globally adds the external crate name as something you can start any path with; this is made worse because it’s half true. (This confusion is undoubtedly influenced by the way that external package references work in many other languages, where absolute paths always begin with a package reference.) This wrong mental model works fine in the crate root, but breaks down as soon as you try it in a submodule:

    extern crate futures;
    
    mod submodule {
        // this still works fine!
        use futures::Future;
    
        // but suddenly this doesn't...
        fn my_poll() -> futures::Poll { ... }
    }

    The fact that adding a use futures; statement to the submodule makes the fn declaration work is almost worse: it reinforces the idea that external crates define names in the root namespace, but that sometimes you need to write use futures to refer to them… but not to refer to them in use declarations! This is the point where some people get exasperated by the module system, which seems to be enforcing some mysterious and pedantic distinctions. And this is perhaps worst with std, in which there’s an implicit extern crate in the root module, so that fn make_vec() -> std::vec::Vec<u8> works fine in crate root but requires use std elsewhere.
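
    To make the std case concrete, a small sketch of today’s (pre-RFC) behaviour:

    // Works in the crate root, thanks to the implicit `extern crate std;`:
    fn make_vec() -> std::vec::Vec<u8> { Vec::new() }

    mod submodule {
        use std; // without this, the `std::...` path below fails to resolve

        fn make_vec() -> std::vec::Vec<u8> { Vec::new() }
    }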

    In other words, while there are simple and consistent rules defining the module system, their consequences can feel inconsistent, counterintuitive and mysterious.

    It’s tempting to say that we can fully address these problems through better documentation and compiler diagnostics, and surely we should improve them! But for folks trying out Rust, there’s already plenty to learn, and there’s a sense that the module system is “getting in the way” early on, forcing you to stop and try to understand its particular set of rules before you can get back to trying to understand ownership and other aspects of Rust.

    This RFC instead tweaks the handling of external crates and absolute paths, so that when you apply the general rules of the module system, you get an outcome that feels more consistent and intuitive, and requires less front-loading of explanation. As we’ll see below, in practice these changes will also improve clarity and readability even for users with a full understanding of the rules.

    (We’ll revisit this example at the end of the Guide section to explain how the RFC helps.)

    Nonlocal reasoning

    There are at least two ways in which today’s module system doesn’t support local reasoning. These affect newcomers and old hands alike.

    • Is a use path talking about this crate or an external one? When reading use statements, to know the source of the import you need to have in your head a list of external crates and/or top-level modules for the current crate. It has long been idiomatic to visually group imports from the current crate separately from external imports. In general, this suggests a certain muddiness around the root namespace.

    • Is an item marked pub actually public? It’s a fairly common idiom today to have a private module that contains pub items used by its parent and siblings only. This idiom arises in part because of ergonomic concerns; writing pub(super) or pub(crate) on these internal items feels heavier. But the consequence is that, when reading code, visibility annotations tell you less than you might hope, and in general you have to walk up the module tree looking for re-exports to know exactly how public an item is.

    The mod.rs file

    A final issue, though far less important, is the use of mod.rs files when creating a directory containing submodules. There are several downsides:

    • From a learnability perspective, the fact that the paths in the module system aren’t quite in direct correspondence with the file system is another small speedbump, and in particular makes mod foo; declarations entail extra ceremony (since the parent module must be moved into a new directory). A simpler rule would be: the path to a module’s file is the path to it within Rust code, with .rs appended.
    • From an ergonomics perspective, one often ends up with many mod.rs files open, and thus must depend on editor smarts to easily navigate between them. Again, a minor but nontrivial papercut.
    • When refactoring code to introduce submodules, having to use mod.rs means you often have to move existing files around. Another papercut.

    The main benefit to mod.rs is that the code for a parent module and its children live more closely together (not necessarily desirable!) and that it provides a consistent story with lib.rs.

    Some evidence of learning struggles

    In the survey data collected in both 2016 and 2017, learnability and ergonomics issues were one of the major challenges for people using or considering Rust. While there were other features that were raised more frequently than the module system (lifetimes for example), ideally the module system, which isn’t meant to be novel, would not be a learnability problem at all!

    Here are some select quotes (these are not the only responses that mention the module system):

    Also the module system is confusing (not that I say is wrong, just confusing until you are experienced in it).

    a colleague of mine that started rust got really confused over the module system

    You had to import everything in the main module, but you also had to in submodules, but if it was only imported in a submodule it wouldn’t work.

    I especially find the modules and crates design weird and verbose

    fix the module system

    One user states that the reason they stopped using Rust was that the “module system is really unintuitive.” Similar data is present in the 2016 survey.

    Experiences along similar lines can be found in Rust forums, StackOverflow, and similar, some of which has been collected into a gist.

    The problems presented above represent a boiled down subset of the problems raised in this feedback.

    Guide-level explanation

    As we would teach it

    The following sections sketch a plausible way of teaching the module system once this RFC has been fully implemented.

    Using external dependencies

    To add an external dependency, record it in the [dependencies] section of Cargo.toml:

    [dependencies]
    serde = "1.0.0"
    

    By default, crates have an automatic dependency on std, the standard library.

    Once your dependency has been added, you can bring it or its exports into scope with use declarations:

    use std; // bring `std` itself into scope
    use std::vec::Vec;
    
    use serde::Serialize;

    Note that these use declarations all begin with a crate name.

    Once an item is in scope, you can reference it directly within definitions:

    // Both of these work, because we brought `std` and `Vec` into scope:
    fn make_vec() -> Vec<u8> { ... }
    fn make_vec() -> std::vec::Vec<u8> { ... }
    
    // Only the first of these works, because we didn't bring `serde` into scope:
    impl Serialize for MyType { ... }
    impl serde::Serialize for MyType { ... } // the name `serde` is not in scope here

    You can also reference items from a crate without bringing them into scope by writing a fully qualified path, designated by a leading ::, as follows:

    impl ::serde::Serialize for MyType { ... }

    All use declarations are interpreted as fully qualified paths, making the leading :: optional for them.

    Note: this means that you can write use serde::Serialize in any module without trouble, as long as serde is an external dependency!

    Adding a new file to your crate

    Rust crates have a distinguished entry point (generally called main.rs or lib.rs) which is used to determine the crate’s structure. Other files and directories within src/ are not automatically included in the crate. Instead, you explicitly declare submodules using mod declarations.

    Let’s see how this looks with an example. First, we might set up a directory structure like the following:

    src
    ├── cli
    │   ├── parse.rs
    │   └── usage.rs
    ├── cli.rs
    ├── main.rs
    ├── process
    │   ├── read.rs
    │   └── write.rs
    └── process.rs
    

    The intent is for the crate to have two top-level modules, cli and process, each of which contain two submodules. To turn these files into submodules, we use mod declarations as follows:

    // src/main.rs
    mod cli;
    mod process;
    // src/cli.rs
    mod parse;
    mod usage;
    // src/process.rs
    mod read;
    mod write;

    Note how these declarations follow the structure of the filesystem (except that the entry point, main.rs, has its child modules as sibling files). By default, mod declarations assume this kind of direct mapping to the filesystem; they are used to tell Rust to incorporate those files, and to set attributes on the resulting modules (as we’ll see in a moment).

    Importing items from other parts of your crate

    In Rust, all items defined in a module are private by default, which means they can only be accessed by the module defining them (or any of its submodules). If you want an item to have greater visibility, you can use a visibility modifier. The two most important of these are:

    • crate, which makes an item visible anywhere within the current crate, but not outside of it.
    • pub, which makes an item public, i.e. visible everywhere.

    For binary crates (which have no consumers), crate and pub are equivalent.

    Going back to the earlier example, we might instead write:

    // src/main.rs
    pub mod cli;
    pub mod process;
    // src/cli.rs
    pub mod parse;
    pub mod usage;
    // src/cli/usage.rs
    pub fn print_usage() { ... }
    // src/process.rs
    pub mod read;
    pub mod write;

    To refer to an item within your own crate, you can use a fully qualified path that starts with one of the following:

    • crate, to start at the root of your crate, e.g. crate::cli::usage::print_usage
    • self, to start at the current module
    • super, to start at the current module’s parent

    So we could write in main.rs:

    use crate::cli::usage;
    
    fn main() {
        // ...
        usage::print_usage()
        // ...
    }

    In general, then, fully qualified paths always start with an initial location: an external crate name, or crate/self/super.
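
    Putting the three together, a module such as src/cli/parse.rs from the example above might contain the following (helpers is a hypothetical submodule):

    use crate::process::read; // absolute path from the crate root
    use self::helpers::token; // relative to the current module, `cli::parse`
    use super::usage;         // relative to the parent module, `cli`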

    Guide-level thoughts when comparing to today’s system

    Let’s revisit one of the motivating examples. Today, you might write:

    extern crate futures;
    fn my_poll() -> futures::Poll { ... }

    and then be confused when the following doesn’t work:

    extern crate futures;
    mod submodule {
        fn my_poll() -> futures::Poll { ... }
    }

    because you’ve been led to think that extern crate brings the name into scope everywhere.

    After this RFC, you would no longer write extern crate futures. You might try to write just:

    fn my_poll() -> futures::Poll { ... }

    but the compiler would produce an error, saying that there’s no futures in scope; maybe you meant the external dependency, which you can bring into scope by writing use futures;? So you do that:

    use futures;
    fn my_poll() -> futures::Poll { ... }

    and now, when you refactor, you’re much more likely to understand that the use should come along for the ride:

    mod submodule {
        use futures;
        fn my_poll() -> futures::Poll { ... }
    }

    Together with the fact that you use crate:: in use declarations, this strongly reinforces the idea that:

    • use brings items into scope, based on paths that start by identifying the crate
    • an item needs to be in scope before you can refer to it

    Reference-level explanation

    First, a bit of terminology: a fully qualified path is a path starting with ::, which all paths in use do implicitly.

    The actual changes in this RFC are fairly small tweaks to the current module system; most of the complexity comes from the migration plans.

    The proposed migration plan is minimally disruptive; it does not require an edition.

    Basic changes

    • You can write mod bar; statements even when not in a mod.rs or equivalent; in this case, the submodules must appear within a subdirectory with the same name as the current module. Thus, foo.rs can contain mod bar; if there is also a foo/bar.rs.

      • It is not permitted to have both foo.rs and foo/mod.rs at the same point in the file system.
      • The use of mod.rs continues to be allowed without any deprecation. It is expected that tooling like Clippy will push for at least style consistency within a project, and perhaps eventually across the ecosystem.
    • We introduce crate as a new visibility specifier, shorthand for pub(crate) visibility.

    • We introduce crate as a new path component which designates the root of the current crate.

    • In a fully qualified path ::foo, resolution will first attempt to resolve to a top-level definition of foo, and otherwise fall back to available external crates.

    • Cargo will provide a new alias key for aliasing dependencies, so that e.g. users who want to use the rand crate but call its library crate random instead can now write rand = { version = "0.3", alias = "random" }.

    • We introduce several lints, which all start out allow-by-default but are expected to ratchet up over time:

      • A lint for fully qualified paths that do not begin with one of: an external crate name, crate, super, or self.

      • A lint for use of extern crate.

      • A lint against use of bare pub for items which are not reachable via some fully-pub path. That is, bare pub should truly mean public, and crate should be used for crate-level visibility.

    Resolving fully-qualified paths

    The only way to refer to an external crate without using extern crate is through a fully-qualified path.

    When resolving a fully-qualified path that begins with a name (and not crate, super, or self), we go through a two-stage process:

    • First, attempt to resolve the name as an item defined in the top-level module.
      • If successful, issue a deprecation warning, saying that the crate prefix should be used.
    • Otherwise, attempt to resolve the name as an external crate, exactly as we do with extern crate today.

    In particular, no change to the compilation model or interface between rustc and Cargo/the ambient build system is needed.

    This approach is designed for backwards compatibility, but it means that you cannot have a top-level module and an external crate with the same name. Allowing that would require all fully-qualified paths into the current crate to start with crate, which can only be done on a future edition. We can and should consider making such a change eventually, but it is not required for this RFC.
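
    To illustrate the fallback, suppose the crate root contains mod log; while Cargo.toml also lists the external log crate:

    use ::log::record;
    // Stage 1 succeeds: `log` resolves to the top-level module, so this
    // imports `crate::log::record` and issues a deprecation warning
    // suggesting the `crate::` prefix. The external `log` crate is shadowed
    // and cannot be named until one of the two is renamed.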

    Migration experience

    We will provide a high-fidelity rustfix tool that makes changes to a crate such that the lints proposed in this RFC would not fire. In particular, the tool will introduce crate:: prefixes, downgrade from pub to crate where appropriate, and remove extern crate. It must be sound (i.e. keep the meaning of code intact and keep it compiling) but may not be complete (i.e. you may still get some deprecation warnings after running it).

    Such a tool should be working with very high coverage before we consider changing any of the lints to warn-by-default.

    Drawbacks

    The most important drawback is that this RFC pushes toward ultimately changing most Rust code in existence. There is risk of this reintroducing a sense that Rust is unstable, if not handled properly. However, that risk is mitigated by several factors:

    • The fact that existing forms continue to work indefinitely.
    • The fact that we will provide migration tooling with high coverage.
    • The fact that nudges toward new forms (in the forms of lints) are introduced gradually, and only after strong tooling exists.

    Imports from within your crate become more verbose, since they require a leading crate. However, this downside is considerably mitigated if nesting in use is permitted.

    There is some concern that introducing and encouraging the use of crate as a visibility will, counter to the goals of the RFC, lead to people increasing the visibility of items rather than decreasing it (and hence increasing inter-module coupling). This could happen if, for example, an item needs to be exposed to a cousin module, where a Rust user might hesitate to make it pub but feel that crate is sufficiently “safe” (when really a refactoring is called for). While this is indeed a possibility, it’s offset by some other cultural and design factors: Rust’s design strongly encourages narrow access rights (privacy by default; immutability by default), and this orientation has a strong cultural sway within the Rust community.

    In previous discussions about deprecating extern crate, there were concerns about the impact on non-Cargo tooling, and in overall explicitness. This RFC fully addresses both concerns by leveraging the new, unambiguous nature of fully qualified paths.

    Moving crate renaming externally has implications for procedural macros with dependencies: their clients must include those dependencies without renaming them.

    Rationale and Alternatives

    The core rationale here should be clear given the detailed analysis in the motivation. The crucial insight of the design is that, by making absolute paths unambiguous about which crate they draw from, we can solve a number of confusions and papercuts with the module system.

    Edition-based migration story

    We can avoid the need for fallback in resolution by leveraging editions instead. On the current edition, we would make crate:: paths available and start warning about not using them for crate-internal paths, but we would not issue warnings about extern crate. In the next edition, we would change absolute path interpretations, such that warning-free code on the previous edition would continue to compile and have the same meaning.

    Bike-sheddy choices

    There are a few aspects of this proposal that could be colored a bit differently without fundamental change.

    • Rather than crate::top_level_module, we could consider extern::serde or something like it, which would eliminate the need for any fallback in name resolution. That would come with some significant downsides, though.

      • First, having paths typically start with a crate name, with crate referring to the current crate, provides a very simple and easy-to-understand model for paths, and it’s one that’s pretty commonly used in other languages.
      • Second, one benefit of crate is that it helps reduce confusion about paths appearing in use versus references to names elsewhere. In particular, it serves as a reminder that use paths are absolute.
    • Rather than using crate as a visibility specifier, we could use something like local. (If we used it purely as a visibility specifier, we could make it a contextual keyword). That might be preferable, since local is an adjective and is arguably more intuitive. This is an unresolved question.

    • The lint checking for pub items that are not actually public could be extended to check for all visibility levels. The RFC stuck with just pub because the ergonomics of crate make it more feasible to go from pub to crate, which should always work. It seems less feasible to ask people to annotate definitions with e.g. pub(super), though maybe this is a sign that the pub(restricted) syntax is too unergonomic or underused.

    The community discussion around modules

    For the past several months, the Rust community has been investigating the module system, its weaknesses, strengths, and areas of potential improvement. The discussion is far too wide-ranging to summarize here, so I’ll just present links.

    Two blog posts serve as milestones in the discussion, laying out a part of the argument in favor of improving the module system:

    And in addition there’s been extensive discussion on internals:

    These discussions ultimately led to two failed RFCs.

    These earlier RFCs were shooting for a more comprehensive set of improvements around the module system, and in particular both involved eliminating the need for mod declarations in common cases. However, there are enough concerns and open questions about that direction that we chose to split those more ambitious ideas off into a separate experimental RFC:

    We recognize that this is a major point of controversy and so will put aside trying to complete a full RFC on the topic at this time; however, we believe the idea has enough merit that it’s worth an experimental implementation in the compiler that we can use to gather more data, e.g. around the impact on workflow. We would still like to do this before the impl period, so that we can do that exploration during the impl period. (To be clear: experimental RFCs are to approve landing unstable features that seem promising but where we need more experience; they require a standard RFC to be merged before they can be stabilized.)

    Unresolved questions

    • How should we approach migration? Via a fallback, as proposed, or via editions? It is probably best to make this determination with more experience, e.g. after we have a rustfix tool in hand.

    Summary

    Permit nested {} groups in imports.
    Permit * in {} groups in imports.

    use syntax::{
        tokenstream::TokenTree, // >1 segments
        ext::base::{ExtCtxt, MacResult, DummyResult, MacEager}, // nested braces
        ext::build::AstBuilder,
        ext::quote::rt::Span,
    };
    
    use syntax::ast::{self, *}; // * in braces
    
    use rustc::mir::{*, transform::{MirPass, MirSource}}; // both * and nested braces

    Motivation

    The motivation is ergonomics. Prefixes are often shared among imports, especially if many imports import names from the same crate. With this nested grouping it’s more often possible to merge common import prefixes and write them once instead of writing them multiple times.

    Guide-level explanation

    Several use items with common prefix can be merged into one use item, in which the prefix is written once and all the suffixes are listed inside curly braces {}.
    All kinds of suffixes can be listed inside curly braces, including globs * and “subtrees” with their own curly braces.

    // BEFORE
    use syntax::tokenstream::TokenTree;
    use syntax::ext::base::{ExtCtxt, MacResult, DummyResult, MacEager};
    use syntax::ext::build::AstBuilder;
    use syntax::ext::quote::rt::Span;
    
    use syntax::ast;
    use syntax::ast::*;
    
    use rustc::mir::*;
    use rustc::mir::transform::{MirPass, MirSource};
    
    // AFTER
    use syntax::{
        // paths with >1 segments are permitted inside braces
        tokenstream::TokenTree,
        // nested braces are permitted as well
        ext::base::{ExtCtxt, MacResult, DummyResult, MacEager},
        ext::build::AstBuilder,
        ext::quote::rt::Span,
    };
    
    // `*` can be listed in braces too
    use syntax::ast::{self, *};
    
    // both `*` and nested braces
    use rustc::mir::{*, transform::{MirPass, MirSource}};
    
    // the prefix can be empty
    use {
        syntax::ast::*,
        rustc::mir::*,
    };
    
    // `pub` imports can use this syntax as well
    pub use self::Visibility::{self, Public, Inherited};

    A use item with merged prefixes behaves identically to several use items with all the prefixes “unmerged”.

    Reference-level explanation

    Syntax:

    IMPORT = ATTRS VISIBILITY `use` [`::`] IMPORT_TREE `;`
    
    IMPORT_TREE = `*` |
                  REL_MOD_PATH `::` `*` |
                  `{` IMPORT_TREE_LIST `}` |
                  REL_MOD_PATH `::` `{` IMPORT_TREE_LIST `}` |
                  REL_MOD_PATH [`as` IDENT]
    
    IMPORT_TREE_LIST = Ø | (IMPORT_TREE `,`)* IMPORT_TREE [`,`]
    
    REL_MOD_PATH = (IDENT `::`)* IDENT
    

    Resolution:
    First, the import tree is prefixed with ::, unless it already starts with ::, self, or super.
    Then resolution is performed as if the whole import tree were flattened, except that {self}/{self as name} are processed specially, because a::b::self is illegal.

    use a::{
        b::{self as s, c, d as e},
        f::*,
        g::h as i,
        *,
    };
    
    =>
    
    use ::a::b as s;
    use ::a::b::c;
    use ::a::b::d as e;
    use ::a::f::*;
    use ::a::g::h as i;
    use ::a::*;

    Various corner cases are resolved naturally through desugaring:

    use an::{*, *}; // Use an owl!
    
    =>
    
    use an::*;
    use an::*; // Legal, but reported as unused by `unused_imports` lint.

    Relationship with other proposals

    This RFC is an incremental improvement largely independent from other import-related proposals, but it can have effect on some other RFCs.

    Some RFCs propose new syntaxes for absolute paths in the current crate and paths from other crates. Some arguments in those proposals are based on usage statistics: “imports from other crates are more common” or “imports from the current crate are more common”. More common imports are supposed to get less verbose syntax.

    This RFC removes these statistics from the equation by reducing verbosity for all imports with a common prefix.
    For example, the difference in verbosity between A, B and C below is minimal and doesn’t depend on the number of imports.

    // A
    use extern::{
        a::b::c,
        d::e::f,
        g::h::i,
    };
    // B
    use crate::{
        a::b::c,
        d::e::f,
        g::h::i,
    };
    // C
    use {
        a::b::c,
        d::e::f,
        g::h::i,
    };

    Drawbacks

    The feature encourages (but does not require) multi-line formatting of a single import:

    use prefix::{
        MyName,
        x::YourName,
        y::Surname,
    };

    With this formatting it becomes harder to grep for use.*MyName.

    Rationale and Alternatives

    Status quo is always an alternative.

    Unresolved questions

    None so far.

    Summary

    Implement Clone and Copy for closures where possible:

    // Many closures can now be passed by-value to multiple functions:
    fn call<F: FnOnce()>(f: F) { f() }
    let hello = || println!("Hello, world!");
    call(hello);
    call(hello);
    
    // Many `Iterator` combinators are now `Copy`/`Clone`:
    let x = (1..100).map(|x| x * 5);
    let y = (100..200).map(|x| x * 2); // another `Copy` iterator, for `chain`
    let _ = x.map(|x| x - 3); // moves `x` by `Copy`ing
    let _ = x.chain(y); // moves `x` again
    let _ = x.cycle(); // `.cycle()` is only possible when `Self: Clone`
    
    // Closures which reference data mutably are not `Copy`/`Clone`:
    let mut x = 0;
    let incr_x = || x += 1;
    call(incr_x);
    call(incr_x); // ERROR: `incr_x` moved in the call above.
    
    // `move` closures implement `Clone`/`Copy` if the values they capture
    // implement `Clone`/`Copy`:
    let mut x = 0;
    let print_incr = move || { println!("{}", x); x += 1; };
    
    fn call_three_times<F: FnMut()>(mut f: F) {
        for i in 0..3 {
            f();
        }
    }
    
    call_three_times(print_incr); // prints "0", "1", "2"
    call_three_times(print_incr); // prints "0", "1", "2"

    Motivation

    Idiomatic Rust often includes liberal use of closures. Many APIs have combinator functions which wrap closures to provide additional functionality (e.g. methods in the Iterator and Future traits).

    However, closures are unique, unnameable types which do not implement Copy or Clone. This makes using closures unergonomic and limits their usability. Functions which take closures, Iterator or Future combinators, or other closure-based types by-value are impossible to call multiple times.

    One current workaround is to use the coercion from non-capturing closures to fn pointers, but this introduces unnecessary dynamic dispatch and prevents closures from capturing values, even zero-sized ones.
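
    For reference, a minimal sketch of that workaround (call is a stand-in for any function that consumes the closure by value):

    fn call<F: FnOnce()>(f: F) { f() }

    // A non-capturing closure coerces to a `fn` pointer, which is `Copy`:
    let f: fn() = || println!("Hello, world!");
    call(f);
    call(f); // fine today: `fn` pointers are `Copy`

    // But as soon as the closure captures anything, even a zero-sized value,
    // the coercion no longer applies:
    let unit = ();
    // let g: fn() = move || drop(unit); // ERROR: capturing closures don't coerce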

    This RFC solves this issue by implementing the Copy and Clone traits on closures where possible.

    Guide-level explanation

    If a non-move closure doesn’t mutate captured variables, then it is Copy and Clone:

    let x = 5;
    let print_x = || println!("{}", x); // `print_x` is `Copy + Clone`.
    
    // No-op helper function which moves a value
    fn move_it<T>(_: T) {}
    
    // Because `print_x` is `Copy`, we can pass it by-value multiple times:
    move_it(print_x);
    move_it(print_x);

    Non-move closures which mutate captured variables are neither Copy nor Clone:

    let mut x = 0;
    
    // `incr` mutates `x` and isn't a `move` closure,
    // so it's neither `Copy` nor `Clone`
    let incr = || { x += 1; };
    
    move_it(incr);
    move_it(incr); // ERROR: `incr` moved in the call above

    move closures are only Copy or Clone if the values they capture are Copy or Clone:

    let x = 5;
    
    // `x` is `Copy + Clone`, so `print_x` is `Copy + Clone`:
    let print_x = move || println!("{}", x);
    
    let foo = String::from("foo");
    // `foo` is `Clone` but not `Copy`, so `print_foo` is `Clone` but not `Copy`:
    let print_foo = move || println!("{}", foo);
    
    // Even closures which mutate variables are `Clone + Copy`
    // if their captures are `Clone + Copy`:
    let mut x = 0;
    
    // `x` is `Clone + Copy`, so `print_incr` is `Clone + Copy`:
    let print_incr = move || { println!("{}", x); x += 1; };
    move_it(print_incr);
    move_it(print_incr);
    move_it(print_incr);

    Reference-level explanation

    Closures are internally represented as structs which contain either values or references to the values of captured variables (for move or non-move closures, respectively). A closure type implements Clone or Copy if and only if all of the values in the closure’s internal representation implement Clone or Copy:

    • Non-mutating non-move closures only contain immutable references (which are Copy + Clone), so these closures are Copy + Clone.

    • Mutating non-move closures contain mutable references, which are neither Copy nor Clone, so these closures are neither Copy nor Clone.

    • move closures contain values moved out of the enclosing scope, so these closures are Clone or Copy if and only if all of the values they capture are Clone or Copy.

    The internal implementation of Clone for non-Copy closures will resemble the basic implementation generated by derive, but the order in which values are Cloned will remain unspecified.
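
    As an illustration (this is not the compiler’s actual implementation, and the struct shown is hypothetical), a closure capturing a String by value and an i32 by shared reference corresponds roughly to the following environment struct and derive-style impl:

    // Hypothetical desugaring of a closure's environment:
    struct Env<'a> {
        s: String,  // captured by value: `Clone` but not `Copy`
        n: &'a i32, // shared reference: `Copy + Clone`
    }
    
    impl<'a> Clone for Env<'a> {
        fn clone(&self) -> Self {
            // The field order here is illustrative; the RFC leaves the
            // order in which captures are cloned unspecified.
            Env { s: self.s.clone(), n: self.n }
        }
    }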

    Drawbacks

    This feature increases the complexity of the language, as it will force users to reason about which variables are being captured in order to understand whether or not a closure is Copy or Clone.

    However, this can be mitigated through error messages which point to the specific captured variables that prevent a closure from satisfying Copy or Clone bounds.

    Rationale and Alternatives

    It would be possible to implement Clone or Copy for a more minimal set of closures, such as only non-move closures, or non-mutating closures. This could make it easier to reason about exactly which closures implement Copy or Clone, but this would come at the cost of greatly decreased functionality.

    Unresolved questions

    • How can we provide high-quality, tailored error messages to indicate why a closure isn’t Copy or Clone?

    Summary

    Add compiler-generated Clone implementations for tuples and arrays of all arities and lengths, provided all of their elements are Clone.

    Motivation

    Currently, the Clone trait for arrays and tuples is implemented using a macro in libcore, for tuples of size 11 or less and for Copy arrays of size 32 or less. This breaks the uniformity of the language and annoys users.

    Also, the compiler already implements Copy for all arrays and tuples with all elements Copy, which forces the compiler to provide an implementation for Copy’s supertrait Clone. There is no reason the compiler couldn’t provide Clone impls for all arrays and tuples.

    Guide-level explanation

    Arrays and tuples of Clone elements are Clone themselves. Cloning them clones all of their elements.
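
    For example, the following all compile under this RFC (a sketch; small tuples were already Clone via the libcore macros, while the non-Copy array case is new):

    fn main() {
        // Arrays of `Clone` (but not `Copy`) elements become `Clone`:
        let a = [Box::new(1), Box::new(2), Box::new(3)];
        let b = a.clone(); // deep-clones each `Box`
    
        // Tuples of `Clone` elements are `Clone` at any arity:
        let t = (String::from("hi"), vec![1, 2, 3], 4u8);
        let u = t.clone();
    
        println!("{:?} {:?}", b, u);
    }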

    Reference-level explanation

    Make Clone a lang item, and add the following trait rules to the compiler:

    n number
    T type
    T: Clone
    ----------
    [T; n]: Clone
    
    T1,...,Tn types
    T1: Clone, ..., Tn: Clone
    ----------
    (T1, ..., Tn): Clone
    

    And add the obvious implementations of Clone::clone and Clone::clone_from as MIR shim implementations, in the same manner as drop_in_place. The implementations could also do a shallow copy if the type ends up being Copy.
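
    For a pair, the generated shims would be morally equivalent to the following free functions (a sketch only; a user crate could not write the corresponding impl itself because of coherence rules, and the real implementation is generated MIR, not library source):

    // What the `Clone::clone` shim for `(A, B)` amounts to:
    fn clone_pair<A: Clone, B: Clone>(t: &(A, B)) -> (A, B) {
        (t.0.clone(), t.1.clone())
    }
    
    // And the matching `Clone::clone_from` shim:
    fn clone_pair_from<A: Clone, B: Clone>(dest: &mut (A, B), source: &(A, B)) {
        dest.0.clone_from(&source.0);
        dest.1.clone_from(&source.1);
    }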

    Remove the macro implementations in libcore. We will still have macro implementations for other “derived” traits, such as PartialEq, Hash, etc.

    Note that independently of this RFC, we’re adding builtin Clone impls for all “scalar” types, most importantly fn pointer and fn item types (where manual impls are impossible in the foreseeable future because of higher-ranked types, e.g. for<'a> fn(SomeLocalStruct<'a>)), which are already Copy:

    T fn pointer type
    ----------
    T: Clone
    
    T fn item type
    ----------
    T: Clone
    
    And just for completeness (these are perfectly done by an impl in Rust 1.19):
    
    T int type | T uint type | T float type
    ----------
    T: Clone
    
    T type
    ----------
    *const T: Clone
    *mut T: Clone
    
    T type
    'a lifetime
    ----------
    &'a T: Clone
    
    ----------
    bool: Clone
    char: Clone
    !: Clone
    

    This was considered a bug-fix (these types are all Copy, so it’s easy to witness that they are Clone).

    Drawbacks

    The MIR shims add complexity to the compiler. Along with the derive(Clone) implementation in libsyntax, we have 2 separate sets of implementations of Clone.

    Having Copy and Clone impls for all arrays and tuples, but not PartialEq etc. impls, could be confusing to users.

    Rationale and Alternatives

    Even with all proposed expansions to Rust’s type-system, for consistency, the compiler needs to have at least some built-in Clone implementations: the type for<'a> fn(Foo<'a>) is Copy for all user-defined types Foo, but there is no way to implement Clone, which is a supertrait of Copy, for it (an impl<T> Clone for fn(T) won’t match against the higher-ranked type).
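
    A small demonstration of this constraint (a sketch using hypothetical assertion helpers):

    struct Foo<'a>(&'a str);
    
    fn assert_copy<T: Copy>() {}
    fn assert_clone<T: Clone>() {}
    
    fn main() {
        // The higher-ranked fn pointer type is `Copy`, so its supertrait
        // `Clone` must hold too; no user-written `impl<T> Clone for fn(T)`
        // can match this type.
        assert_copy::<for<'a> fn(Foo<'a>)>();
        assert_clone::<for<'a> fn(Foo<'a>)>();
    }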

    The MIR shims for Clone of arrays and tuples are actually pretty simple and don’t add much complexity after we have drop_in_place and shims for Copy types.

    The array situation

    In Rust 1.19, arrays are Clone only if they are Copy. This code does not compile:

    fn main() {
        let x = [Box::new(0)].clone(); //~ ERROR
        println!("{:?}", x[0]);
    }
    

    The reason (I think) is that there is no good way to write a variable-length array expression in macros. This wouldn’t be fixed by the first iteration of const generics. Actually, this can be done using a for-loop (ArrayVec is used here instead of a manual panic guard for simplicity, but it can be easily implemented given const generics).

    impl<T: Clone, const N: usize> Clone for [T; N] {
        fn clone(&self) -> Self {
            // `ArrayVec` tracks how many elements were initialized, so a
            // panic in `elem.clone()` drops only the already-cloned elements.
            let mut result: ArrayVec<Self> = ArrayVec::new();
            for elem in self.iter() {
                result.push(elem.clone());
            }
            result.into_inner().ok().unwrap()
        }
    }
    

    OTOH, this means that making non-Copy arrays Clone is less of a bugfix and more of a new feature. It’s however a nice feature - [Box<u32>; 1] not being Clone is an annoying and seemingly-pointless edge case.

    Implement Clone only for Copy types

    As of Rust 1.19, the compiler does not have the Clone implementations, which causes ICEs such as rust-lang/rust#25733 because Clone is a supertrait of Copy.

    One alternative, which would solve the ICEs while being conservative, would be to provide compiler implementations of Clone only for Copy arrays and for Copy tuples of size 12 or more, and maintain the libcore macros for Clone of smaller tuples (in Rust 1.19, arrays are only Clone if they are Copy).

    This would make the shims trivial (a Clone implementation for a Copy type is just a memcpy), and would not implement any features that are not needed.
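
    That is, for any Copy type the shim reduces to a bitwise copy, as in this sketch:

    // For `T: Copy`, cloning is just dereferencing: a plain memcpy.
    fn clone_of_copy<T: Copy>(x: &T) -> T {
        *x
    }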

    When we get variadic generics, we could make all tuples with Clone elements Clone. When we get const generics, we could make all arrays with Clone elements Clone.

    Use a MIR implementation of Clone for all derived impls

    The implementation at the other end of the conservative-to-radical spectrum would be to use the MIR shims for all #[derive(Clone)] implementations. This would increase uniformity by getting rid of the separate libsyntax derived implementation. However:

    1. We’ll still need the #[derive_Clone] hook in libsyntax, which would presumably result in an attribute that trait selection can see. That’s not a significant concern.

    2. The more annoying issue is that, as a workaround to trait matching being inductive, derived implementations are imperfect - see rust-lang/rust#26925. This means that we either have to solve that issue for Clone (which is decidedly non-trivial) or have some sort of type-checking for the generated MIR shims, both annoying options.

    3. A MIR shim implementation would also have to deal with edge cases such as #[repr(packed)], which normal type-checking would handle for ordinary derive. I think drop glue already encounters all of these edge cases so we have to deal with them anyway.

    Copy and Clone for closures

    We could also add implementations of Copy and Clone to closures. That is RFC #2132 and should be discussed there.

    Unresolved questions

    See Alternatives.

    Summary

    This experimental RFC lays out a high-level plan for improving Cargo’s ability to integrate with other build systems and environments. As an experimental RFC, it opens the door to landing unstable features in Cargo to try out ideas, but not to stabilizing those features, which will require follow-up RFCs. It proposes a variety of features which, in total, permit a wide spectrum of integration cases – from customizing a single aspect of Cargo to letting an external build system run almost the entire show.

    Motivation

    One of the first hurdles for using Rust in production is integrating it into your organization’s build system. The level of challenge depends on the level of integration required: it’s relatively painless to invoke Cargo from a makefile and let it fully manage building Rust code, but gets harder as you want the external build system to exert finer-grained control over how Rust code is built. The goal of this RFC is to lay out a vision for making integration at any scale much easier than it is today.

    After extensive discussion with stakeholders, there appear to be two distinct kinds of use-cases (or “customers”) involved here:

    • Mixed build systems, where building already involves a variety of language- or project-specific build systems. For this use case, the desire is to use Cargo as-is, except for some specific concerns. Those concerns take a variety of shapes: customizing caching, having a local crate registry, custom handling for native dependencies, and so on. Addressing these concerns well means adding new points of extensibility or control to Cargo.

    • Homogeneous build systems like Bazel, where there is a single prevailing build system and methodology that works across languages and projects and is expected to drive all aspects of the build. In such cases the goal of Cargo integration is largely interoperability, including easy use of the crates.io ecosystem and Rust-centric tooling, both of which expect Cargo-driven build management.

    The interoperability constraints are, in actuality, hard constraints around any kind of integration.

    In more detail, a build system integration must:

    • Make it easy for the outer build system to control the aspects of building that are under its purview (e.g. artifact management, caching, network access).
    • Make it easy to depend on arbitrary crates in the crates.io ecosystem.
    • Make it easy to use Rust tooling like rustfmt or the RLS with projects that depend on the external build system.

    A build system integration should:

    • Provide Cargo-based or Cargo-like workflows when developing Rust projects, so that documentation and guidance from the Rust community applies even when working within a different build system.
    • To the extent possible, support Cargo concepts in a smooth, first-class way in the external build system (e.g. Cargo features, profiles, etc)

    This RFC does not attempt to provide a detailed solution for all of the needed extensibility points in Cargo, but rather to outline a general plan for how to get there over time. Individual components that add significant features to Cargo will need follow-up RFCs before stabilization.

    Guide-level explanation

    The plan proposed in this RFC is to address the two use-cases from the motivation section in parallel:

    • For the mixed build system case, we will triage feature requests and work on adding further points of extensibility to Cargo based on expected impact. Each added point of extensibility should ease build system integration for another round of customers.

    • For the homogeneous build system case, we will immediately pursue extensibility points that will enable the external build system to perform many of the tasks that Cargo does today, while still meeting our interoperability constraints. We will then work on smoothing remaining rough edges, which have a high degree of overlap with the work on mixed build systems.

    In the long run, these two parallel lines of work will converge, such that we offer a complete spectrum of options (in terms of what Cargo controls versus an external system). But they start at critically different points, and working on those in parallel is the key to delivering value quickly and incrementally.

    A high-level model of what Cargo does

    Before delving into the details of the plan, it’s helpful to lay out a mental model of the work that Cargo does today, broken into several stages:

    | Step | Conceptual output | Related concerns |
    | ---- | ----------------- | ---------------- |
    | Dependency resolution | Lock file | Custom registries, mirrors, offline/local, native deps, … |
    | Build configuration | Cargo settings per crate in graph | Profiles |
    | Build lowering | A build plan: a series of steps that must be run in sequence, including rustc and binary invocations | Build scripts, plugins |
    | Build execution | Compiled artifacts | Caching |

    The first stage, dependency resolution, is the most complex; it’s where our model of semver comes into play, as well as a huge list of related concerns.

    Dependency resolution produces a lockfile, which records what crates are included in the dependency graph, coming from what sources and at what versions, as well as interdependencies. It operates independently of the requested Cargo workflow.

    The next stage is build configuration, which conceptually is where things like profiles come into play: of the crates we’re going to build, we need to decide, at a high level, “how” we’re going to build them. This configuration is at the “Cargo level of abstraction”, i.e. in terms of things like profiles rather than low-level rustc flags. There’s strong desire to make this system more expressive, for example by allowing you to always optimize certain dependencies even when otherwise in the debug profile.

    After configuration, we know at the Cargo level exactly what we want to build, but we need to lower the level of abstraction into concrete, individual steps. This is where, for example, profile information is transformed into specific rustc flags. Lowering is done independently for each crate, and results in a sequence of process invocations, interleaving calls to rustc with e.g. running the binary for a build script. You can think of these sequences as expanding what was previously a “compile this crate with this configuration” node in the dependency graph into a finer-grained set of nodes for running rustc etc.

    Finally, there’s the actual build execution, which is conceptually straightforward: we analyze the dependency graph and existing, cached artifacts, and then actually perform any un-cached build steps (in parallel when possible). Of course, this is the bread-and-butter of many external build systems, so we want to make it easy for them to tweak or entirely control this part of the process.

    The first two steps – dependency resolution and build configuration – need to operate on an entire dependency graph at once. Build lowering, by contrast, can be performed for any crate in isolation.
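
    To make the lowering output concrete, one can picture a build plan as a list of steps like the following (a sketch only; every name here is hypothetical, and designing the real interchange format is part of the proposed experimentation):

    use std::path::PathBuf;
    
    // One node of a lowered build plan: a single process invocation with
    // explicit inputs and outputs, suitable for an external executor.
    struct BuildStep {
        program: PathBuf,           // e.g. rustc, or a compiled build script
        args: Vec<String>,          // e.g. ["--crate-name", "foo", "-O"]
        env: Vec<(String, String)>, // environment for the invocation
        inputs: Vec<PathBuf>,       // files the step reads (for caching)
        outputs: Vec<PathBuf>,      // artifacts the step produces
        deps: Vec<usize>,           // indices of steps that must run first
    }
    
    type BuildPlan = Vec<BuildStep>;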

    Customizing Cargo

    A key point is that, in principle, each of these steps is separable from the others. That is, we should be able to rearchitect Cargo so that each of these steps is managed by a distinct component, and the components have a stable – and public! – way of communicating with one another. That in turn will enable replacing any particular component while keeping the others. (To be clear, the breakdown above is just a high-level sketch; in reality, we’ll need a more nuanced and layered picture of Cargo’s activities).

    This RFC proposes to provide some means of customizing Cargo’s activities at various layers and stages. The details here are very much up for grabs, and are part of the experimentation we need to do.

    Likely design constraints

    Some likely constraints for a Cargo customization/plugin system are:

    • It should be possible for Rust tools (like rustfmt, IDEs, linters) to “call Cargo” to get information or artifacts in a standardized way, while remaining oblivious to any customizations. Ideally, Cargo workflows (including custom commands) would also work transparently.

    • It should be possible to customize or swap out a small part of Cargo’s behavior without understanding or reimplementing other parts.

    • The interface for customization should be forward-compatible: existing plugins should continue to work with new versions of Cargo.

    • It should be difficult or impossible to introduce customizations that are “incoherent”, for example that result in unexpected differences in the way that rustc is invoked in different workflows (because, say, the testing workflow was customized but the normal build workflow wasn’t). In other words, customizations are subject to cross-cutting concerns, which need to be identified and factored out.

    We will iterate on the constraints to form core design principles as we experiment.

    A concrete example

    Since the above is quite hand-wavy, it’s helpful to see a very simple, concrete example of what a customization might look like. You could imagine something like the following for supplying manifest information from an external build system, rather than through Cargo.toml:

    Cargo.toml

    [plugins.bazel]
    generate-manifest = true
    

    $root/.cargo/meta.toml

    [plugins]
    
    # These dependencies cannot themselves use plugins.
    # This file is "staged" earlier than Cargo.toml
    
    bazel = "1.0" # a regular crates.io dependency
    

    Semantics

    If any plugins entry in Cargo.toml defines a generate-manifest key, then whenever Cargo would be about to return the parsed results of Cargo.toml, it will instead:

    • look for the associated plugin in .cargo/meta.toml, and ask it to generate the manifest
    • return that instead
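
    On the Cargo side, such a hook might amount to something like the following trait (purely a sketch; the trait, its types, and its name are hypothetical and would be settled during experimentation):

    use std::path::Path;
    
    // Stand-ins for Cargo's internal manifest and error types.
    struct Manifest;
    struct PluginError;
    
    // A plugin implementing this would be consulted instead of parsing
    // `Cargo.toml` whenever `generate-manifest = true` is set.
    trait ManifestProvider {
        fn generate_manifest(&self, package_root: &Path)
            -> Result<Manifest, PluginError>;
    }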

    Specifics for the homogeneous build system case

    For homogeneous build systems, there are two kinds of code that must be dealt with: code originally written using vanilla Cargo and a crate registry, and code written “natively” in the context of the external build system. Any integration has to handle the first case to have access to crates.io or a vendored mirror thereof.

    Using crates vendored from or managed by a crate registry

    Whether using a registry server or a vendored copy, if you’re building Rust code that is written using vanilla Cargo, you will at some level need to use Cargo’s dependency resolution and Cargo.toml files. In this case, the external build system should invoke Cargo for at least the dependency resolution and build configuration steps, and likely the build lowering step as well. In such a world, Cargo is responsible for planning the build (which involves largely Rust-specific concerns), but the external build system is responsible for executing it.

    A typical pattern of usage is to have a whitelist of “root dependencies” from an external registry which will be permitted as dependencies within the organization, often pinning to a specific version and set of Cargo features. This whitelist can be described as a single Cargo.toml file, which can then drive Cargo’s dependency resolution just once for the entire registry. The resulting lockfile can be used to guide vendoring and construction of a build plan for consumption by the external build system.

    One important concern is: how do you depend on code from other languages, which is being managed by the external build system? That’s a narrow version of a more general question around native dependencies, which will be addressed separately in a later section.

    Workflow and interop story

    On the external build system side, a rule or plugin will need to be written that knows how to invoke Cargo to produce a build plan corresponding to a whitelisted (and potentially vendored) registry, then translate that build plan back into appropriate rules for the build system. Thus, when doing normal builds, the external build system drives the entire process, but invokes Cargo for guidance during the planning stage.

    Using crates managed by the build system

    Many organizations want to employ their own strategy for maintaining and versioning code and for resolving dependencies, in addition to controlling build execution.

    In this case, the big question is: how can we arrange things such that the Rust tooling ecosystem can understand what the external build system is doing, and gather the information the tools need to operate?

    The possibility we’ll examine here is using Cargo purely as a conduit for information from the external build system to Rust tools (see Alternatives for more discussion). That is, tools will be able to call into Cargo in a uniform way, with Cargo subsequently just forwarding those calls along to custom user code hooking into an external build system. In this approach, Cargo.toml will generally consist of a single entry forwarding to a plugin (as in the example plugin above). The description of dependencies is then written in the external build system’s rule format. Thus, Cargo acts primarily as a workflow and tool orchestrator, since it is not involved in either planning or executing the build. Let’s dig into it.

    Workflow and interop story

    Even though the external build system is entirely handling both dependency resolution and build execution for the crates under its management, it may still use Cargo for lowering, i.e. to produce the actual rustc invocations from a higher-level configuration. Cargo will provide a way to do this.

    When developing a crate, it should be possible to invoke Cargo commands as usual. We do this via a plugin. When invoking, for example, cargo build, the plugin will translate that to a request to the external build system, which will then execute the build (possibly re-invoking Cargo for lowering). For cargo run, the same steps are followed by putting the resulting build artifact in an appropriate location, and then following Cargo’s usual logic. And so on.

    A similar story plays out when using, for example, the RLS or rustfmt. Ideally, these tools will have no idea that a Cargo plugin is in play; the information and artifacts they need can be obtained by calling Cargo in a standard way, transparently, with the underlying information coming from the external build system via the plugin. Thus the plugin for the external build system must be able to translate its dependencies back into something equivalent to a lockfile, at least.

    The complete picture

    In general, any integration with a homogeneous build system needs to be able to handle (vendored) crate registries, because access to crates.io is a hard constraint.

    Usually, you’ll want to combine the handling of these external registries with crates managed purely by the external build system, meaning that there are effectively two modes of building crates at play overall. All that’s needed to do this is a distinction within the external build system between these two kinds of dependencies, which then drives the plugin interactions accordingly.

    Cross-cutting concern: native dependencies

    One important point left out of the above explanation is the story for dependencies on non-Rust code. These dependencies should be built and managed by the external build system. But there’s a catch: existing “sys” crates on crates.io that provide core native dependencies use custom build scripts to build or discover those dependencies. We want to reroute those crates to instead use the dependencies provided by the build system.

    Here, there’s a short-term story and a long-term story.

    Short term: white lists with build script overrides

    Cargo today offers the ability to override the build script for any crate using the links key (which is generally how you signal what native dependency you’re providing), and instead provide the library location directly. This feature can be used to instead point at the output provided by the external build system. Together with whitelisting crates that use build scripts, it’s possible to use the existing crates.io ecosystem while managing native dependencies via the external build system.

    There are some downsides, though. If the sys crates change in any way – for example, altering the way they build the native dependency, or the version they use – there’s no clear heads-up that something may need to be adjusted within the external build system. It might be possible, however, to use version-specific whitelisting to side-step this issue.

    Even so, whitelisting itself is a laborious process, and in the long run there are advantages to offering a higher-level way of specifying native dependencies in the first place.

    Long term: declarative native dependencies

    Reliably building native dependencies in a cross-platform way is… challenging. Today, Rust offers some help with this through crates like gcc and pkgconfig, which provide building blocks for writing build scripts that discover or build native dependencies. But still, today, each build script is a bespoke affair, customizing the use of these crates in arbitrary ways. It’s difficult, error-prone work.

    This RFC proposes to start a long term effort to provide a more first-class way of specifying native dependencies. The hope is that we can get coverage of, say, 80% of native dependencies using a simple, high-level specification, and only in the remaining 20% have to write arbitrary code. And, in any case, such a system can provide richer information about dependencies to help avoid the downsides of the whitelisting approach.

    The likely approach here is to provide some mechanism for using a dependency as a build script, so that you could specify high-level native dependency information directly in Cargo.toml attributes, and have a general tool translate that into the appropriate build script.

    Needless to say, this approach will need significant experimentation. But if successful, it would have benefits not just for build system integration, but for using external dependencies anywhere.

    The story for externally-managed native dependencies

    Finally, in the case where the external build system is the one specifying and providing a native dependency, all we need is for that to result in the appropriate flags to the lowered rustc invocations. If the external build system is producing those lowered calls itself, it can completely manage this concern. Otherwise, we will need for the plugin interface to provide a way to plumb this information through to Cargo.

    Specifics for the mixed build system case

    Switching gears, let’s look at mixed build systems. Here, we may address the need for customization with a mixture of plugins and new core Cargo features. The primary ones on the radar right now are as follows.

    • Multiple/custom registries. There is a longstanding desire to support registries other than crates.io, e.g. for private code, and to allow them to be used in conjunction with crates.io. In particular, this is a key pain point for customers who are otherwise happy to use Cargo as-is, but want a crates.io-like experience for their own code. There’s an RFC on this topic, and more work here is planned soon. Note: here, we address the needs via a straightforward enhancement to Cargo’s features, rather than via a plugin system.

    • Network and source control. We’ve already put significant work into providing control over where sources live (through vendoring) and tools for preventing network access. However, we could do more to make the experience here first class, and to give people a greater sense of control and assurance when using Cargo on their build farm. Here again, this is probably more about flags and configuration than plugins per se.

    • Caching and artifact control. Many organizations would like to provide a shared build cache for the entire organization, across all of its projects. Here we’d likely need some kind of plugin.

    These bullets are quite vague, and that’s because, while we know there are needs here, the precise problem, let alone the solution, is not yet clear. The point, though, is that these are the most important problems we want to get our heads around in the foreseeable future.

    Additional areas where revisions are expected

    Beyond all of the above, it seems very likely that some existing features of Cargo will need to be revisited to fit with the build system integration work. For example:

    • Profiles. Putting the idea of the “build configuration” step on firmer footing will require clarifying the precise role of profiles, which today blur the line somewhat between workflows (e.g. test vs bench) and flags (e.g. --release). Moreover, integration with a homogeneous build system effectively requires that we can translate profiles on the Cargo side back and forth to something meaningful to the external build system, so that for example we can make cargo test invoke the external build system in a sensible way. Additional clarity here might help pave the way for custom profiles and other enhancements. On a very different note, it’s not currently possible to control enough about the rustc invocation for at least some integration cases, and the answer may in part lie in improvements to profiles.

    • Build scripts. Especially for homogeneous build systems, build scripts can pose some serious pain, because in general they may depend on numerous environmental factors invisibly. It may be useful to grow some ways of telling Cargo the precise inputs and outputs of the build script, declaratively.

    • Vendoring. While we have support for vendoring dependencies today, it is not treated uniformly as a mirror. We may want to tighten up Cargo’s understanding, possibly by treating vendoring in a more first-class way.

    There are undoubtedly other aspects of Cargo that will need to be touched to achieve better build system integration; the plan as a whole is predicated on making Cargo much more modular, which is bound to reveal concerns that should be separated. As with everything else in this RFC, user-facing changes will require a full RFC prior to stabilization.

    Reference-level explanation

    This is an experimental RFC. Reference-level details will be presented in follow-up RFCs after experimentation has concluded.

    Drawbacks

    It’s somewhat difficult to state drawbacks for such a high-level plan; they’re more likely to arise through the particulars.

    That said, it’s plausible that following the plan in this RFC will result in greater overall complexity for Cargo. The key to managing this complexity will be ensuring that it’s surfaced only on an as-needed basis. That is, uses of Cargo in the pure crates.io ecosystem should not become more complex – if anything, they should become more streamlined, through improvements to features like profiles, build scripts, and the handling of native dependencies.

    Rationale and Alternatives

    Numerous organizations we’ve talked to who are considering, or already are, running Rust in production complain about difficulties with build system integration. There’s often a sense that Cargo “does too much” or is “too opinionated”, in a way that works fine for the crates.io ecosystem but is “not realistic” when integrating into larger build systems.

    It’s thus critical to take steps to smooth integration, both to remove obstacles to Rust adoption, but also to establish that Cargo has an important role to play even within opinionated external build systems: coordinating with Rust tooling and workflows.

    This RFC is essentially a strategic vision, and so the alternatives are different strategies for tackling the problem of integration. Some options include:

    • Focusing entirely on one of the use-cases mentioned. For example:

      • We could decide that it’s not worthwhile to have Cargo play a role within a build system like Bazel, and instead focus on users who just need to customize a particular aspect of Cargo. However, this would be giving up on the hope of providing strong integration with Rust tooling and workflows.
      • We could decide to focus solely on the Bazel-style use-cases. But that would likely push people who would otherwise be happy to use Cargo to manage most of their build (but need to customize some aspect) to instead try to manage more of the concerns themselves.
    • Attempting to impose more control when integrating with homogeneous build systems. In the most extreme case presented above, for internal crates Cargo is little more than a middleman between Rust tooling and the external build system. We could instead support only using custom registries to manage crates, and hence always use Cargo’s dependency resolution and so on. This would, however, be a non-starter for many organizations who want a single-version, mono-repo world internally, and it’s not clear what the gains would be.

    One key open question is: what, exactly, do Rust tools need to do their work? Tool interop is a major goal for this effort, but ideally we’d support it with a minimum of fuss. It may be that the needs are simple enough that we can get away with a separate interchange format, which both Cargo and other build tools can create. As part of the “experimental” part of this RFC, the Cargo team will work with the Dev Tools team to fully enumerate their needs.

    Unresolved questions

    Since this is an experimental RFC, there are more questions here than answers. However, one question that would be good to tackle prior to acceptance is: how should we prioritize various aspects of this work? Should we have any specific customers in mind that we’re trying to target (or who, better yet, are working directly with us and plan to test and use the results)?

    Summary

    Support defining C-compatible variadic functions in Rust, via new intrinsics. Rust currently supports declaring external variadic functions and calling them from unsafe code, but does not support writing such functions directly in Rust. Adding such support will allow Rust to replace a larger variety of C libraries, avoid requiring C stubs and error-prone reimplementation of platform-specific code, improve incremental translation of C codebases to Rust, and allow implementation of variadic callbacks.

    Motivation

    Rust can currently call any possible C interface, and export almost any interface for C to call. Variadic functions represent one of the last remaining gaps in the latter. Currently, providing a variadic function callable from C requires writing a stub function in C, linking that function into the Rust program, and arranging for that stub to subsequently call into Rust. Furthermore, even with the arguments packaged into a va_list structure by C code, extracting arguments from that structure requires exceptionally error-prone, platform-specific code, for which the crates.io ecosystem provides only partial solutions for a few target architectures.

    This RFC does not propose an interface intended for native Rust code to pass variable numbers of arguments to a native Rust function, nor an interface that provides any kind of type safety. This proposal exists primarily to allow Rust to provide interfaces callable from C code.

    Guide-level explanation

    C code allows declaring a function callable with a variable number of arguments, using an ellipsis (...) at the end of the argument list. For compatibility, unsafe Rust code may export a function compatible with this mechanism.

    Such a declaration looks like this:

    pub unsafe extern "C" fn func(arg: T, arg2: T2, mut args: ...) {
        // implementation
    }

    The use of ... as the type of args at the end of the argument list declares the function as variadic. This must appear as the last argument of the function, and the function must have at least one argument before it. The function must use extern "C", and must use unsafe. To expose such a function as a symbol for C code to call directly, the function may want to use #[no_mangle] as well; however, Rust code may also pass the function to C code expecting a function pointer to a variadic function.

    The args named in the function declaration has the type core::intrinsics::VaList<'a>, where the compiler supplies a lifetime 'a that prevents the arguments from outliving the variadic function.

    To access the arguments, Rust provides the following public interfaces in core::intrinsics (also available via std::intrinsics):

    /// The argument list of a C-compatible variadic function, corresponding to the
    /// underlying C `va_list`. Opaque.
    pub struct VaList<'a> { /* fields omitted */ }
    
    // Note: the lifetime on VaList is invariant
    impl<'a> VaList<'a> {
        /// Extract the next argument from the argument list. T must have a type
        /// usable in an FFI interface.
        pub unsafe fn arg<T>(&mut self) -> T;
    
        /// Copy the argument list. Destroys the copy after the closure returns.
        pub fn copy<'ret, F, T>(&self, f: F) -> T
        where
            F: for<'copy> FnOnce(VaList<'copy>) -> T,
            T: 'ret;
    }

    The type returned from VaList::arg must have a type usable in an extern "C" FFI interface; the compiler allows all the same types returned from VaList::arg that it allows in the function signature of an extern "C" function.

    All of the corresponding C integer and float types defined in the libc crate consist of aliases for the underlying Rust types, so VaList::arg can also extract those types.

    Note that extracting an argument from a VaList follows the C rules for argument passing and promotion. In particular, C code will promote any argument smaller than a C int to an int, and promote float to double. Thus, Rust’s argument extractions for the corresponding types will extract an int or double as appropriate, and convert appropriately.
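
    For example, under the proposed interface a variadic function can extract f32 arguments even though C callers pass promoted doubles (a sketch using this RFC’s feature gate and API):

    #![feature(c_variadic)]
    
    // Sums `count` variadic arguments supplied by a C caller as `float`s.
    // Each `float` was promoted to `double` at the call site; per this RFC,
    // `arg::<f32>()` reads the promoted `double` and converts it back.
    #[no_mangle]
    pub unsafe extern "C" fn sum_floats(count: u32, mut args: ...) -> f32 {
        let mut total = 0.0f32;
        for _ in 0..count {
            let v: f32 = args.arg();
            total += v;
        }
        total
    }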

    Like the underlying platform va_list structure in C, VaList has an opaque, platform-specific representation.

    A variadic function may pass the VaList to another function. However, the lifetime attached to the VaList will prevent the variadic function from returning the VaList or otherwise allowing it to outlive that call to the variadic function. Similarly, the closure called by copy cannot return the VaList passed to it or otherwise allow it to outlive the closure.

    A function declared with extern "C" may accept a VaList parameter, corresponding to a va_list parameter in the corresponding C function. For instance, the libc crate could define the va_list variants of printf as follows:

    extern "C" {
        pub fn vprintf(format: *const c_char, ap: VaList) -> c_int;
        pub fn vfprintf(stream: *mut FILE, format: *const c_char, ap: VaList) -> c_int;
        pub fn vsprintf(s: *mut c_char, format: *const c_char, ap: VaList) -> c_int;
        pub fn vsnprintf(s: *mut c_char, n: size_t, format: *const c_char, ap: VaList) -> c_int;
    }

    Note that, per the C semantics, after passing VaList to these functions, the caller can no longer use it, hence the use of the VaList type to take ownership of the object. To continue using the object after a call to these functions, use VaList::copy to pass a copy of it instead.
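
    For example, a Rust function might print the same argument list twice by printing from a copy first (a sketch, assuming the vprintf declaration above and this RFC’s proposed location for VaList):

    #![feature(c_variadic)]
    
    use core::intrinsics::VaList;
    use libc::{c_char, c_int};
    
    extern "C" {
        fn vprintf(format: *const c_char, ap: VaList) -> c_int;
    }
    
    #[no_mangle]
    pub unsafe extern "C" fn print_twice(format: *const c_char, mut args: ...) {
        // Print once from a scoped copy; the copy is destroyed on return.
        args.copy(|ap| unsafe { vprintf(format, ap) });
        // Hand ownership of the original list to `vprintf`; per the C
        // semantics, `args` cannot be used after this call.
        vprintf(format, args);
    }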

    Conversely, an unsafe extern "C" function written in Rust may accept a VaList parameter, to allow implementing the v variants of such functions in Rust. Such a function must not specify the lifetime.

    Defining a variadic function, or calling any of these new functions, requires a feature-gate, c_variadic.

    Sample Rust code exposing a variadic function:

    #![feature(c_variadic)]
    
    #[no_mangle]
    pub unsafe extern "C" fn func(fixed: u32, mut args: ...) {
        let x: u8 = args.arg();
        let y: u16 = args.arg();
        let z: u32 = args.arg();
        println!("{} {} {} {}", fixed, x, y, z);
    }

    Sample C code calling that function:

    #include <stdint.h>
    
    void func(uint32_t fixed, ...);
    
    int main(void)
    {
        uint8_t x = 10;
        uint16_t y = 15;
        uint32_t z = 20;
        func(5, x, y, z);
        return 0;
    }
    

    Compiling and linking these two together will produce a program that prints:

    5 10 15 20
    

    Reference-level explanation

    LLVM already provides a set of intrinsics, implementing va_start, va_arg, va_end, and va_copy. The compiler will insert a call to the va_start intrinsic at the start of the function to provide the VaList argument (if used), and a matching call to the va_end intrinsic on any exit from the function. The implementation of VaList::arg will call va_arg. The implementation of VaList::copy will call va_copy, and then va_end after the closure exits.

    VaList may become a language item (#[lang="VaList"]) to attach the appropriate compiler handling.

    The compiler may need to handle the type VaList specially, in order to provide the desired parameter-passing semantics at FFI boundaries. In particular, some platforms define va_list as a single-element array, such that declaring a va_list allocates storage, but passing a va_list as a function parameter occurs by pointer. The compiler must arrange to handle both receiving and passing VaList parameters in a manner compatible with the C ABI.

    The C standard requires that the call to va_end for a va_list occur in the same function as the matching va_start or va_copy for that va_list. Some C implementations do not enforce this requirement, allowing for functions that call va_end on a passed-in va_list that they did not create. This RFC does not define a means of implementing or calling non-standard functions like these.

    Note that on some platforms, these LLVM intrinsics do not fully implement the necessary functionality, expecting the invoker of the intrinsic to provide additional LLVM IR code. On such platforms, rustc will need to provide the appropriate additional code, just as clang does.

    This RFC intentionally does not specify or expose the mechanism used to limit the use of VaList::arg only to specific types. The compiler should provide errors similar to those associated with passing types through FFI function calls.

    Drawbacks

    This feature is highly unsafe, and requires carefully written code to extract the appropriate argument types provided by the caller, based on whatever arbitrary runtime information determines those types. However, in this regard, this feature provides no more unsafety than the equivalent C code, and in fact provides several additional safety mechanisms, such as automatic handling of type promotions, lifetimes, copies, and cleanup.

    Rationale and Alternatives

    This represents one of the few C-compatible interfaces that Rust does not provide. Currently, Rust code wishing to interoperate with C has no alternative to this mechanism, other than hand-written C stubs. This also limits the ability to incrementally translate C to Rust, or to bind to C interfaces that expect variadic callbacks.

    Rather than having the compiler invent an appropriate lifetime parameter, we could simply require the unsafe code implementing a variadic function to avoid ever allowing the VaList structure to outlive it. However, if we can provide an appropriate compile-time lifetime check, doing so would make it easier to correctly write the appropriate unsafe code.

    Rather than naming the argument in the variadic function signature, we could provide a VaList::start function to return one. This would also allow calling start more than once. However, this would complicate the lifetime handling required to ensure that the VaList does not outlive the call to the variadic function.

    We could use several alternative syntaxes to declare the argument in the signature, including ...args, or listing the VaList or VaList<'a> type explicitly. The latter, however, would require care to ensure that code could not reference or alias the lifetime.

    Unresolved questions

    When implementing this feature, we will need to determine whether the compiler can provide an appropriate lifetime that prevents a VaList from outliving its corresponding variadic function.

    Currently, Rust does not allow passing a closure to C code expecting a pointer to an extern "C" function. If this becomes possible in the future, then variadic closures would become useful, and we should add them at that time.

    This RFC only supports the platform’s native "C" ABI, not any other ABI. Code may wish to define variadic functions for another ABI, and potentially more than one such ABI in the same program. However, such support should not complicate the common case. LLVM has extremely limited support for this, for only a specific pair of platforms (supporting the Windows ABI on platforms that use the System V ABI), with no generalized support in the underlying intrinsics. The LLVM intrinsics only support using the ABI of the containing function. Given the current state of the ecosystem, this RFC only proposes supporting the native "C" ABI for now. Doing so will not prevent the introduction of support for non-native ABIs in the future.

    Summary

    This RFC proposes adding support for alternative crates.io servers to be used alongside the public crates.io server. This would allow users to publish crates to their own private instance of crates.io, while still being able to use the public instance of crates.io.

    Motivation

    Cargo currently has support for getting crates from a public server, which works well for open source projects using Rust, but is problematic for closed source code. A workaround is to specify packages as Git repositories, but that loses the helpful versioning and discoverability that Cargo and crates.io provide. We would like to change this so that it is possible to have a local crates.io server which crates can be pushed to, while still making use of the public crates.io server.

    Guide-level explanation

    Registry definition specification

    We need a way to define what registries are valid for Cargo to pull from and publish to. For this purpose, we propose that users would be able to define multiple registries in a .cargo/config file. This allows the user to specify the locations of registries in one place, in a parent directory of all projects, rather than needing to configure the registry location within each project’s Cargo.toml. Once a registry has been configured with a name, each Cargo.toml can use the registry name to refer to that registry.

    Another benefit of using .cargo/config is that these files are not typically checked in to the projects’ source control. The registries might have credentials associated with them, which should not be checked in. Separating the URLs and the use of the URLs in this way encourages good security practices of not checking in credentials.

    In order to tell Cargo about a registry other than crates.io, you can specify and name it in a .cargo/config as follows, under the registries key:

    [registries]
    choose-a-name = "https://my-intranet:8080/index"
    

    Instead of choose-a-name, place the name you’d like to use to refer to this registry in your Cargo.toml files. The URL specified should contain the location of the registry index for this registry; the registry format is specified in the Registry Index Format Specification section.

    Alternatively, you can specify each registry as follows:

    [registries.choose-a-name]
    index = "https://my-intranet:8080/index"
    

    If you need to specify authentication information such as a username or password to access a registry’s index, those should be specified in a .cargo/credentials file since it has more restrictive file permissions than .cargo/config. Adding a username and password to .cargo/credentials for a registry named my-registry would look like this:

    [registries.my-registry]
    username = "myusername"
    password = "mypassword"
    

    CI

    Because this system discourages checking in the registry configuration, the registry configuration won’t be immediately available to continuous integration systems like TravisCI. However, Cargo currently supports configuring any key in .cargo/config using environment variables instead:

    Cargo can also be configured through environment variables in addition to the TOML syntax above. For each configuration key above of the form foo.bar the environment variable CARGO_FOO_BAR can also be used to define the value. For example the build.jobs key can also be defined by CARGO_BUILD_JOBS.

    To configure TravisCI to use an alternate registry named my-registry for example, you can use Travis’ encrypted environment variables feature to set:

    CARGO_REGISTRIES_MY_REGISTRY_INDEX=https://my-intranet:8080/index

    Using a dependency from another registry

    Note: this syntax will initially be implemented as an unstable cargo feature available in nightly cargo only and stabilized as it becomes ready.

    Once you’ve configured a registry (with a name, for example, my-registry) in .cargo/config, you can specify that a dependency comes from an alternate registry by using the registry key:

    [dependencies]
    secret-crate = { version = "1.0", registry = "my-registry" }
    

    Publishing to another registry; preventing unwanted publishes

    Today, Cargo allows you to add a key publish = false to your Cargo.toml to indicate that you do not want to publish a crate anywhere. In order to specify that a crate should only be published to a particular set of registries, this key will be extended to accept a list of registries that are allowed with cargo publish:

    publish = ["my-registry"]
    

    If you run cargo publish without specifying an --index argument pointing to an allowed registry, the command will fail. This prevents accidental publishes of private crates to crates.io, for example.

    Not having a publish key is equivalent to specifying publish = true, which means publishing to crates.io is allowed. publish = [] is equivalent to publish = false, meaning that publishing anywhere is disallowed.

    Running a minimal registry

    The most minimal form of a registry that Cargo can use will consist of:

    • A registry index, in the format specified in the reference-level explanation below, listing the available crates and their metadata.

    • A location serving the crate files (source tarballs) for the crates listed in the index.

    This RFC does not attempt to standardize or specify any of crates.io’s APIs, but it should be possible to take crates.io’s codebase and run it along with a registry index in order to provide crates.io’s functionality as an alternate registry.

    Crates.io

    Because crates.io’s purpose is to be a reliable host for open source crates, crates that have dependencies from registries other than crates.io will be rejected at publish time. Crates.io cannot make availability guarantees about alternate registries, so much like git dependencies today, publishing with dependencies from other registries won’t be allowed.

    In crates.io’s codebase, we will add a configuration option that specifies a list of approved alternate registry locations that dependencies may use. For private registries run using crates.io’s code, this will likely include the private registry itself plus crates.io, so that private crates are allowed to depend on open source crates. Any crates with dependencies from registries not specified in this configuration option will be rejected at publish time.

    Interaction with existing features

    This RFC is not proposing any changes to the way source replacement and cargo-vendor work; everything proposed here should be compatible with those.

    Mirrors will still be required to serve exactly the same files (matched checksums) as the source they’re mirroring.

    Reference-level explanation

    Registry index format specification

    Cargo needs to be able to get a registry index containing metadata for all crates and their dependencies available from an alternate registry in order to perform offline version resolution. The registry index for crates.io is available at https://github.com/rust-lang/crates.io-index, and this section aims to specify the format of this registry index so that other registries can provide their own registry index that Cargo will understand.

    This is version 1 of the registry index format specification. There may be other versions of the specification someday. Along with a new specification version will be a plan for supporting registries using the older specification and a migration plan for registries to upgrade the specification version their index is using.

    A valid registry index meets the following criteria:

    • The registry index is stored in a git repository so that Cargo can efficiently fetch incremental updates to the index.

    • There will be a file at the top level named config.json. This file will be a valid JSON object with the following keys:

      {
        "dl": "https://my-crates-server.com/api/v1/crates/{crate}/{version}/download",
        "api": "https://my-crates-server.com/",
        "allowed-registries": ["https://github.com/rust-lang/crates.io-index", "https://my-intranet:8080/index"]
      }
      

      The dl key is required and specifies where Cargo can download the tarballs containing the source files of the crates listed in the registry. It is templated by the strings {crate} and {version} which are replaced with the name and version of the crate to download, respectively.

      The api key is optional and specifies where Cargo can find the API server that provides the same API functionality that crates.io does today, such as publishing and searching. Without the api key, these features will not be available. This RFC is not attempting to standardize crates.io’s API in any way, although that could be a future enhancement.

      The allowed-registries key is optional and specifies the other registries that crates in this index are allowed to have dependencies on. The default will be nothing, which will mean only crates that depend on other crates in the current registry are allowed. This is currently the case for crates.io and will remain the case for crates.io going forward. Alternate registries will probably want to add crates.io to this list.

    • There will be a number of directories in the git repository.

      • 1/ - holds files for all crates whose names have one letter.
      • 2/ - holds files for all crates whose names have two letters.
      • 3/a etc - for all crates whose names have three letters, their files will be in a directory named 3, then a subdirectory named with the first letter of their name.
      • aa/aa/ etc - for all crates whose names have four or more letters, their files will be in a directory named with the first and second letters of their name, then in a subdirectory named with the third and fourth letters of their name. For example, a file for a crate named sample would be found in sa/mp/. (See the sketch after this list for a path computation in code.)
    • For each crate in the registry, there will be a file with the name of that crate in the directory structure as specified above. The file will contain metadata about each version of the crate, with one version per line. Each line will be valid JSON with, minimally, the keys as shown. More keys may be added, but Cargo may ignore them. The contents of one line are pretty-printed here for readability.

      {
          "name": "my_serde",
          "vers": "1.0.11",
          "deps": [
              {
                  "name": "serde",
                  "req": "^1.0",
                  "registry": "https://github.com/rust-lang/crates.io-index",
                  "features": [],
                  "optional": true,
                  "default_features": true,
                  "target": null,
                  "kind": "normal"
              }
          ],
          "cksum": "f7726f29ddf9731b17ff113c461e362c381d9d69433f79de4f3dd572488823e9",
          "features": {
              "default": [
                  "std"
              ],
              "derive": [
                  "serde_derive"
              ],
              "std": [
      
              ],
          },
          "yanked": false
      }
      

      The top-level keys for a crate are:

      • name: the name of the crate
      • vers: the version of the crate this row is describing
      • deps: a list of all dependencies of this crate
      • cksum: a SHA256 checksum of the crate’s downloaded tarball
      • features: a map of the features available from this crate, where each feature lists the other features or optional dependencies it enables
      • yanked: whether or not this version has been yanked

      Within the deps list, each dependency should be listed as an item in the deps array with the following keys:

      • name: the name of the dependency
      • req: the semver version requirement string on this dependency
      • registry: (new in this RFC) the registry from which this dependency is available
      • features: a list of the dependency’s features enabled by this dependency declaration
      • optional: whether this dependency is optional or not
      • default_features: whether the parent uses the default features of this dependency or not
      • target: the target platform on which this dependency is needed, or null for all targets
      • kind: one of normal, build, or dev, for a regular dependency, a build-time dependency, or a development dependency respectively. Note: this is a required field, but a small number of entries exist in the crates.io index with either a missing or null kind field due to implementation bugs.

    If a dependency’s registry is not specified, Cargo will assume the dependency can be located in the current registry. By specifying the registry of a dependency in the index, Cargo will have the information it needs to fetch crate files from all the registry indices involved without needing to involve an API server.

    New command: cargo generate-index-metadata

    Currently, the knowledge of how to create a file in the registry index format is spread between Cargo and crates.io. This RFC proposes the addition of a Cargo command that would generate this file locally for the current crate so that it can be added to the git repository using a mechanism other than a server running crates.io’s codebase.

    In order to make working with multiple registries more convenient, we would also like to support:

    • Adding a cargo add-registry command that could prompt for index URL and authentication information and place the right information in the right format in the right files to make setup for each user easier.

    • Being able to specify the API location rather than the index location, so that, for example, you could specify https://host.company.com/api/cargo/private-repo rather than https://github.com/host-company/cargo-index. We do not want to require specifying the API location, since some registries will choose not to have an API at all and only supply an index and a location for crate files. This would require the API to have a way to tell Cargo where the associated registry index is located.

    • Being able to save multiple tokens in .cargo/credentials, one per registry, so that people publishing to multiple registries don’t need to log in over and over or specify tokens on every publish.

    • Being able to specify --registry registry-name for all Cargo commands that currently take --index

    • Being able to use a dependency under a different name. Alternate registries that are not mirrors should be allowed to have crates with the same name as crates in any other registry, including crates.io. In order to allow a crate to depend on both, say, the http crate from crates.io and the http crate from a private registry, at least one will need to be renamed when listed as a dependency in Cargo.toml. RFC 2126 proposes this change as follows:

      Cargo will provide a new crate key for aliasing dependencies, so that e.g. users who want to use the rand crate but call it random instead can now write random = { version = "0.3", crate = "rand" }.

    • Being able to use environment variables to specify values in .cargo/credentials in the same way that you can use environment variables to specify values in .cargo/config

    • For registries that don’t require any authentication to access, such as public registries or registries only accessible within a firewall, we could support a shorthand where the index location (or API location when that is supported) is specified entirely within a crate dependency:

      [dependencies]
      my-crate = { version = "1.0", registry = "http://crate-mirror.org/index" }
      

      In order to discourage/disallow credentials checked in to Cargo.toml, if the URL contains a username or password, Cargo will deliberately remove it. If the registry is then inaccessible, the error message will mention that usernames and passwords in URLs in Cargo.toml are not allowed.
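      A sketch of that sanitization step, written against the url crate (the function name is hypothetical):

      use url::Url;

      fn strip_credentials(raw: &str) -> Result<Url, url::ParseError> {
          let mut url = Url::parse(raw)?;
          // These setters only fail for URLs that cannot carry
          // credentials at all, so the results can be ignored here.
          let _ = url.set_username("");
          let _ = url.set_password(None);
          Ok(url)
      }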

    Drawbacks

    Supporting alternative registries, and having multiple public registries, could fracture the ecosystem. However, we feel that supporting private registries, and the Rust adoption that could enable, outweighs the potential downsides of having multiple public registries.

    Rationale and Alternatives

    A previous RFC proposed having the registry information completely defined within Cargo.toml rather than using .cargo/config. This requires repeating the same information multiple times for multiple projects, and encourages checking in credentials that might be needed to access the registries. That RFC also didn’t specify the format for the registry index, which needs to be shared among all registries.

    An alternative design could be to support specifying the registry URL in either .cargo/config or Cargo.toml. This has the downsides of creating more choices for the user and potentially encouraging poor practices such as checking credentials into a project’s source control. The implementation of this feature would also be more complex. The upside would be supporting configuration in ways that would be more convenient in various situations.

    Unresolved questions

    • Are the names of everything what we want?

      • cargo generate-index-metadata?
      • registry = my-registry?
      • publish-registries = []?
    • What kinds of authentication parameters do we need to support in .cargo/credentials?

    Summary

    Type privacy rules are documented.
    Private-in-public errors are relaxed and turned into lints.

    Motivation

    Type privacy is implemented, but its rules still need to be documented and explained.

    The private-in-public checker is the previous incarnation of type privacy that still exists in the compiler.
    Experience shows that private-in-public errors are often considered non-intuitive, despite the rules being simple and sufficiently clear when explained.
    People often expect the private-in-public checker to flag code it is not supposed to flag, and conversely to allow code it is not supposed to allow. This creates a source of confusion.

    With type privacy implemented, private-in-public errors are no longer strictly necessary, so they can be removed from the language, thus removing the source of confusion.
    However diagnosing “private-in-public” situations early can still help programmers to prevent most of client-side type privacy errors, so “private-in-public” diagnostics can be turned into lints instead of being completely removed.
    Lints, unlike errors, can use heuristics, so “private-in-public” diagnostics can now match programmers’ intuition more closely by using reachability-based heuristics instead of just local pub annotations.

    Guide-level explanation

    Type privacy

    Type privacy ensures that a type private to some module cannot be used outside of this module (unless anonymized) without a privacy error.
    This is similar to more familiar name privacy ensuring that private items or fields can’t be named outside of their module without a privacy error.

    “Using” a type means either explicitly naming it (maybe through type aliases), or obtaining a value of that type.

    mod m {
        struct Priv; // This is a type private to module `m`
    
        // OK, public alias to the private type
        pub type Alias = Priv;
        pub type AliasOpt = Option<Priv>;
    
        // OK, public function returning a value of the private type
        pub fn get_value() -> Priv { ... }
    }
    
    // ERROR, can't name private type `m::Priv` outside of its module
    type X = m::Alias;
    
    // A type is considered private even if its primary component (type constructor)
    // is public, but it has private generic arguments.
    // ERROR, can't name private type `Option<m::Priv>` outside of its module
    type X = m::AliasOpt;
    
    fn main() {
        // ERROR, can't have a value of private type `m::Priv` outside of its module
        let x = m::get_value();
    }

    Type privacy ensures that a private type is an implementation detail of its module and you can always change it in any way (e.g. add or remove methods, add or remove trait implementations) without requiring any changes in other modules.

    Let’s imagine for a minute that type privacy doesn’t work and you can name a private type Priv through an alias or obtain its values outside of its module.
    Then let’s assume that this type implements some trait Trait at the moment. Now foreign code can freely define functions like

    fn require_trait_value<T: Trait>(arg: T) { ... }
    fn require_trait_type<T: Trait>() { ... }

    and pass Priv to them

    require_trait_value(value_of_priv);
    require_trait_type::<AliasOfPriv>();

    , so it becomes a requirement for Priv to implement Trait, and we can’t remove the implementation anymore.
    Type privacy helps to avoid such unintended requirements.

    The sentence introducing type privacy contains a clarification - “unless anonymized”.
    It means that private types can be leaked into other modules through trait objects (dynamically anonymized), or impl Trait (statically anonymized), or usual generics (statically anonymized as well).

    struct Priv;
    
    // By defining functions like these you explicitly give a promise that they will
    // always return something implementing `Trait`, maybe `Priv`, maybe some other
    // type (this is an implementation detail).
    impl Trait for Priv {}
    pub fn leak_anonymized1() -> Box<Trait> { Box::new(Priv) }
    pub fn leak_anonymized2() -> impl Trait { Priv }
    
    // Here some code outside of our module (in `liballoc`) works with objects of
    // our private type, but knows only that they are `Clone`, the specific
    // container element's type is anonymized for code in `liballoc`.
    impl Clone for Priv { fn clone(&self) -> Self { Priv } }
    let my_vec: Vec<Priv> = vec![Priv, Priv, Priv];
    let my_vec2 = my_vec.clone();

    The rules for type privacy work for traits as well, e.g. you won’t be able to do this when trait aliases are implemented

    mod m {
        trait PrivTr {}
        pub trait Alias = PrivTr;
    }
    
    // ERROR, can't name private trait `m::PrivTr` outside of its module
    fn f<T: m::Alias>() { ... }

    (Trait objects are considered types, so they are covered by previous paragraphs.)

    Private-in-public lints

    Previously type privacy was ensured by so-called private-in-public errors, which worked preventively.

    mod m {
        struct Priv;
    
        // ERROR, private type `Priv` in public interface.
        pub fn leak() -> Priv { ... }
    }
    
    // Can't obtain a value of `Priv` because for `leak` the function definition
    // itself is illegal.
    let x = m::leak();

    The logic behind the private-in-public rules is very simple: if some type has visibility vis_type, then it cannot be used in interfaces of items with visibilities vis_interface where vis_interface > vis_type.
    In particular, this code is illegal

    mod outer {
        struct S;
    
        mod inner {
            pub fn f() -> S { ... }
        }
    }

    for a simple reason - vis(f) = pub, vis(S) = pub(in outer), pub > pub(in outer). Many people found this confusing because they expected private-in-public rules to be based on crate-global reachability and not on local pub annotations.
    (Both S and f are reachable only from outer despite f being pub.)

    In addition, private-in-public rules were found to be insufficient for ensuring type privacy due to type inference being quite smart.
    As a result, type privacy checking was implemented directly - when we see the value m::leak() we just check whether its type is private or not - so the private-in-public rules became not strictly necessary for the compiler.

    However, private-in-public diagnostics are still pretty useful for humans!
    For example, if a function is defined like this

    mod m {
        struct Priv;
        pub fn f() -> Priv { ... }
    }
    

    it’s guaranteed to be unusable outside of m, because every use of it will cause a type privacy error.
    That’s probably not what the author of f wanted. Either Priv is supposed to be public, or f is supposed to be private. It would be nice to diagnose cases like this, while avoiding “false positives” like the previous example with outer/inner.
    Meet reachability-based private-in-public lints!

    Lint #1: Private types in primary interface of effectively public items

    Effective visibility of an item is how far it’s actually reexported or leaked through other means, like return types.
    Effective visibility can never be larger than nominal visibility (i.e. what pub annotation says), but it can be smaller.

    For example, in the outer/inner example the nominal visibility of f is pub, but its effective visibility is pub(in outer), because it’s neither reexported from outer, nor can it be named directly from outside of it.
    effective_vis(f) <= vis(Priv) means that the private-in-public lint #1 is not reported for f.

    “Primary interface” in the lint name means everything in the interface except for trait bounds and where clauses, those are considered secondary interface.

    trait PrivTr {}
    pub fn bad()
        -> Box<PrivTr> // WARN, private type in primary interface
    { ... }
    pub fn better<T>(arg: T)
        where T: PrivTr // OK, private trait in secondary interface
    { ... }

    This lint replaces part of the private-in-public errors. Having something private in the primary interface guarantees that the item will be unusable from outer modules due to type privacy (the primary interface is considered part of the type when type privacy is checked), so it’s very desirable to warn about this situation in advance, and this lint needs to be at least warn-by-default.

    Provisional name for the lint - private_interfaces.

    Lint #2: Private traits/types in secondary interface of effectively public items

    This lint is reported if private types or traits are found in trait bounds or where clauses of an effectively public item.

    trait PrivTr {}
    pub fn overloaded<T>(arg: T)
        where T: PrivTr // WARN, private trait in secondary interface
    { ... }

    Function overloaded has a public type, can’t leak values of any other private types, and can be freely used outside of its module without causing type privacy errors. There are reasonable use cases for such functions, for example the emulation of sealed traits (see the sketch below).
    The only suspicious part about it is documentation - what arguments can it take, exactly? The set of possible argument types is closed and determined by the implementations of the private trait PrivTr, so it’s something of a mystery unless it’s well documented by the author of overloaded.
    There are stability implications as well - the set of possible Ts is still part of overloaded’s interface, so impls of PrivTr cannot be removed backward-compatibly.
    This lint replaces part of private-in-public errors and can be reported as warn-by-default or allow-by-default.
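    For reference, the sealed-trait emulation mentioned above looks roughly like this (a sketch, not part of the RFC):

    mod private {
        pub trait Sealed {}
        impl Sealed for u8 {}
    }

    // `Sealed` appears only in the secondary interface (a supertrait bound),
    // so `Number` is usable downstream, but the set of implementors is closed:
    // foreign code cannot name `private::Sealed` to implement it.
    pub trait Number: private::Sealed {}
    impl Number for u8 {}

    Under this RFC, such code triggers the private_bounds lint rather than a hard error.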

    Provisional name for the lint - private_bounds.

    Lint #3: “Voldemort types” (it’s reachable, but I can’t name it)

    Consider this code

    mod m {
        // `S` has public nominal and effective visibility,
        // but it can't be *named* outside of `m`'s parent module.
        pub struct S;
    }
    
    // OK, can return public type `m::S` and
    // can use the returned value in outer modules.
    // BUT, we can't name the returned type, unless we have `typeof`,
    // and we don't have it yet.
    pub fn get_voldemort() -> m::S { ... }

    The “Voldemort type” (or, more often, “Voldemort trait”) pattern has legitimate uses, but often it’s just an oversight and S is supposed to be reexported and nameable from outer modules.
    The lint is supposed to report items for which effective visibility is larger than the area in which they can be named.
    This lint is new and doesn’t replace private-in-public errors, but it provides checking that many people expected from private-in-public.
    The lint should be allow-by-default or it can be placed into Clippy as an alternative.

    Provisional name for the lint - unnameable_types.

    Lint #4: private_in_public

    Some private-in-public errors are currently reported as a lint private_in_public for compatibility reasons.
    This compatibility lint will be removed and its uses will be reported as warnings by renamed_and_removed_lints.

    Reference-level explanation

    Type privacy

    How to determine visibility of a type?

    • Built-in types are considered pub (integer and floating point types, bool, char, str, !).
    • Type parameters (including Self in traits) are considered pub as well.
    • Arrays and slices inherit visibility from their element types.
      vis([T; N]) = vis([T]) = vis(T).
    • References and pointers inherit visibility from their pointee types.
      vis(&MUTABILITY T) = vis(*MUTABILITY T) = vis(T).
    • Tuple types are as visible as their least visible component.
      vis((A, B)) = min(vis(A), vis(B)).
    • Struct, union and enum types are as visible as their least visible type argument or type constructor.
      vis(Struct<A, B>) = min(vis(Struct), vis(A), vis(B)).
    • Closures and generators have the same visibility as equivalent structs defined in the same module.
      vis(CLOSURE<A, B>) = min(vis(CURRENT_MOD), vis(A), vis(B)).
    • Traits or trait types are as visible as their least visible type argument or trait constructor.
      vis(Tr<A, B>) = min(vis(Tr), vis(A), vis(B)).
    • Trait objects and impl Trait types are as visible as their least visible component.
      vis(TrA + TrB) = vis(impl TrA + TrB) = min(vis(TrA), vis(TrB)).
    • Non-normalizable associated types are as visible as their least visible component.
      vis(<Type as Trait>::AssocType) = min(vis(Type), vis(Trait)).
    • Function pointer types are as visible as the least visible type in their signatures.
      vis(fn(A, B) -> R) = min(vis(A), vis(B), vis(R)).
    • Function item types are as visible as their least visible component as well, but the definition of a “component” is a bit more complex.
      • For free functions and foreign functions components include signature, type parameters and the function item’s nominal visibility.
        vis(fn(A, B) -> R { foo<C> }) = min(vis(fn(A, B) -> R), vis(C), vis(foo))
      • For struct and enum variant constructors components include signature, type parameters and the constructor item’s nominal visibility.
        vis(fn(A, B) -> S<C> { S_CTOR<C> }) = min(vis(fn(A, B) -> S<C>), vis(S_CTOR)).
        vis(fn(A, B) -> E<C> { E::V_CTOR<C> }) = min(vis(fn(A, B) -> E<C>), vis(E::V_CTOR)).
        vis(S_CTOR) = min(vis(S), vis(field_1), ..., vis(field_N)).
        vis(E::V_CTOR) = vis(E).
      • For inherent methods components include signature, impl type, type parameters and the method’s nominal visibility.
        vis(fn(A, B) -> R { <Type>::foo<C> }) = min(vis(fn(A, B) -> R), vis(C), vis(Type), vis(foo)).
      • For trait methods components include signature, trait, type parameters (including impl type Self) and the method item’s nominal visibility (inherited from the trait, included automatically).
        vis(fn(A, B) -> R { <Type as Trait>::foo<C> }) = min(vis(fn(A, B) -> R), vis(C), vis(Type), vis(Trait)).
    • “Infer me” types _ are replaced with their inferred types before checking.
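    As a small illustration of how these rules compose (not from the RFC):

    mod m {
        struct Priv;    // vis(Priv) = pub(in m)
        pub struct Pub; // vis(Pub) = pub
    
        // vis(&Pub) = vis(Pub) = pub
        // vis((Pub, Priv)) = min(pub, pub(in m)) = pub(in m)
        // vis(Vec<Priv>) = min(vis(Vec), vis(Priv)) = pub(in m)
    }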

    The type privacy rule

    A type or a trait private to module m (vis(in m)) cannot be used outside of that module (vis(outside) > vis(in m)).
    Uses include naming this type or trait (possibly through aliases) or obtaining values (expressions or patterns) of this type.

    The rule is enforced non-hygienically.
    So it’s possible for a macro 2.0 to name some private type without causing name privacy errors, but it will still be reported as a type privacy violation.
    This can be partially relaxed in the future, but such relaxations are out of scope for this RFC.

    Additional restrictions for associated items

    For technical reasons it’s not always desirable or possible to fully normalize associated types before checking them for privacy.
    So, if we see <Type as Trait>::AssocType, we can reliably check only Type and Trait, but not the resulting type.
    So we must be sure the resulting type is no more private than what we can check.

    As a result, private-in-public violations for associated type definitions are still eagerly reported as errors, using the old rules based on local pub annotations and not reachability.

    struct Priv;
    pub struct Type;
    pub trait Trait {}
    
    impl Trait for Type {
        type AssocType = Priv; // ERROR, vis(Priv) < min(vis(Trait), vis(Type))
    }

    When an associated function is defined in a private impl (i.e. the impl type or the trait is private), it’s guaranteed that the function can’t be used outside of the impl’s area of visibility.
    Type privacy ensures this because associated functions have their own unique types attached to them.

    Associated constants and associated types from private impls don’t have attached unique types, so they sometimes can be used from outer modules due to sufficiently smart type inference.

    mod m {
        struct Priv;
        pub struct Pub<T>(T);
        pub trait Trait { type A; }
    
        // This is a private impl because `Pub<Priv>` is a private type
        impl Pub<Priv> {
            const C: u8 = 0;
        }
    
        // This is a private impl because `Pub<Priv>` is a private type
        impl Trait for Pub<Priv> { type A = u8; }
    }
    use m::*;
    
    // But we still can use `C` outside of `m`?
    let x = Pub::C; // With type inference this means `<Pub<Priv>>::C`

    It would be good to provide the same guarantees for associated constants and types as for associated functions.
    As a result, type privacy additionally prohibits use of any associated items from private impls.

    // ERROR, `C` is from a private impl with type `Pub<Priv>`
    let x = Pub::C;
    // ERROR, `A` is from a private impl with type `Pub<Priv>`,
    // even if the whole type of `x` is public `u8`.
    let x: <Pub<_> as Trait>::A;

    In principle, this restriction can be considered a part of the primary type privacy rule - “can’t name a private type” - if all _s (types to infer, explicit or implicit) are replaced by their inferred types before checking, so Pub and Pub<_> in the examples above become Pub<Priv>.

    Lints

    Effective visibility of an item is determined by the outermost module into which it can be leaked through

    • chain of public parent modules (they make it directly nameable)
    • chains of reexports or type aliases (they make it nameable through aliases)
    • functions, constants, fields “returning” the value of this item, if the item is a type
    • maybe something else if deemed necessary, but probably not macros 2.0.

    (Here we consider the “whole universe” a module too for uniformity.)
    If effective visibility of an item is larger than its nominal visibility (pub annotation), then it’s capped by the nominal visibility.
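    For example (an illustration, not from the RFC):

    mod outer {
        mod inner {
            pub struct S; // nominal visibility: pub
        }
        pub use self::inner::S; // the reexport leaks `S` out of `inner`
    }
    // `S` is nameable as `outer::S`, so effective_vis(S) = pub,
    // which is still capped by its nominal visibility (also pub).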

    Primary interface of an item is all its interface (types of returned values, types of fields, types of fn parameters) except for bounds on generic parameters and where clauses.

    Secondary interface of an item consists of bounds on generic parameters and where clauses, including supertraits for trait items.

    Lint private_interfaces is reported when a type with visibility x is used in primary interface of an item with effective visibility y and x < y.
    This lint is warn-by-default.

    Lint private_bounds is reported when a type or trait with visibility x is used in secondary interface of an item with effective visibility y and x < y.
    This lint is warn-by-default.

    Lint unnameable_types is reported when effective visibility of a type is larger than module in which it can be named, either directly, or through reexports, or through trivial type aliases (type X = Y;, no generics on both sides).
    This lint is allow-by-default.

    The compatibility lint private_in_public is removed and is never reported.

    Drawbacks

    With

    pub fn f<T>(arg: T)
        where T: PrivateTrait
    { ... }

    being legal (even if it’s warned against by default) the set of PrivateTrait’s implementations becomes a part of f’s interface. PrivateTrait can still be freely renamed or even split into several traits though.
    rustdoc may not be fully prepared to document items with private traits in bounds, so manually written documentation explaining how to use the interface may be required.

    Rationale and Alternatives

    Names for the lints are subject to bikeshedding.

    private_interfaces and private_bounds can be merged into one lint. The rationale for keeping them separate is different probabilities of errors in case of lint violations.
    The first lint indicates an almost guaranteed error on client side, the second one is more in the “missing documentation” category.

    Unresolved questions

    It’s not fully clear whether the restriction for associated type definitions is required for type privacy soundness, or whether it’s just a workaround for a technical difficulty.

    Interactions between macros 2.0 and the notions of reachability / effective visibility used for the lints are unclear.

    Summary

    Add a raw identifier format r#ident, so crates written in future language editions/versions can still use an older API that overlaps with new keywords.

    Motivation

    One of the primary examples of breaking changes in the edition RFC is to add new keywords, and specifically catch is the first candidate. However, since that’s seeking crate compatibility across editions, this would leave a crate in a newer edition unable to use catch identifiers in the API of a crate in an older edition. @matklad found 28 crates using catch identifiers, some public.

    A raw syntax that’s always an identifier would allow these to remain compatible, so one can write r#catch where catch-as-identifier is needed.

    Guide-level explanation

    Although some identifiers are reserved by the Rust language as keywords, it is still possible to write them as raw identifiers using the r# prefix, like r#ident. When written this way, it will always be treated as a plain identifier equivalent to a bare ident name, never as a keyword.

    For instance, the following is an erroneous use of the match keyword:

    fn match(needle: &str, haystack: &str) -> bool {
        haystack.contains(needle)
    }
    error: expected identifier, found keyword `match`
     --> src/lib.rs:1:4
      |
    1 | fn match(needle: &str, haystack: &str) -> bool {
      |    ^^^^^
    

    It can instead be written as fn r#match(needle: &str, haystack: &str), using the r#match raw identifier, and the compiler will accept this as a true match function.
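    That is:

    fn r#match(needle: &str, haystack: &str) -> bool {
        haystack.contains(needle)
    }

    assert!(r#match("foo", "foobar"));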

    Generally when defining items, you should just avoid keywords altogether and choose a different name. Raw identifiers require the r# prefix every time they are mentioned, making them cumbersome to both the developer and users. Usually an alternative is preferable: crate -> krate, const -> constant, etc.

    However, new Rust editions may add to the list of reserved keywords, making a formerly legal identifier now interpreted otherwise. Since compatibility is maintained between crates of different editions, this could mean that code written in a new edition might not be able to name an identifier in the API of another crate. Using a raw identifier, it can still be named and used.

    //! baseball.rs in edition 2015
    pub struct Ball;
    pub struct Player;
    impl Player {
        pub fn throw(&mut self) -> Result<Ball> { ... }
        pub fn catch(&mut self, ball: Ball) -> Result<()> { ... }
    }
    //! main.rs in edition 2018 -- `catch` is now a keyword!
    use baseball::*;
    fn main() {
        let mut player = Player;
        let ball = player.throw()?;
        player.r#catch(ball)?;
    }

    Reference-level explanation

    The syntax for identifiers allows an optional r# prefix for a raw identifier, otherwise following the normal identifier rules. Raw identifiers are always interpreted as plain identifiers and never as keywords, regardless of context. They are also treated as equivalent to the same identifier written without the prefix – for instance, it’s perfectly legal to write:

    let foo = 123;
    let bar = r#foo * 2;

    Drawbacks

    • New syntax is always scary/noisy/etc.
    • It might not be intuitively “raw” to a user coming upon this the first time.

    Rationale and Alternatives

    If we don’t have any way to refer to identifiers that were legal in prior editions, but later became keywords, then this may hurt interoperability between crates of different editions. The r#ident syntax enables interoperability, and will hopefully invoke some intuition of being raw, similar to raw strings.

    The br#ident syntax is also possible, but I see no advantage over r#ident. Identifiers don’t need the same kind of distinction as str and [u8].

    A small possible alternative is to also terminate it like r#ident#, which could allow non-identifier characters to be part of a raw identifier. This could take a cue from raw strings and allow repetition for internal #, like r##my #1 ident##. That doesn’t allow a leading # or " though.

    A different possibility is to use backticks for a string-like `ident`, like Kotlin, Scala, and Swift. If it allows non-identifier chars, it could embrace escapes like \u, and have a raw-string-identifier r`slash\ident` and even r#`tick`ident`#. However, backtick identifiers are annoying to write in markdown. (e.g. `` `ident` ``)

    Backslashes could connote escaping identifiers, like \ident, perhaps surrounded like \ident\, \{ident}, etc. However, the infix RFC #1579 currently seems to be leaning towards \op syntax already.

    Alternatives which already start legal tokens, like C#’s @ident, Dart’s #ident, or alternate prefixes like identifier#catch, all break Macros 1.0 as @kennytm demonstrated:

    macro_rules! x {
        (@ $a:ident) => {};
        (# $a:ident) => {};
        ($a:ident # $b:ident) => {};
        ($a:ident) => { should error };
    }
    x!(@catch);
    x!(#catch);
    x!(identifier#catch);
    x!(keyword#catch);
    

    C# allows Unicode escapes directly in identifiers, which also separates them from keywords, so both @catch and cl\u0061ss are valid class identifiers. Java also allows Unicode escapes, but they don’t avoid keywords.

    For some new keywords, there may be contextual mitigations. In the case of catch, it couldn’t be a fully contextual keyword because catch { ... } could be a struct literal. That context might be worked around with a path, like old_edition::catch { ... } to use an identifier instead. Contexts that don’t make sense for a catch expression can just be identifiers, like foo.catch(). However, this might not be possible for all future keywords.

    There might also be a need for raw keywords in the other direction, e.g. so the older edition can still use the new catch functionality somehow. I think this particular case is already served well enough by do catch { ... }, if we choose to stabilize it that way. Perhaps br#keyword could be used for this, but that may not be a good intuitive relationship.

    Unresolved questions

    • Do macros need any special care with such identifier tokens?
    • Should diagnostics use the r# syntax when printing identifiers that overlap keywords?
    • Does rustdoc need to use the r# syntax? e.g. to document pub use old_edition::*

    Summary

    The use …::{… as …} syntax can now accept _ as an alias to a trait, to import only the implementations of that trait.

    Motivation

    Sometimes, we might need to use a trait to be able to use its methods on a type in our code. However, we might also not want to import the trait symbol (because we redefine it, for instance):

    // in zoo.rs
    pub trait Zoo {
      fn zoo(&self) -> u32;
    }
    
    // several impls here
    // …
    // in main.rs
    struct Zoo {
      // …
    }
    
    fn main() {
      let x = "foo";
      let y = x.zoo(); // won’t compile because `zoo::Zoo` is not in scope
    }

    To solve this, we need to import the trait:

    // in main.rs
    use zoo::Zoo;
    
    struct Zoo { // wait, what happens here?
      // …
    }
    
    fn main() {
      let x = "foo";
      let y = x.zoo();
    }

    However, you can see that we’ll hit a problem here, because we define an ambiguous symbol. We have two solutions:

    • Change the name of the struct to something else.
    • Qualify the use.

    The problem is that if we qualify the use, what name do we give the trait? We’re not even referring to it directly.

    use zoo::Zoo as ZooTrait;

    This will work but seems a bit like a hack because rustc forces us to give a name to something we won’t use in our types.

    This RFC suggests to solve this by adding the possibility to explicitly state that we won’t directly refer to that trait, but we want the impls:

    use zoo::Zoo as _;

    Guide-level explanation

    Qualifying a use of a trait with as _ imports the trait’s impls but not its name. It’s handy if you don’t refer to the trait’s name in your types and you redefine that name to mean something else.

    The _ means that you “don’t care about the name rustc will use for that qualified use”.
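    Continuing the example from the motivation, the conflict resolves like this (a sketch):

    // in main.rs
    use zoo::Zoo as _; // the trait's impls are in scope, its name is not

    struct Zoo { // no ambiguity anymore
      // …
    }

    fn main() {
      let x = "foo";
      let y = x.zoo(); // compiles: the trait's methods are usable
    }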

    Reference-level explanation

    use Trait as _ needs to desugar into use Trait as SomeGenSym. With this scheme, global imports and exports can work properly with such items, i.e. import / re-export them.

    mod m {
      pub use Trait as _;
    
      // `Trait`'s impls are usable here, though its name is not in scope
    }
    
    use m::*;
    
    // `Trait`'s impls are usable here too, via the glob import

    In the case where the symbol is not a trait, this works the exact same way. However, the compiler must emit a warning about the unused import, since importing a non-trait item without a name brings nothing into scope.

    In the same way, it’s possible to use the same mechanism with extern crate for linking-only crates:

    extern crate my_crate as _;

    Drawbacks

    This RFC tries to solve a very specific problem (when you must alias a trait use). It’s just a small addition to make the syntax more “rust-ish” (it’s very easy to assume such a thing would work, given the way _ works pretty much everywhere else).

    Rationale and alternatives

    The simple alternative is to let the programmer give a name to the qualified import, which is not a big deal, but is a bit ugly.

    Unresolved questions

    Summary

    This RFC proposes the addition of a modulo method with more useful and mathematically regular properties than the built-in remainder % operator when the dividend or divisor is negative, along with the associated division method.

    For previous discussion, see: https://internals.rust-lang.org/t/mathematical-modulo-operator/5952.

    Motivation

    The behaviour of division and modulo, as implemented by Rust’s (truncated) division / and remainder (or truncated modulo) % operators, with respect to negative operands is unintuitive and has fewer useful mathematical properties than that of other varieties of division and modulo, such as flooring and Euclidean[1]. While there are good reasons for this design decision[2], having convenient access to a modulo operation, in addition to the remainder is very useful, and has often been requested[3][4][5][6][7].

    Guide-level explanation

    // Comparison of the behaviour of Rust's truncating division
    // and remainder, vs Euclidean division & modulo.
    (-8 / 3,          -8 % 3)           // (-2, -2)
    ((-8).div_euc(3), (-8).mod_euc(3))  // (-3,  1)

    Euclidean division & modulo for integers and floating-point numbers will be achieved using the div_euc and mod_euc methods. The % operator has identical behaviour to mod_euc for unsigned integers. However, when using signed integers or floating-point numbers, you should be careful to consider the behaviour you want: often Euclidean modulo will be more appropriate.

    Reference-level explanation

    It is important to have both division and modulo methods, as the two operations are intrinsically linked[8], though it is often the modulo operator that is specifically requested.

    A complete implementation of Euclidean modulo would involve adding 8 methods to the integer primitives in libcore/num/mod.rs and 2 methods to the floating-point primitives in libcore/num and libstd:

    // Implemented for all numeric primitives.
    fn div_euc(self, rhs: Self) -> Self;
    
    fn mod_euc(self, rhs: Self) -> Self;
    
    // Implemented for all integer primitives (signed and unsigned).
    fn checked_div_euc(self, other: Self) -> Option<Self>;
    fn overflowing_div_euc(self, rhs: Self) -> (Self, bool);
    fn wrapping_div_euc(self, rhs: Self) -> Self;
    
    fn checked_mod_euc(self, other: Self) -> Option<Self>;
    fn overflowing_mod_euc(self, rhs: Self) -> (Self, bool);
    fn wrapping_mod_euc(self, rhs: Self) -> Self;

    Sample implementations for div_euc and mod_euc on signed integers:

    fn div_euc(self, rhs: Self) -> Self {
        let q = self / rhs;
        if self % rhs < 0 {
            return if rhs > 0 { q - 1 } else { q + 1 }
        }
        q
    }
    
    fn mod_euc(self, rhs: Self) -> Self {
        let r = self % rhs;
        if r < 0 {
            return if rhs > 0 { r + rhs } else { r - rhs }
        }
        r
    }

    And on f64 (analogous to the f32 implementation):

    fn div_euc(self, rhs: f64) -> f64 {
        let q = (self / rhs).trunc();
        if self % rhs < 0.0 {
            return if rhs > 0.0 { q - 1.0 } else { q + 1.0 }
        }
        q
    }
    
    fn mod_euc(self, rhs: f64) -> f64 {
        let r = self % rhs;
        if r < 0.0 {
            return if rhs > 0.0 { r + rhs } else { r - rhs }
        }
        r
    }

    The unsigned implementations of these methods are trivial. The checked_*, overflowing_* and wrapping_* methods would operate analogously to their non-Euclidean *_div and *_rem counterparts that already exist. The edge cases are identical.
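    To illustrate (these expectations follow from the claim above about identical edge cases; they are assumptions, not spelled out in the RFC):

    assert_eq!((-8i32).checked_div_euc(3), Some(-3));
    assert_eq!((-8i32).checked_mod_euc(3), Some(1));
    assert_eq!(8i32.checked_div_euc(0), None);       // division by zero
    assert_eq!(i32::MIN.checked_div_euc(-1), None);  // overflow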

    Drawbacks

    Standard drawbacks of adding methods to primitives apply. However, with the proposed method names, there are unlikely to be conflicts downstream[9][10].

    Rationale and alternatives

    Flooring modulo is another variant that also has more useful behaviour with negative dividends than the remainder (truncating modulo). The difference in behaviour between flooring and Euclidean division & modulo come up rarely in practice, but there are arguments in favour of the mathematical properties of Euclidean division and modulo[1]. Alternatively, both methods (flooring and Euclidean) could be made available, though the difference between the two is likely specialised-enough that this would be overkill.

    The functionality could be provided as an operator. However, it is likely that the functionality of remainder and modulo are small enough that it is not worth providing a dedicated operator for the method.

    This functionality could instead reside in a separate crate, such as num (floored division & modulo is already available in this crate). However, there are strong points for inclusion into core itself:

    • Modulo as an operation is more often desirable than remainder for signed operations (so much so that it is the default in a number of languages) – the mailing list discussion has more support in favour of flooring/Euclidean division.
    • Many people are unaware that the remainder can cause problems with signed integers, and having a method displaying the other behaviour would draw attention to this subtlety.
    • The previous support for this functionality in core shows that many are keen to have this available.
    • The Euclidean or flooring modulo is used (or reimplemented) commonly enough that it is worth having it generally accessible, rather than in a separate crate that must be depended on by each project.

    Unresolved questions

    None.

    Summary

    Enables “or” patterns for if let and while let expressions as well as let and for statements. In other words, examples like the following are now possible:

    enum E<T> {
        A(T), B(T), C, D, E, F
    }
    
    // Assume the enum E and the following for the remainder of the RFC:
    use E::*;
    
    let x = A(1);
    let r = if let C | D = x { 1 } else { 2 };
    
    while let A(x) | B(x) = source() {
        react_to(x);
    }
    
    enum ParameterKind<T, L = T> { Ty(T), Lifetime(L), }
    use ParameterKind::*;
    
    // Only possible when `L = T` such that `kind : ParameterKind<T, T>`.
    let Ty(x) | Lifetime(x) = kind;
    
    for Ty(x) | Lifetime(x) in ::std::iter::once(kind) { ... }

    Motivation

    While nothing in this RFC is currently impossible in Rust, the changes the RFC proposes improve the ergonomics of control flow when dealing with enums (sum types) of three or more variants, where the program should react in one way to one group of variants and in another way to another group. Examples of where such sum types occur are protocols, languages (ASTs), and non-trivial iterators.

    The following snippet (written with this RFC):

    if let A(x) | B(x) = expr {
        do_stuff_with(x);
    }

    must be written as:

    if let A(x) = expr {
        do_stuff_with(x);
    } else if let B(x) = expr {
        do_stuff_with(x);
    }

    or, using match:

    match expr {
        A(x) | B(x) => do_stuff_with(x),
        _           => {},
    }

    This way of using match is seen multiple times in std::iter when dealing with the Chain iterator adapter. An example of this is:

        fn fold<Acc, F>(self, init: Acc, mut f: F) -> Acc
            where F: FnMut(Acc, Self::Item) -> Acc,
        {
            let mut accum = init;
            match self.state {
                ChainState::Both | ChainState::Front => {
                    accum = self.a.fold(accum, &mut f);
                }
                _ => { }
            }
            match self.state {
                ChainState::Both | ChainState::Back => {
                    accum = self.b.fold(accum, &mut f);
                }
                _ => { }
            }
            accum
        }

    which could have been written as:

        fn fold<Acc, F>(self, init: Acc, mut f: F) -> Acc
            where F: FnMut(Acc, Self::Item) -> Acc,
        {
            use ChainState::*;
            let mut accum = init;
            if let Both | Front = self.state { accum = self.a.fold(accum, &mut f); }
            if let Both | Back  = self.state { accum = self.b.fold(accum, &mut f); }
            accum
        }

    This version is both shorter and clearer.

    With while let, the ergonomics and in particular the readability can be significantly improved.

    The following snippet (written with this RFC):

    while let A(x) | B(x) = source() {
        react_to(x);
    }

    must currently be written as:

    loop {
        match source() {
            A(x) | B(x) => react_to(x),
            _ => { break; }
        }
    }

    Another major motivation of the RFC is consistency with match.

    To keep let and for statements consistent with if let, and to enable the scenario exemplified by ParameterKind in the motivation, these or-patterns are allowed at the top level of let and for statements.

    In addition to the ParameterKind example, we can also consider slice.binary_search(&x). If we are only interested in the index where x is or would be inserted, regardless of whether it was present or not, we can now simply write:

    let Ok(index) | Err(index) = slice.binary_search(&x);

    and we will get back the index in any case and continue on from there.

    Guide-level explanation

    RFC 2005, in describing the third example in the section “Examples”, refers to patterns with | in them as “or” patterns. This RFC adopts the same terminology.

    While the “sum” of all patterns in a match must be irrefutable - in other words, it must cover all cases, be exhaustive - this is not the case (currently) with if/while let, which may have a refutable pattern. This RFC does not change this.

    The RFC only extends the use of or-patterns at the top level from matches to if let and while let expressions as well as let and for statements.

    For examples, see motivation.

    Reference-level explanation

    Grammar

    if let

    The grammar in § 7.2.24 is changed from:

    if_let_expr : "if" "let" pat '=' expr '{' block '}'
                   else_tail ? ;
    

    to:

    if_let_expr : "if" "let" '|'? pat [ '|' pat ] * '=' expr '{' block '}'
                   else_tail ? ;
    

    while let

    The grammar in § 7.2.25 is changed from:

    while_let_expr : [ lifetime ':' ] ? "while" "let" pat '=' expr '{' block '}' ;
    

    to:

    while_let_expr : [ lifetime ':' ] ? "while" "let" '|'? pat [ '|' pat ] * '=' expr '{' block '}' ;
    

    for

    The expr_for grammar is changed from:

    expr_for : maybe_label FOR pat IN expr_nostruct block ;
    

    to:

    expr_for : maybe_label FOR '|'? pat ('|' pat)* IN expr_nostruct block ;
    

    let statements

    The statement stmt grammar is replaced with one equivalent to:

    stmt ::= old_stmt_grammar
           | let_stmt_many
           ;
    
    let_stmt_many ::= "let" pat_two_plus "=" expr ";"
    
    pat_two_plus ::= '|'? pat [ '|' pat ] + ;
    

    Syntax lowering

    The changes proposed in this RFC with respect to if let, while let, and for can be implemented by transforming the if/while let constructs with a syntax-lowering pass into match and loop + match expressions.

    Meanwhile, let statements can be transformed into a continuation with match as described below.

    Examples, if let

    These examples are extensions on the if let RFC. Therefore, the RFC avoids duplicating any details already specified there.

    Source:

    if let |? PAT [| PAT]* = EXPR { BODY }

    Result:

    match EXPR {
        PAT [| PAT]* => { BODY }
        _ => {}
    }

    Source:

    if let |? PAT [| PAT]* = EXPR { BODY_IF } else { BODY_ELSE }

    Result:

    match EXPR {
        PAT [| PAT]* => { BODY_IF }
        _ => { BODY_ELSE }
    }

    Source:

    if COND {
        BODY_IF
    } else if let |? PAT [| PAT]* = EXPR {
        BODY_ELSE_IF
    } else {
        BODY_ELSE
    }

    Result:

    if COND {
        BODY_IF
    } else {
        match EXPR {
            |? PAT [| PAT]* => { BODY_ELSE_IF }
            _ => { BODY_ELSE }
        }
    }

    Source

    if let |? PAT [| PAT]* = EXPR {
        BODY_IF
    } else if COND {
        BODY_ELSE_IF_1
    } else if OTHER_COND {
        BODY_ELSE_IF_2
    }

    Result:

    match EXPR {
        |? PAT [| PAT]* => { BODY_IF }
        _ if COND => { BODY_ELSE_IF_1 }
        _ if OTHER_COND => { BODY_ELSE_IF_2 }
        _ => {}
    }

    Examples, while let

    The following example is an extension on the while let RFC.

    Source

    ['label:] while let |? PAT [| PAT]* = EXPR {
        BODY
    }

    Result:

    ['label:] loop {
        match EXPR {
            PAT [| PAT]* => BODY,
            _ => break
        }
    }

    Examples, for

    Assuming that the semantics of for is defined by a desugaring from:

    for PAT in EXPR_ITER {
        BODY
    }

    into:

    match IntoIterator::into_iter(EXPR_ITER) {
        mut iter => loop {
            let next = match iter.next() {
                Some(val) => val,
                None => break,
            };
            let PAT = next;
            { BODY };
        },
    };

    then the only thing that changes is that PAT may include | at the top level in the for loop and the desugaring as per the section on grammar.

    Desugaring let statements with | in the top-level pattern

    There continues to be an exhaustiveness check in let statements; however, this check will now be able to support multiple patterns.

    This is a possible desugaring that a Rust compiler may do. While such a compiler may elect to implement this differently, these semantics should be kept.

    Source:

    {
        // prefix of statements:
        stmt*
        // The let statement which is the cause for desugaring:
        let_stmt_many
        // the continuation / suffix of statements:
        stmt*
        tail_expr? // Meta-variable for optional tail expression without ; at end
    }

    Result

    {
        stmt*
        match expr {
            pat_two_plus => {
                stmt*
                tail_expr?
            }
        }
    }

    For example, the following code:

    {
        foo();
        bar();
        let Ok(index) | Err(index) = slice.binary_search(&thing);
        println!("{}", index);
        do_something_to(index)
    }

    can be desugared to

    {
        foo();
        bar();
        match slice.binary_search(&thing) {
            Ok(index) | Err(index) => {
                println!("{}", index);
                do_something_to(index)
            }
        }
    }

    It can also be desugared to:

    {
        foo();
        bar();
        let index = match slice.binary_search(&thing) {
            Ok(index) | Err(index) => index,
        };
        println!("{}", index);
        do_something_to(index)
    }

    (Both desugarings are equivalent.)

    Drawbacks

    This adds to the grammar and makes the compiler more complex.

    Rationale and alternatives

    This could simply not be done. Consistency with match is however on its own reason enough to do this.

    It could be claimed that the if/while let RFCs already mandate this RFC; rather than settle that question, this RFC simply mandates the behaviour explicitly now.

    Another alternative is to only deal with if/while let expressions but not let and for statements.

    Unresolved questions

    The exact syntax transformations are deferred to the implementation. This RFC does not mandate exactly how the ASTs should be transformed, only that the or-pattern feature be supported.

    There are no unresolved questions.

    • Feature Name: really_tagged_unions
    • Start Date: 2017-10-30
    • RFC PR: rust-lang/rfcs#2195
    • Rust Issue: N/A

    Summary

    Formally define the enum #[repr(u32, i8, etc..)] and #[repr(C)] attributes to force a non-C-like enum to have a defined layout. This serves two purposes: allowing low-level Rust code to independently initialize the tag and payload, and allowing C(++) to safely manipulate these types.

    Motivation

    Enums that contain data are very good and useful. Unfortunately, their layout is currently purposefully unspecified, which makes these kinds of enums unusable for FFI and for low-level code. To demonstrate this, this RFC will look at two examples from Firefox development where this has been a problem.

    C(++) FFI

    Consider a native Rust API for drawing a line, that uses a C-like LineStyle enum:

    // In native Rust crate
    
    pub fn draw_line(&mut self, bounds: &Rect, color: &Color, style: LineStyle) {
        ...
    }
    
    #[repr(u8)]
    pub enum LineStyle {
        Solid,
        Dotted,
        Dashed,
    }
    
    #[repr(C)]
    pub struct Rect { x: f32, y: f32, width: f32, height: f32 }
    
    #[repr(C)]
    pub struct Color { r: f32, g: f32, b: f32, a: f32 }

    This API is fairly easy for us to write a machine-checked shim for C++ code to invoke:

    // In Rust shim crate
    
    #[no_mangle]
    pub extern "C" fn wr_draw_line(
        state: &mut State, 
        bounds: &Rect, 
        color: &Color,
        style: LineStyle,
    ) {
        state.draw_line(bounds, color, style);
    } 
    // In C++ shim header
    
    
    // Autogenerated by cbindgen
    extern "C" {
    namespace wr {
    struct State; // opaque
    
    struct Rect { float x; float y; float width; float height; };
    struct Color { float r; float g; float b; float a; };
    
    enum class LineStyle: uint8_t {
        Solid,
        Dotted,
        Dashed,
    };
    
    void wr_draw_line(State *state,
                      const Rect *bounds,
                      const Color *color,
                      LineStyle style);
    } // namespace wr
    } // extern
    
    
    
    // Hand-written
    void WrDrawLine(
        wr::State* aState, 
        const wr::Rect* aRect, 
        const wr::Color* aColor, 
        wr::LineStyle aStyle
    ) {
        wr_draw_line(aState, aRect, aColor, aStyle);
    }
    

    This works well, and we’re happy.

    Now consider adding a WavyLine style, which requires an extra thickness value:

    // Native Rust crate
    
    #[repr(u8)]         // Doesn't actually do anything we can rely on now
    enum LineStyle {
        Solid,
        Dotted,
        Dashed,
        Wavy { thickness: f32 },
    }

    We cannot safely pass this to/from C(++), nor can we manipulate it there. As such, we’re forced to take the thickness as an extra argument that is just ignored most of the time:

    // Native Rust crate
    
    pub fn draw_line(
        &mut self, 
        bounds: &Rect, 
        color: &Color, 
        style: LineStyle, 
        wavy_line_thickness: f32
    ) { ... }
    
    #[repr(u8)]
    enum LineStyle {
        Solid,
        Dotted,
        Dashed,
        Wavy,
    }

    This produces a worse API for everyone, while also throwing away the type-safety benefits of enums. This trick also doesn’t scale: if you have many nested enums, the combinatorics eventually become completely intractable.

    In-Place Construction

    Popular deserialization APIs in Rust generally have a signature like deserialize<T>() -> Result<T, Error>. This works well for small values, but optimizes very poorly for large values, as Rust ends up copying the T many times. Further, in many cases we just want to overwrite an old value that we no longer care about.

    In those cases, we could potentially use an API like deserialize_from<T>(&mut T) -> Result<(), Error>. However Rust currently requires enums to be constructed “atomically”, so we can’t actually take advantage of this API if our large value is an enum.

    That is, we must do something like:

    fn deserialize_from(dest: &mut MyBigEnum) -> Result<(), Error> {
        let tag = deserialize_tag()?;
        match tag {
            A => {
                let payload = deserialize_a()?;
                *dest = A(payload);
            }
            ..
        }
        Ok(())
    }

    We must construct the entire payload out-of-place, and then move it into place at the end, even though our API is specifically designed to let us construct in-place.

    Now, this is pretty important for memory-safety in the general case, but there are many cases where this can be done safely. For instance, this is safe to do if the entire payload is plain-old-data, like [u8; 200], or if the code catches panics and fixes up the value.

    Note that one cannot do something like:

    *dest = A(mem::uninitialized());
    if let A(ref mut payload_dest) = *dest {
        deserialize_a(payload_dest);
    } else { unreachable!() }

    because enum layout optimizations make it unsound to put mem::uninitialized in an enum. That is, checking whether *dest is an A can require inspecting the payload.
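    For example, with niche optimizations an enum like the following can be a single byte, with no separate tag (a sketch of the problem, not from the RFC):

    enum E {
        A(bool), // valid payload bytes are 0 and 1
        B,       // may be encoded as the otherwise-invalid byte 2
        C,       // ... and 3
    }

    // Deciding whether a value is `B` requires reading what would be
    // `A`'s payload byte, so that byte must never be uninitialized.
    assert_eq!(std::mem::size_of::<E>(), 1);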

    To accomplish this task, we need dedicated support from the language.

    Guide-level explanation

    An enum can currently be adorned with #[repr(Int)] where Int is one of Rust’s integer types (u8, isize, etc). For C-like enums – enums which have no variants with associated data – this specifies that the enum should have the ABI of that integer type (size, alignment, and calling convention). #[repr(C)] currently just tells Rust to try to pick whatever integer type a C compiler for the target platform would use for an enum.

    With this RFC, two new guaranteed, C(++)-compatible enum layouts will be added.

    #[repr(Int)] on a non-C-like enum will now mean: the enum must be represented as a C-union of C-structs that each start with a C-like enum with #[repr(Int)]. The other fields of the structs are the payloads of the variants. This is a mouthful, so let’s look at an example. This definition:

    #[repr(Int)]
    enum MyEnum {
        A(u32),
        B(f32, u64),
        C { x: u32, y: u8 },
        D,
    }

    Has the same layout as the following:

    #[repr(C)]
    union MyEnumRepr {
        A: MyEnumVariantA,
        B: MyEnumVariantB,
        C: MyEnumVariantC,
        D: MyEnumVariantD,
    }
    
    #[repr(Int)]
    enum MyEnumTag { A, B, C, D }
    
    #[repr(C)]
    struct MyEnumVariantA(MyEnumTag, u32);
    
    #[repr(C)]
    struct MyEnumVariantB(MyEnumTag, f32, u64);
    
    #[repr(C)]
    struct MyEnumVariantC { tag: MyEnumTag, x: u32, y: u8 }
    
    #[repr(C)]
    struct MyEnumVariantD(MyEnumTag);

    Note that the structs must be repr(C), because otherwise the MyEnumTag value wouldn’t be guaranteed to have the same position in each variant.

    C++ can also correctly manipulate this enum with the following definition:

    #include <stdint.h>
    
    enum class MyEnumTag: CppEquivalentOfInt { A, B, C, D };
    struct MyEnumVariantA { MyEnumTag tag; uint32_t payload; };
    struct MyEnumVariantB { MyEnumTag tag; float _0; uint64_t _1; };
    struct MyEnumVariantC { MyEnumTag tag; uint32_t x; uint8_t y; };
    struct MyEnumVariantD { MyEnumTag tag; };
    
    union MyEnum {
        MyEnumVariantA A;
        MyEnumVariantB B;
        MyEnumVariantC C;
        MyEnumVariantD D;
    };
    

    The correct C definition is essentially the same, but with the enum class replaced with a plain integer of the appropriate type.

    This layout might be a bit surprising to those used to using tagged unions in C(++), which are commonly represented as a (tag, union) pair. There are two reasons to prefer this more complex layout. First, Rust has incidentally used this layout for a long time, so code that wants to begin relying on it will be compatible with old versions of Rust. Second, it can make slightly better use of space. For instance:

    #[repr(u8)]
    enum TwoCases {
        A(u8, u16),
        B(u16),
    }

    Becomes

    #[repr(C)]
    union TwoCasesRepr {
        A: TwoCasesVariantA,
        B: TwoCasesVariantB,
    }
    
    #[repr(u8)]
    enum TwoCasesTag { A, B }
    
    #[repr(C)]
    struct TwoCasesVariantA(TwoCasesTag, u8, u16);
    
    #[repr(C)]
    struct TwoCasesVariantB(TwoCasesTag, u16);

    Which ends up being 4 bytes large, because the TwoCasesVariantA struct can be laid out like:

    [ u8  | u8  |   u16    ]
      --    --    --    --
    

    While a (tag, union) pair would have to make it 6 bytes large:

    [ u8  | pad | u8  | pad |   u16   ]
      --    --    --    --    --    --
            ^           ^- u16 needs 16-bit align
            |
        (u8, u16) struct needs 16-bit align 
    

    However, for better compatibility with common C(++) idioms, and better ergonomics for low-level Rust programs, this RFC defines #[repr(C, Int)] on a tagged enum to specify the (tag, union) representation. Specifically the layout will be equivalent to a C-struct containing a C-like #[repr(Int)] enum followed by a C-union containing each payload.

    So for example this enum:

    #[repr(C, Int)]
    enum MyEnum {
        A(u32),
        B(f32, u64),
        C { x: u32, y: u8 },
        D,
    }
    

    Has the same layout as the following:

    #[repr(C)]
    struct MyEnumRepr {
        tag: MyEnumTag,
        payload: MyEnumPayload,
    }
    
    #[repr(Int)]
    enum MyEnumTag { A, B, C, D }
    
    #[repr(C)]
    union MyEnumPayload {
       A: u32,
       B: MyEnumPayloadB,
       C: MyEnumPayloadC,
       D: (),
    }
    
    #[repr(C)]
    struct MyEnumPayloadB(f32, u64);
    
    #[repr(C)]
    struct MyEnumPayloadC { x: u32, y: u8 }

    C++ can also correctly manipulate this enum with the following definition:

    #include <stdint.h>
    
    enum class MyEnumTag: CppEquivalentOfInt { A, B, C, D };
    struct MyEnumPayloadB { float _0; uint64_t _1;  };
    struct MyEnumPayloadC { uint32_t x; uint8_t y; };
    
    union MyEnumPayload {
       uint32_t A;
       MyEnumPayloadB B;   
       MyEnumPayloadC C;
    };
    
    struct MyEnum {
        MyEnumTag tag;
        MyEnumPayload payload;
    };
    

    If a non-C-like enum is only #[repr(C)], then the layout will be the same as #[repr(C, Int)], but the C-like tag enum will instead just be #[repr(C)] (so it will have whatever size C enums default to).

    For both layouts, it is defined for Rust programs to cast/reinterpret/transmute such an enum into the equivalent Repr definition. Separately manipulating the tag and payload is also defined. The tag and payload need only be in a consistent/initialized state when the value is matched on (which includes Dropping it).

    For instance, this code is valid (using the same definitions above):

    /// Tries to parse a `#[repr(C, u8)] MyEnum` from a custom binary format, overwriting `dest`.
    /// On Err, `dest` may be partially overwritten (but will be in a memory-safe state)
    fn parse_my_enum_from<'a>(dest: &'a mut MyEnum, input: &mut &[u8]) -> Result<(), &'static str> {
        unsafe {
            // Convert to raw repr
            let dest: &'a mut MyEnumRepr = mem::transmute(dest);
    
            // If MyEnum was non-trivial, we might match on the tag and 
            // drop_in_place the payload here to start.
    
            // Read the tag
            let tag = *input.get(0).ok_or("Couldn't Read Tag")?;
            dest.tag = match tag {
                0 => MyEnumTag::A,
                1 => MyEnumTag::B,
                2 => MyEnumTag::C,
                3 => MyEnumTag::D,
                _ => { return Err("Invalid Tag Value"); }
            };
            *input = &input[1..];
    
            // Note: it would be very bad if we panicked past this point, or if
            // the following methods didn't initialize the payload on Err!
    
            // Read the payload
            match dest.tag {
                MyEnumTag::A => parse_my_enum_a_from(&mut dest.payload.A, input),
                MyEnumTag::B => parse_my_enum_b_from(&mut dest.payload.B, input),
                MyEnumTag::C => parse_my_enum_c_from(&mut dest.payload.C, input),
                MyEnumTag::D => { Ok(()) /* do nothing */ }
            }
        }
    }

    It should be noted that Rust enums should still idiomatically have no repr annotation at all, as this allows for maximum optimization opportunities, and the precise layout is unlikely to matter. If a deterministic layout is required, repr(Int) should be preferred by default over repr(C, Int), as it has strictly better space usage and incidentally works on older versions of Rust. However repr(C, Int) is a reasonable choice for a more idiomatic-feeling tagged union, or for interoperating with an existing C(++) codebase.

    There are a few enum repr combinations that are left unspecified under this proposal, and thus produce compiler warnings:

    • repr(Int1, Int2)
    • repr(C, Int) on C-like enums
    • repr(C) on a zero-variant enum
    • repr(Int) on a zero-variant enum
    • repr(packed) on an enum
    • repr(simd) on an enum

    Reference-level explanation

    Since the whole point of this proposal is to enable low-level control, the guide-level explanation should cover all the relevant corner-cases and details in sufficient detail. All that remains is to discuss implementation details.

    It was informally decided earlier this year that repr(Int) should have the behaviour this RFC proposes, as it was being partially relied on (in that it suppressed dangerous optimizations) and it made sense to the developers. There is even a test in the rust-lang repo that was added to ensure that this behaviour doesn’t regress. So this part of the proposal is already implemented and somewhat tested on stable Rust. This RFC just seeks to codify that this won’t break in the future.

    However repr(C, Int) currently doesn’t do anything different from repr(Int). Changing this is a relatively minor tweak to the code that lowers Rust code to a particular ABI. Anyone relying on repr(C, Int) being the same as repr(Int) is relying on unspecified behaviour, but a cargobomb run should still be done just to check.

    A PR has been submitted to implement this, along with several tests.

    Drawbacks

    Half of this proposal is already implemented, and the other half has an implementation submitted (~20 line patch). The existence of this proposal can also be completely ignored by anyone who doesn’t care about it, as they can keep using the default Rust repr. This is simply making things that exist sort-of-by-accident do something useful, which is basically a pure win considering the implementation/maintenance burden is minimal.

    One minor issue with this proposal is that there’s no way to request the repr(Int) layout with the repr(C) tag size. To be blunt, this doesn’t seem very important. It’s unclear if developers should even use bare repr(C) on tagged unions, as the default C enum size is actually quite large for a tag. This is also consistent with the Rust philosophy of trying to minimize unnecessary platform-specific details. Also, a desperate Rust programmer could acquire the desired behaviour with platform-specific cfgs (Rust has to basically guess at the type of a repr(C) enum anyway).

    The remaining drawbacks amount to “what if this is the wrong interpretation”, which shall be addressed in the alternatives.

    Rationale and alternatives

    There are a few alternative interpretations of repr(Int) on a non-C-like enum.

    It should do nothing

    In which case it should probably become an error/warning. This isn’t particularly desirable, as was discussed when we decided to maintain this behaviour.

    The tag should come after the union, and/or order should be manually specified

    With the repr(C) layout, there isn’t a particularly compelling reason to move the tag around because of how padding and alignment are handled: you can’t actually save space by putting the tag after, as long as your tag is a reasonable size.

    It’s possible positioning the tag afterwards could be desirable to interoperate with a definition that is provided by a third party (hardware spec or some existing C library). However there are tons of other tag packing strategies that we also can’t handle, so we’d probably want a more robust solution for those kinds of cases anyway.

    With the repr(Int) layout, this could potentially save space (for instance, with a variant like A(u16, u8)). However the benefits are relatively minimal compared to the increased complexity. If that complexity is desirable, it can be addressed with a future extension.

    Compound variants shouldn’t automatically be marked as repr(C)

    With the repr(Int) layout this isn’t really possible, because the tag needs a deterministic position, and we can’t “partially” repr(C) a struct.

    With either layout, one can make the payload be a single repr(Rust) struct, and that will have its layout aggressively optimized, because repr(C) isn’t infectious. So this is just a matter of “what is a good default”. The FFI case clearly wants fully defined layouts, while the pure-Rust case seems like a toss up. It seems like repr(C) is therefore the better default.

    Opaque Tags

    This code isn’t valid under the main proposal:

    let mut x: Option<MyEnum> = Some(mem::uninitialized());
    if let Some(ref mut inner) = x {
        initialize(inner);
    } else { unreachable!() }

    It relies on the fact that the Some-ness of an Option (or the tag of any repr(Rust) enum) can never depend on the tag of a repr(C/Int) enum it contains. In other words, it requires repr(C/Int) enums to have opaque tags. The cost of making this work is that Option<MyEnum> would have to be larger than MyEnum.

    It would be nice for this to work, but if you really need it, you can just define #[repr(u8)] enum COption<T> { ... } and use that.

    Unresolved questions

    Currently None. 🎉

    Future Extensions

    Here are some quick sketches of future extensions which could be made to this design.

    • A field/method for the tag/payload (my_enum.tag, my_enum.payload)
      • Probably should be a field to avoid conflicts with user-defined methods
      • Might need #[repr(pub(Int))] for API design reasons
    • Compiler-generated definitions for the Repr types
      • With inherent type aliases on the enum? (MyEnum::Tag, MyEnum::Payload, MyEnum::PayloadA, etc.)
    • As discussed in previous sections, more advanced tag placement strategies?
    • Allow specifying tag’s value: #[repr(u32)] MyEnum { A(u32) = 2, B = 5 }

    Summary

    Introduce a mechanism for Cargo crates to make use of declarative build scripts, obtained from one or more of their dependencies rather than via a build.rs file. Support experimentation with declarative build scripts in the crates.io ecosystem.

    Motivation

    Cargo has many potentially desirable enhancements planned for its build process, including integrating a Cargo build process with native dependencies, and integrating with broader build systems or projects, such as massive mono-repo build systems, or Linux distributions.

    Right now, the biggest problem facing such systems involves build.rs scripts and the arbitrary things those scripts can do. Such build systems typically need the information about native dependencies that is embedded in build.rs, so that they can provide their own versions of those dependencies, or encode appropriate dependencies in another metadata format such as the dependencies of their packaging system or build system. Today, such systems often have to override the build.rs script themselves and do custom per-crate integration work, manually; there’s no way to introspect what build.rs does, or get a declarative semantic description of the build script.

    At the same time, we don’t yet have sufficiently precise information about the needs of such systems to design an ideal set of Cargo metadata on the first try. Rather than attempt to architect the perfect solution from the start, and potentially create an intermediate state that will require long-term support, we propose to allow experimentation with declarative build systems within the crates.io ecosystem, in crates supplying modular components similar to build.rs scripts. By convention, such scripts should typically read any parameters and metadata they need from Cargo.toml, in a form that other build-related software can read as well.

    Guide-level explanation

    In the [package] section of Cargo.toml, you can specify a field metabuild, whose value should be a string or list of strings, each one exactly matching the name of a dependency specified in the [build-dependencies] section. If you specify metabuild, you must not specify build, and Cargo will ignore any build.rs file.
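    
    For example (pkgc and parsegen here are hypothetical build-dependency names, reused in the example below):
    
    [package]
    name = "mypackage"
    version = "0.1.0"
    metabuild = ["pkgc", "parsegen"]
    
    [build-dependencies]
    pkgc = "1.0"
    parsegen = "1.0"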

    When Cargo builds a crate that specifies a metabuild field, at the point when it would have built and run build.rs, it will instead invoke the metabuild() function from each of the specified crates in order.

    In effect, Cargo will act as though it had a build.rs file containing an extern crate line for each string, in order, as well as a main function that calls the metabuild function in each such crate, in order. For example, if the crate contains metabuild = ["pkgc", "parsegen"], then the effective build.rs will look like this:

    extern crate pkgc;
    extern crate parsegen;
    
    fn main() {
        pkgc::metabuild();
        parsegen::metabuild();
    }

    Note that the metabuild functions intentionally take no parameters; they should obtain any parameters they need from Cargo.toml. Various crates to parse Cargo.toml exist in the crates.io ecosystem.

    Also note that the metabuild functions do not return an error type; if they fail, they should panic.
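    
    A metabuild crate following these conventions might look like the following sketch (illustrative only; the manifest-parsing step is elided):
    
    use std::{env, fs, path::Path};
    
    pub fn metabuild() {
        // Build scripts are run with CARGO_MANIFEST_DIR set by Cargo.
        let dir = env::var("CARGO_MANIFEST_DIR").expect("must be run by Cargo");
        // Read the consumer's Cargo.toml, per the convention above.
        let manifest = fs::read_to_string(Path::new(&dir).join("Cargo.toml"))
            .expect("failed to read Cargo.toml");
        // Parse `manifest` with a TOML crate and drive the build from it;
        // panic on any failure, since no error type is returned.
        let _ = manifest;
    }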

    Future versions of this interface with higher integration into Cargo may incorporate ways for Cargo to pass pre-parsed data from Cargo.toml, or ways for the metabuild functions to return semantic error information. Metabuild interfaces may also wish to run scripts in parallel, provide dependencies between them, or orchestrate their execution in many other ways. This minimal specification allows for experimentation with such interfaces within the crates.io ecosystem, by providing an adapter from the raw metabuild interface.

    Reference-level explanation

    Cargo’s logic to invoke build.rs should check for the metabuild key, and if present, create and invoke a temporary build.rs as described above. For an initial implementation, Cargo can generate and cache that build.rs in the target directory when needed, alongside the built version of the script.

    For Cargo schema versioning, using the metabuild key will result in the crate requiring a sufficiently new version of Cargo to understand metabuild. This should start out as an unstable Cargo feature; in the course of experimentation and stabilization, the implementation of this feature may change, requiring adaptation of experimental build scripts.

    If any of the strings mentioned in metabuild do not match one of the build-dependencies, Cargo should produce an error (before attempting to generate and compile a build.rs script). However, if a string matches a conditional build-dependency, such as one conditional on a feature or target, then Cargo should only invoke that build-dependency’s metabuild function when those conditions apply.

    Cargo’s documentation on metabuild should recommend a preferred crate for parsing data from Cargo.toml, to avoid every provider of a metabuild function reimplementing it themselves.

    As we develop other best practices for the development and implementation of metabuild crates, we should extract and standardize common code for those practices as crates.

    Drawbacks

    While Cargo can change this interface arbitrarily while it is still unstable, once stabilized, Cargo will have to support it forever, even if we develop a new build/metabuild interface in the future.

    Rationale and Alternatives

    metabuild could always point to a single crate, and not support a list of crate names; a crate in the crates.io ecosystem could easily provide the “list of crate names” functionality, along with more advanced flows of information from one such crate to another. However, many simple cases will only want to invoke a list of crates in order, and handling that one case within Cargo will simplify initial experimentation while still allowing implementation of more complex logic via other crates in the crates.io ecosystem.

    metabuild() functions could take parameters, return errors, or make use of traits. However, this would require providing appropriate types and traits for all of those, as well as a helper crate providing those types and traits, and we do not yet know what interfaces we need or want. We propose experimenting via the crates.io ecosystem first, before considering such interfaces.

    Cargo could compile and run a separate build.rs-like script to run each metabuild function independently, rather than a single script that invokes all of them.

    We could avoid introducing an extensible mechanism, and instead introduce individual semantic build interfaces one-by-one within Cargo itself. However, this would drastically impair experimentation and development, and in particular this would make it more difficult to evaluate multiple potential approaches to any given piece of build functionality. Such an interface would also not provide an obvious path to support code generators.

    ⚠ This RFC has mostly been superseded ⚠

    This turned out to be more complicated than expected to detect while being intuitive to the programmer. As such, it’s expected that this problem space will be addressed with the inline consts from RFC 2920 instead, which have syntax to opt-in to the behaviour.

    However, the simpler case of [SOME_CONST_ITEM; N] was kept (stabilized in rust-lang/rust#49147).

    Summary

    Relaxes the rules for repeat expressions [x; N] such that x may also be const (strictly speaking, rvalue promotable), in addition to typeof(x): Copy. The result of [x; N] where x is const is itself also const.

    Motivation

    RFC 2000 (const_generics) introduced the ability to have generically sized arrays. Even with that RFC, it is currently impossible to create such an array that is also const. Creating an array that is const may, for example, be useful for the const_default RFC, which proposes the following trait:

    pub trait ConstDefault { const DEFAULT: Self; }

    With this RFC, an implementation of this trait can be added for arrays of any size whose element type T is ConstDefault, as in:

    impl<T: ConstDefault, const N: usize> ConstDefault for [T; N] {
        const DEFAULT: Self = [T::DEFAULT; N];
    }

    In the example given in the documentation of mem::uninitialized(), a value of type [Vec<u32>; 1000] is created and filled. With this RFC, and once Vec::new() becomes const, the user can simply write:

    let data = [Vec::<u32>::new(); 1000];
    println!("{:?}", &data[0]);

    This removes one common reason to use uninitialized(), which “is incredibly dangerous”.

    Guide-level explanation

    You have a variable or expression X which is const, for example:

    type T = Option<Box<u32>>;
    const X: T = None;

    Now, you’d like to use array repeat expressions [X; N] to create an array containing a bunch of Xes. Sorry, you are out of luck!

    But with this RFC, you can now write:

    const X: T = None;
    const ARR: [T; 100] = [X; 100];

    or, if you wish to modify the array later:

    const X: T = None;
    let mut arr = [X; 100];
    arr[0] = Some(Box::new(1));

    Reference-level explanation

    Values which are const are freely duplicatable, as seen in the following example, which compiles today. This is also the case with Copy values. Therefore, the value X in the repeat expression may simply be treated as if it were of a Copy type.

    fn main() {
        type T = Option<Box<u32>>;
        const X: T = None;
        let mut arr = [X, X];
        arr[0] = Some(Box::new(1));
    }

    Thus, the compiler may rewrite the following:

    fn main() {
        type T = Option<Box<u32>>;
        const X: T = None;
        let mut arr = [X; 2];
        arr[0] = Some(Box::new(1));
    }

    internally as:

    fn main() {
        type T = Option<Box<u32>>;
    
        // This is the value to be repeated.
        // In this case, a panic won't happen, but if it did, that panic
        // would happen during compile time at this point and not later.
        const X: T = None;
    
        let mut arr = {
            let mut data: [T; 2];
    
            unsafe {
                data = mem::uninitialized();
    
                let mut iter = (&mut data[..]).into_iter();
                while let Some(elem) = iter.next() {
                    // ptr::write does not run destructor of elem already in array.
                    // Since X is const, it can not panic at this point.
                    ptr::write(elem, X);
                }
            }
    
            data
        };
    
        arr[0] = Some(Box::new(1));
    }

    Additionally, the pass that checks constness must treat [expr; N] as a const value such that [expr; N] is assignable to a const item as well as permitted inside a const fn.
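    
    For example, the following would be accepted under this RFC (a sketch; the [CONST_ITEM; N] subset is the part that was later stabilized, per the note at the top):
    
    const EMPTY: Option<Box<u32>> = None;
    
    // A repeat expression of a const value is itself const...
    const BUFFER: [Option<Box<u32>>; 4] = [EMPTY; 4];
    
    // ...and is permitted inside a const fn.
    const fn make() -> [Option<Box<u32>>; 4] {
        [EMPTY; 4]
    }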

    Strictly speaking, the set of values permitted in the expression [expr; N] is those where is_rvalue_promotable(expr) holds or typeof(expr): Copy. Specifically, in [expr; N] the expression expr is evaluated:

    • never, if N == 0,
    • one time, if N == 1,
    • N times, otherwise.

    For values that are not freely duplicatable, evaluating expr will result in a move, which results in an error if expr is moved more than once (including moves outside of the repeat expression). These semantics are intentionally conservative and intended to be forward-compatible with a more expansive is_const(expr) check.

    Drawbacks

    It might make the semantics of array initializers more fuzzy. The RFC, however, argues that the change is quite intuitive.

    Rationale and alternatives

    The alternative, in addition to simply not doing this, is to make a host of other constructs const, such as mem::uninitialized(), for loops over iterators, and ptr::write, which is a larger change. The design offered by this RFC is therefore the simplest and most non-intrusive design. It is also the most consistent.

    Another alternative is to allow a more expansive set of values is_const(expr) rather than is_rvalue_promotable(expr). A consequence of this is that checking constness would be done earlier on the HIR. Instead, checking if expr is rvalue promotable can be done on the MIR and does not require significant changes to the compiler. If we decide to expand to is_const(expr) in the future, we may still do so as the changes proposed in this RFC are compatible with such future changes.

    The impact of not making this change is that generically sized arrays cannot be const, and that the use of mem::uninitialized continues to be encouraged.

    Unresolved questions

    There are no unresolved questions.

    Summary

    Add support for formatting integers as hexadecimal with the fmt::Debug trait, including when they occur within larger types.

    println!("{:02X?}", b"AZaz\0")
    [41, 5A, 61, 7A, 00]
    

    Motivation

    Sometimes the bits that make up an integer are more meaningful than its purely numerical value. For example, an RGBA color encoded in u32 with 8 bits per channel is easier to understand when shown as 00CC44FF than 13387007.
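    
    With the already-stable formatting traits this is easy for a plain integer:
    
    let rgba: u32 = 0x00CC44FF;
    assert_eq!(format!("{}", rgba), "13387007");     // Display: the numerical value
    assert_eq!(format!("{:08X}", rgba), "00CC44FF"); // UpperHex: the bits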

    The std::fmt::UpperHex and std::fmt::LowerHex traits provide hexadecimal formatting through {:X} and {:x} in formatting strings, but they’re only implemented for plain integer types and not other types like slices that might contain integers.

    The std::fmt::Debug trait (used with {:?}) however is intended for formatting “in a programmer-facing, debugging context”. It can be derived, and doing so is recommended for most types.

    This RFC proposes adding the missing combination of:

    • Output intended primarily for end-users (Display) vs. for programmers (Debug)
    • Numbers shown in decimal vs. hexadecimal

    Guide-level explanation

    In formatting strings like those in the format! and println! macros, the formatting parameters x or X (selecting lower-case or upper-case hexadecimal) can now be combined with ?, which selects the Debug trait.

    For example, format!("{:X?}", [65280].first()) returns Some(FF00).

    This can also be combined with other formatting parameters. For example, format!("{:02X?}", b"AZaz\0") zero-pads each byte to two hexadecimal digits and returns [41, 5A, 61, 7A, 00].

    An API returning Vec<u32> might be tested like this:

    let return_value = foo(bar);
    let expected = &[ /* ... */ ][..];
    assert!(return_value == expected, "{:08X?} != {:08X?}", return_value, expected);

    Reference-level explanation

    Formatting strings

    The syntax of formatting strings is specified with a grammar which at the moment is as follows:

    format_string := <text> [ maybe-format <text> ] *
    maybe-format := '{' '{' | '}' '}' | <format>
    format := '{' [ argument ] [ ':' format_spec ] '}'
    argument := integer | identifier
    
    format_spec := [[fill]align][sign]['#']['0'][width]['.' precision][type]
    fill := character
    align := '<' | '^' | '>'
    sign := '+' | '-'
    width := count
    precision := count | '*'
    type := identifier | ''
    count := parameter | integer
    parameter := argument '$'
    

    This RFC adds an optional radix immediately before type:

    format_spec := [[fill]align][sign]['#']['0'][width]['.' precision][radix][type]
    radix := 'x' | 'X'
    

    Formatter API

    Note that x and X are already valid types. They are only interpreted as a radix when the type is ?, since combining them with other types doesn’t make sense.

    This radix is exposed indirectly in two additional methods of std::fmt::Formatter:

    impl<'a> Formatter<'a> {
        // ...
    
        /// Based on the radix and type: 16, 10, 8, or 2.
        ///
        /// This is mostly useful in `Debug` impls,
        /// where the trait itself doesn’t imply a radix.
        fn number_radix(&self) -> u32
    
        /// true for `X` or `E`
        ///
        /// This is mostly useful in `Debug` impls,
        /// where the trait itself doesn’t imply a case.
        fn number_uppercase(&self) -> bool
    }

    Although the radix and type are separate in the formatting string grammar, they are intentionally conflated in this new API.

    Debug impls

    The Debug implementation for primitive integer types {u,i}{8,16,32,64,128,size} is modified to defer to LowerHex or UpperHex instead of Display, based on formatter.number_radix() and formatter.number_uppercase(). The alternate # flag is ignored, since it already has a separate meaning for Debug: the 0x prefix is not included.
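    
    The dispatch those impls would perform looks roughly like the following sketch, written as a free function because the real change lives inside std (number_radix and number_uppercase are the proposed Formatter methods from above):
    
    use std::fmt;
    
    fn fmt_int_debug(n: u32, radix: u32, uppercase: bool, f: &mut fmt::Formatter) -> fmt::Result {
        match (radix, uppercase) {
            // Radix 16 defers to the hexadecimal traits...
            (16, true) => fmt::UpperHex::fmt(&n, f),
            (16, false) => fmt::LowerHex::fmt(&n, f),
            // ...anything else falls back to the decimal Display path.
            _ => fmt::Display::fmt(&n, f),
        }
    }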

    As of Rust 1.22, impls using the Formatter::debug_* methods do not forward formatting parameters such as width when formatting keys/values/items. Doing so is important for this RFC to be useful. This is fixed by PR #46233.

    Drawbacks

    The hexadecimal flag in the Debug trait is superficially redundant with the LowerHex and UpperHex traits. If these traits were not stable yet, we could have considered a more unified design.

    Rationale and alternatives

    Implementing LowerHex and UpperHex was proposed and rejected in PR #44751.

    The status quo is that debugging or testing code that could be a one-liner requires manual Debug impls and/or concatenating the results of separate string formatting operations.

    Unresolved questions

    • Should this be extended to octal and binary (as {:o?} and {:b?})? Other formatting types/traits too?
    • Details of the new Formatter API

    Summary

    This RFC proposes that closure capturing should be minimal rather than maximal. Conceptually, existing rules regarding borrowing and moving disjoint fields should be applied to capturing. If implemented, the following code examples would become valid:

    let a = &mut foo.a;
    || &mut foo.b; // currently an error: cannot borrow `foo`
    somefunc(a);
    
    let a = &mut foo.a;
    move || foo.b; // currently an error: cannot move `foo`
    somefunc(a);

    Note that some discussion of this has already taken place.

    Motivation

    In the Rust language today, any variables named within a closure will be fully captured. This was simple to implement but is inconsistent with the rest of the language, because Rust normally allows simultaneous borrowing of disjoint fields. Remembering this exception adds to the mental burden of the programmer and makes the rules of borrowing and ownership harder to learn.

    The following is allowed; why should closures be treated differently?

    let _a = &mut foo.a;
    loop { &mut foo.b; } // ok!

    This is a particularly annoying problem because closures often need to borrow data from self:

    pub fn update(&mut self) {
        // cannot borrow `self` as immutable because `self.list` is also borrowed as mutable
        self.list.retain(|i| self.filter.allowed(i));
    }

    Guide-level explanation

    Rust understands structs sufficiently to know that it’s possible to borrow disjoint fields of a struct simultaneously. Structs can also be destructured and moved piece-by-piece. This functionality should be available anywhere, including from within closures:

    struct OneOf {
        text: String,
        of: Vec<String>,
    }
    
    impl OneOf {
        pub fn matches(self) -> bool {
            // Ok! destructure self
            self.of.into_iter().any(|s| s == self.text)
        }
    
        pub fn filter(&mut self) {
            // Ok! mutate and inspect self
            self.of.retain(|s| s != &self.text)
        }
    }

    Rust will prevent dangerous double usage:

    struct FirstDuplicated(Vec<String>);
    
    impl FirstDuplicated {
        pub fn first_count(self) -> usize {
            // Error! can't destructure and mutate same data
            self.0.into_iter()
                .filter(|s| s == &self.0[0])
                .count()
        }
    
        pub fn remove_first(&mut self) {
            // Error! can't mutate and inspect same data
            self.0.retain(|s| s != &self.0[0])
        }
    }

    Reference-level explanation

    This RFC does not propose any changes to the borrow checker. Instead, the MIR generation for closures should be altered to produce the minimal capture. Additionally, a hidden repr for closures might be added, which could reduce closure size through awareness of the new capture rules (see unresolved).

    In a sense, when a closure is lowered to MIR, a list of “capture expressions” is created, which we will call the “capture set”. Each expression is some part of the closure body which, in order to capture parts of the enclosing scope, must be pre-evaluated when the closure is created. The output of the expressions, which we will call “capture data”, is stored in the anonymous struct which implements the Fn* traits. If a binding is used within a closure, at least one capture expression which borrows or moves that binding’s value must exist in the capture set.

    Currently, lowering creates exactly one capture expression for each used binding, which borrows or moves the value in its entirety. This RFC proposes that lowering should instead create the minimal capture, where each expression is as precise as possible.

    This minimal set of capture expressions might be created through a sort of iterative refinement. We would start out capturing all of the local variables. Then, each path would be made more precise by adding additional dereferences and path components depending on which paths are used and how. References to structs would be made more precise by reborrowing fields and owned structs would be made more precise by moving fields.
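    
    As an illustration (Foo and its fields are hypothetical), a closure that mutates foo.a and reads foo.b would, under these rules, capture roughly as if the compiler generated a two-field environment:
    
    struct Foo { a: u32, b: String }
    
    fn example(foo: &mut Foo) {
        let mut c = || {
            foo.a += 1;   // capture expression: &mut foo.a
            foo.b.len()   // capture expression: &foo.b
        };
        c();
    }
    
    // Conceptual environment struct: two precise captures instead of `&mut foo`.
    struct Env<'a> {
        a: &'a mut u32,
        b: &'a String,
    }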

    A capture expression is minimal if it produces a value that is used by the closure in its entirety (e.g. is a primitive, is passed outside the closure, etc.) or if making the expression more precise would require one of the following:

    • a call to an impure function
    • an illegal move (for example, out of a Drop type)

    When generating a capture expression, we must decide if the output should be owned or if it can be a reference. In a non-move closure, a capture expression will only produce owned data if ownership of that data is required by the body of the closure. A move closure will always produce owned data unless the captured binding does not have ownership.

    Note that all functions are considered impure (including overloaded Deref implementations), and, for the sake of capturing, all indexing is considered impure. It is possible that overloaded Deref::deref implementations could be marked as pure via a new marker trait (such as DerefPure) or attribute (such as #[deref_transparent]). However, such a solution should be proposed in a separate RFC. In the meantime, <Box as Deref>::deref could be a special case of a pure function (see unresolved).

    Also note that, because capture expressions are all subsets of the closure body, this RFC does not change what is executed. It does change the order/number of executions for some operations, but since these must be pure, order/repetition does not matter. Only changes to lifetimes might be breaking. Specifically, the drop order of uncaptured data can be altered.

    We might solve this by considering a struct to be minimal if it contains unused fields that implement Drop. This would prevent the drop order of those fields from changing, but feels strange and non-orthogonal (see unresolved). Encountering this case at all could trigger a warning, so that this extra rule could exist temporarily but be removed over the next epoch (see unresolved).

    Reference Examples

    Below are examples of various closures and their capture sets.

    let foo = 10;
    || &mut foo;
    • &mut foo (primitive, ownership not required, used in entirety)
    
    let a = &mut foo.a;
    || (&mut foo.b, &mut foo.c);
    somefunc(a);
    • &mut foo.b (ownership not required, used in entirety)
    • &mut foo.c (ownership not required, used in entirety)

    The borrow checker passes because foo.a, foo.b, and foo.c are disjoint.

    let a = &mut foo.a;
    move || foo.b;
    somefunc(a);
    • foo.b (ownership available, used in entirety)

    The borrow checker passes because foo.a and foo.b are disjoint.

    let hello = &foo.hello;
    move || foo.drop_world.a;
    somefunc(hello);
    • foo.drop_world (ownership available, can’t be more precise without moving out of Drop)

    The borrow checker passes because foo.hello and foo.drop_world are disjoint.

    || println!("{}", foo.wrapper_thing.a);
    • &foo.wrapper_thing (ownership not required, can’t be more precise because overloaded Deref on wrapper_thing is impure)
    
    || foo.list[0];
    • foo.list (ownership required, can’t be more precise because indexing is impure)
    
    let bar = (1, 2); // struct
    || myfunc(bar);
    • bar (ownership required, used in entirety)
    
    let foo_again = &mut foo;
    || &mut foo.a;
    somefunc(foo_again);
    • &mut foo.a (ownership not required, used in entirety)

    The borrow checker fails because foo_again and foo.a intersect.

    let _a = foo.a;
    || foo.a;
    • foo.a (ownership required, used in entirety)

    The borrow checker fails because foo.a has already been moved.

    let a = &drop_foo.a;
    move || drop_foo.b;
    somefunc(a);
    • drop_foo (ownership available, can’t be more precise without moving out of Drop)

    The borrow checker fails because drop_foo cannot be moved while borrowed.

    || &box_foo.a;
    • &<Box<_> as Deref>::deref(&box_foo).a (ownership not required, Box::deref is pure)
    
    move || &box_foo.a;
    • box_foo (ownership available, can’t be more precise without moving out of Drop)
    
    let foo = &mut a;
    let other = &mut foo.other;
    move || &mut foo.bar;
    somefunc(other);
    • &mut foo.bar (ownership not available, borrow can be split)

    Drawbacks

    This RFC breaks the intuition that all variables named within a closure are completely captured. I argue that this intuition is not common or necessary enough to justify the extra glue code.

    Rationale and alternatives

    This proposal is purely ergonomic since there is a complete and common workaround. The existing rules could remain in place and rust users could continue to pre-borrow/move fields. However, this workaround results in significant useless glue code when borrowing many but not all of the fields in a struct. It also produces a larger closure than necessary which could make the difference when inlining.

    Unresolved questions

    • How to optimize pointers. Can borrows that all reference parts of the same object be stored as a single pointer? How should this optimization be implemented (e.g. a special repr, refinement typing)?

    • How to signal that a function is pure. Is this even needed/wanted? Any other places where the language could benefit?

    • Should Box be special?

    • Drop order can change as a result of this RFC, is this a real stability problem? How should this be resolved?

    Summary

    Provide a default implementation of the Error trait’s description() method, to save users the trouble of implementing this flawed method.

    Motivation

    The description() method is a waste of time for implementors and users of the Error trait. There’s high overlap between description and Display, which creates redundant implementation work and confusion about the relationship between these two ways of displaying the error.

    The description() method can’t easily return a formatted string with a per-instance error description. That’s a gotcha for novice users struggling with the borrow checker, and a gotcha for users trying to display the error, because description() is going to return a less informative message than the Display trait.

    Guide-level explanation

    Let’s steer users away from the description() method.

    1. Change the description() documentation to suggest use of the Display trait instead.
    2. Provide a default implementation of description() so that the Error trait can be implemented without worrying about this method.

    Reference-level explanation

    Users of the Error trait can then pretend this method does not exist.
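    
    A minimal sketch of what such a default might look like (the exact string returned is a design choice; see the alternatives below):
    
    use std::fmt::{Debug, Display};
    
    pub trait Error: Debug + Display {
        /// Now has a default body, so implementors may omit it entirely.
        fn description(&self) -> &str {
            // One candidate default; see Rationale and alternatives.
            "error"
        }
    
        // ... remaining methods (`cause`, etc.) unchanged ...
    }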

    Drawbacks

    When users start omitting bespoke description() implementations, code that still uses this method will start getting default strings instead of human-written description. If this becomes a problem, the description() method can also be formally deprecated (with the #[deprecated] attribute). However, there’s no urgency to remove existing implementations of description(), so this RFC does not propose formal deprecation at this time to avoid unnecessary warnings during the transition.

    Rationale and alternatives

    • Do nothing, and rely on 3rd party crates to improve usability of errors (e.g. various crates providing Error-implementing macros or the Fail trait).
    • The default message returned by description could be different.
      • it could be a hardcoded generic string, e.g. "error",
      • it could return core::intrinsics::type_name::<Self>(),
    • it could try to be nicer, e.g. use the type’s doc comment as the description, or convert the type name to a sentence (FileNotFoundError -> “error: file not found”).

    Unresolved questions

    None yet.

    Summary

    Expand the traits implemented by structs in the libc crate to include Debug, Eq, Hash, and PartialEq.

    Motivation

    This will allow downstream crates to easily support similar operations with any types they provide that contain libc structs. Additionally, the Rust API Guidelines specify that it is considered useful to implement as many traits from the standard library as possible. In order to facilitate following these guidelines, official Rust libraries should lead by example.

    For many of these traits, it is trivial for downstream crates to implement them for these types by using newtype wrappers. As a specific example, the nix crate offers the TimeSpec wrapper type around the timespec struct. This wrapper could easily implement Eq through comparing both fields in the struct.
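    
    A sketch of that pattern (not nix’s actual code):
    
    use libc::timespec;
    
    pub struct TimeSpec(timespec);
    
    impl PartialEq for TimeSpec {
        fn eq(&self, other: &Self) -> bool {
            // Compare both fields of the underlying C struct.
            self.0.tv_sec == other.0.tv_sec && self.0.tv_nsec == other.0.tv_nsec
        }
    }
    
    impl Eq for TimeSpec {}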

    Unfortunately there are a great many structs that are large and vary widely between platforms. Some of these in use by nix are dqblk, utsname, and statvfs. These structs have fields and field types that vary across platforms. As nix aims to support as many platforms as libc does, this variation makes implementing these traits manually on wrapper types time consuming and error prone.

    Guide-level explanation

    Add an extra_traits feature to the libc library that enables Debug, Eq, Hash, and PartialEq implementations for all structs.

    Reference-level explanation

    The Debug, Eq/PartialEq, and Hash traits will be added as automatic derives within the s! macro in src/macros.rs if the corresponding feature flag is enabled (a sketch of this approach follows the list below). This won’t work for some types, because auto-derive doesn’t work for arrays larger than 32 elements, so for those types the traits will be implemented manually. For libc as of bbda50d20937e570df5ec857eea0e2a098e76b2d on x86_64-unknown-linux-gnu, this many structs will need manual implementations:

    • Debug - 17
    • Eq/PartialEq - 46
    • Hash - 17
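    
    A simplified sketch of the macro approach referenced above (libc’s real s! macro is more involved):
    
    macro_rules! s {
        ($($item:item)*) => {
            $(
                // Only derive the extra traits when the feature is enabled.
                #[cfg_attr(feature = "extra_traits",
                           derive(Debug, Eq, Hash, PartialEq))]
                $item
            )*
        };
    }
    
    s! {
        #[allow(non_camel_case_types)]
        #[repr(C)]
        pub struct example_struct { pub a: i32, pub b: u64 }
    }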

    Drawbacks

    While most structs will be able to derive these implementations automatically, some will not (for example arrays larger than 32 elements). This will make it harder to add some structs to libc.

    These extra traits will increase the testing requirements for libc.

    Rationale and alternatives

    Adding these trait implementations behind a single feature flag has the best combination of utility and ergonomics out of the possible alternatives listed below:

    Always enabled with no feature flags

    This was regarded as unsuitable because it increases compilation times by 100-200%. Compilation times of libc were tested at commit bbda50d20937e570df5ec857eea0e2a098e76b2d, with modifications to add derives for the traits discussed here under the extra_traits feature (and no other features). Some types failed to have these traits derived because of specific fields, so these were removed from the struct declaration. The table below shows the compilation times:

    Build arguments                                                                              Time
    cargo clean && cargo build --no-default-features                                             0.84s
    cargo clean && cargo build --no-default-features --features extra_traits                     2.17s
    cargo clean && cargo build --no-default-features --release                                   0.64s
    cargo clean && cargo build --no-default-features --release --features extra_traits           1.80s
    cargo clean && cargo build --no-default-features --features use_std                          1.14s
    cargo clean && cargo build --no-default-features --features use_std,extra_traits             2.34s
    cargo clean && cargo build --no-default-features --release --features use_std                0.66s
    cargo clean && cargo build --no-default-features --release --features use_std,extra_traits   1.94s

    Default-on feature

    For crates that are more than one level above libc in the dependency chain, it will be impossible to opt out. This could also happen with a default-off feature flag, but it’s more likely that library authors will expose it as a flag as well.

    Multiple feature flags

    Instead of having a single extra_traits feature, have it and feature flags for each trait individually like:

    • trait_debug - Enables Debug for all structs
    • trait_eq - Enables Eq and PartialEq for all structs
    • trait_hash - Enables Hash for all structs
    • extra_traits - Enables all of the above through dependent features

    This change should reduce compilation times when not all traits are desired. The downside is that it complicates CI. It can be added in a backwards-compatible manner later, should compilation times or consumer demand change.

    Unresolved questions
    
    None.

    Summary

    Finalize syntax of impl Trait and dyn Trait with multiple bounds before stabilization of these features.

    Motivation

    The current priority of + in impl Trait1 + Trait2 / dyn Trait1 + Trait2 brings inconsistency into the type grammar. This RFC outlines possible syntactic alternatives and suggests one of them for stabilization.

    Guide-level explanation

    “Alternative 2” (see reference-level explanation) is selected for stabilization.

    impl Trait1 + Trait2 / dyn Trait1 + Trait2 now require parentheses in all contexts where they are used inside of unary operators &(impl Trait1 + Trait2) / &(dyn Trait1 + Trait2), similarly to trait object types without prefix, e.g. &(Trait1 + Trait2).

    Additionally, parentheses are required in all cases where + in impl or dyn is ambiguous. For example, Fn() -> impl A + B can be interpreted as both (Fn() -> impl A) + B (low priority plus) or Fn() -> (impl A + B) (high priority plus), so we are refusing to disambiguate and require explicit parentheses.

    Reference-level explanation

    Current situation

    In the current implementation when we see impl or dyn we start parsing following bounds separated by +s greedily regardless of context, so + effectively gets the strongest priority.

    So, for example:

    • &dyn A + B is parsed as &(dyn A + B)
    • Fn() -> impl A + B is parsed as Fn() -> (impl A + B)
    • x as &dyn A + y is parsed as x as &(dyn A + y).

    Compare this with parsing of trait object types without prefixes (RFC 438):

    • &A + B is parsed as (&A) + B and is an error
    • Fn() -> A + B is parsed as (Fn() -> A) + B
    • x as &A + y is parsed as (x as &A) + y

    Also compare with unary operators in bounds themselves:

    • for<'a> A<'a> + B is parsed as (for<'a> A<'a>) + B, not for<'a> (A<'a> + B)
    • ?A + B is parsed as (?A) + B, not ?(A + B)

    In general, binary operations like + have lower priority than unary operations in all contexts - expressions, patterns, types. So the priorities as implemented bring inconsistency and may break intuition.

    Alternative 1: high priority + (status quo)

    Pros:

    • The greedy parsing with high priority of + after impl / dyn has one benefit - it requires the least amount of parentheses from all the alternatives. Parentheses are needed only when the greedy behaviour needs to be prevented, e.g. Fn() -> &(dyn Write) + Send, this doesn’t happen often.

    Cons:

    • Inconsistent and possibly surprising operator priorities.
    • impl / dyn is a somewhat weird syntactic construction; it’s not a usual unary operator, but a prefix describing how to interpret the following tokens. In particular, if impl A + B needs to be parenthesized for some reason, it needs to be written as (impl A + B), not impl (A + B). The second variant is a parsing error, but some people find that surprising and expect it to work, as if impl were a unary operator.

    Alternative 2: low priority +

    Basically, impl A + B is parsed using same rules as A + B.

    If impl A + B is located inside a higher priority operator like &, it has to be parenthesized. If it is located at the intersection of the type and expression grammars, like expr1 as Type + expr2, it has to be parenthesized as well.

    &dyn A + B / Fn() -> impl A + B / x as &dyn A + y has to be rewritten as &(dyn A + B) / Fn() -> (impl A + B) / x as &(dyn A + y) respectively.

    One location must be mentioned specially, the location in a function return type:

    fn f() -> impl A + B {
        // Do things
    }

    This is probably the most common location for impl Trait types. In theory, it doesn’t require parentheses in any way: it’s not inside of a unary operator and it doesn’t cross expression boundaries. However, it creates a bit of perceived inconsistency with function-like traits and function pointers, which do require parentheses for impl Trait in return types (Fn() -> (impl A + B) / fn() -> (impl A + B)) because they, in their turn, can appear inside of unary operators and casts. So, if avoiding this inconsistency is considered more important than ergonomics, then we can require parentheses in function definitions as well.

    fn f() -> (impl A + B) {
        // Do things
    }

    Pros:

    • Consistent priorities of binary and unary operators.
    • Parentheses are required relatively rarely (unless we require them in function definitions as well).

    Cons:

    • More parentheses than in the “Alternative 1”.
    • impl / dyn is still a somewhat weird prefix construction and dyn (A + B) is not a valid syntax.

    Alternative 3: Unary operator

    impl and dyn can become usual unary operators in type grammar like & or *const. Their application to any other types except for (possibly parenthesized) paths (single A) or “legacy trait objects” (A + B) becomes an error, but this could be changed in the future if some other use is found.

    &dyn A + B / Fn() -> impl A + B / x as &dyn A + y has to be rewritten as &dyn(A + B) / Fn() -> impl(A + B) / x as &dyn(A + y) respectively.

    Function definitions with impl A + B in return type have to be rewritten too.

    fn f() -> impl(A + B) {
        // Do things
    }

    Pros:

    • Consistent priorities of binary and unary operators.
    • impl / dyn are usual unary operators, dyn (A + B) is a valid syntax.

    Cons:

    • The largest amount of parentheses, parentheses are always required. Parentheses are noise, there may be even less desire to use dyn in trait objects now, if something like Box<Write + Send> turns into Box<dyn(Write + Send)>.

    Other alternatives

    Two separate grammars can be used depending on context (https://github.com/rust-lang/rfcs/pull/2250#issuecomment-352435687) - Alternative 1/2 in lists of arguments like Box<dyn A + B> or Fn(impl A + B, impl A + B), and Alternative 3 otherwise (&dyn (A + B)).

    Compatibility

    The alternatives are ordered by strictness from the most relaxed Alternative 1 to the strictest Alternative 3, but switching from more strict alternatives to less strict is not exactly backward-compatible.

    Switching from 2/3 to 1 can change the meaning of legal code in rare cases. Switching from 3 to 2/1 requires keeping around the syntax with parentheses after impl / dyn.

    Alternative 2 can be backward-compatibly extended to a “relaxed 3”, in which parentheses like dyn (A + B) are permitted but technically unnecessary. Such parentheses may keep people who expect dyn (A + B) to work happy, but they complicate parsing by introducing more ambiguities into the grammar.

    While unary operators like & “obviously” have higher priority than +, cases like Fn() -> impl A + B are not so obvious. Alternative 2 considers the “low priority plus” to have lower priority than Fn, so Fn() -> impl A + B could be treated as (Fn() -> impl A) + B. However, it may be more intuitive, and more consistent with fn items, to give + higher priority than Fn (but still lower priority than &). As an immediate solution, we refuse to disambiguate this case and treat Fn() -> impl A + B as an error, so that we can change the rules in the future and interpret Fn() -> impl A + B (and maybe even Fn() -> A + B, after a long deprecation period) as Fn() -> (impl A + B) (and Fn() -> (A + B), respectively).

    Experimental check

    An application of all the alternatives to rustc and libstd codebase can be found in this branch. The first commit is the baseline (Alternative 1) and the next commits show changes required to move to Alternatives 2 and 3. Alternative 2 requires fewer changes compared to Alternative 3.

    As the RFC author interprets it, Alternative 3 turns out to be impractical due to the common use of Boxes and other contexts where the parentheses are technically unnecessary but required by Alternative 3. The number of parentheses required by Alternative 2 is limited, and they seem appropriate because they follow the “normal” priorities for unary and binary operators.

    Drawbacks

    See above.

    Rationale and alternatives

    See above.

    Unresolved questions

    None.

    Summary

    Allow overriding profile keys for certain dependencies, as well as providing a way to set profiles in .cargo/config

    Motivation

    Currently the “stable” way to tweak build parameters like “debug symbols”, “debug assertions”, and “optimization level” is to edit Cargo.toml.

    This file is typically checked in to the tree, so for many projects overriding things involves making temporary changes to it, which feels hacky. On top of this, if Cargo is being called by an encompassing build system, as happens in Firefox, these changes can seem surprising.

    This also doesn’t allow for much customization. For example, when trying to optimize for compilation speed by building in debug mode, build scripts will get built in debug mode as well. In case of complex build-time dependencies like bindgen, this can end up significantly slowing down compilation. It would be nice to be able to say “build in debug mode, but build build dependencies in release”. Also, your program may have large dependencies that it doesn’t use in critical paths, being able to ask for just these dependencies to be run in debug mode would be nice.

    Guide-level explanation

    Currently, the Cargo guide has a section on this.

    We amend this to add that you can override dependency configurations via profile.foo.overrides:

    [profile.dev]
    opt-level = 0
    debug = true
    
    # the `image` crate will be compiled with -Copt-level=3
    [profile.dev.overrides.image]
    opt-level = 3
    
    # All dependencies (but not this crate itself) will be compiled
    # with -Copt-level=2. This includes build dependencies.
    [profile.dev.overrides."*"]
    opt-level = 2
    
    # Build scripts and their dependencies will be compiled with -Copt-level=3
    # By default, build scripts use the same rules as the rest of the profile
    [profile.dev.build_override]
    opt-level = 3
    

    Additionally, profiles may be listed in .cargo/config. When building, cargo will calculate the current profile, and if it has changed, it will do a fresh/clean build.
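
    The RFC does not pin down the exact syntax for this; a hypothetical sketch of such a .cargo/config section, mirroring the Cargo.toml form above, might be:

    # .cargo/config (hypothetical sketch)
    [profile.dev]
    opt-level = 1
    
    [profile.dev.overrides."*"]
    opt-level = 2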

    Reference-level explanation

    In the case of overlapping rules, the precedence order is that overrides.foo wins over overrides."*", and both win over build_override.

    So if you specify build_override, it will not affect the compilation of any dependencies which are both build dependencies and regular dependencies. If you have

    [profile.dev]
    opt-level = 0
    [profile.dev.build_override]
    opt-level = 3
    

    and the image crate is both a build dependency and a regular dependency, it will be compiled as per the top-level opt-level=0 rule. If you wish it to be compiled as per the build_override rule, use a normal override rule:

    [profile.dev]
    opt-level = 0
    [profile.dev.build_override]
    opt-level = 3
    [profile.dev.overrides.image]
    opt-level = 3
    

    This clash may not occur whilst cross-compiling, since two separate versions of the crate will be compiled. (This RFC leaves the decision of whether or not to handle this up to the implementors.)

    It is not possible to have the same crate compiled in different modes as a build dependency and a regular dependency within the same profile when not cross compiling. (This is a current limitation in Cargo, but it would be nice if we could fix this)

    Put succinctly, build_override is not able to affect anything compiled into the final binary.

    cargo build --target foo will fail to run if foo clashes with the name of a profile, so avoid giving profiles the same name as possible build targets.

    When in a workspace, "*" will apply to all dependencies that are not workspace members; you can explicitly apply things to workspace members with [profile.dev.overrides.membername].
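
    For example, given a hypothetical workspace with a member named member_a and an external dependency rand:

    # Applies to `rand` and all other non-member dependencies
    [profile.dev.overrides."*"]
    opt-level = 2
    
    # Workspace members must be named explicitly
    [profile.dev.overrides.member_a]
    opt-level = 1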

    The panic key cannot be specified in an override; only in the top level of a profile. Rust does not allow the linking together of crates with different panic settings.
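
    A sketch of what this allows and forbids (the image crate is illustrative):

    [profile.dev]
    panic = "abort"       # allowed: top level of a profile
    
    [profile.dev.overrides.image]
    # panic = "unwind"    # rejected: `panic` cannot appear in an override
    opt-level = 3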

    Drawbacks

    This complicates Cargo.

    Rationale and alternatives

    There are really two or three concerns here:

    • A stable interface for setting various profile keys (cargo rustc -- -Clto is not good, for example, and doesn’t integrate into Cargo’s target directories)
    • The ability to use a different profile for build scripts (usually, the ability to flip optimization modes; I don’t think folks care as much about -g in build scripts)
    • The ability to use a different profile for specific dependencies

    The first one can be resolved partially by stabilizing cargo arguments for overriding these. It doesn’t fix the target directory issue, but that might not be a major concern. Allowing profiles to come from .cargo/config is another minimal solution to this for use cases like Firefox, which wraps Cargo in another build system.

    The second one can be fixed with a specific build-scripts = release key for profiles.

    The third can’t be as easily fixed; however, it’s not clear if that’s a major need.

    The nice thing about this proposal is that it is able to handle all three of these concerns. However, separate RFCs for separate features could be introduced as well.

    In general there are plans for Cargo to support other build systems by making it more modular (so that you can ask it for a build plan and then execute it yourself). Such build systems would be able to provide the ability to override profiles themselves instead. It’s unclear if the general Rust community needs the ability to override profiles.

    Unresolved questions

    • Bikeshedding the naming of the keys
    • The current proposal provides a way to say “special-case all build dependencies, even if they are regular dependencies as well”, but not “special-case all build-only dependencies” (which can be solved with a !build_override thing, but that’s weird and unwieldy)
    • It would be nice to have a way for crates to declare that they use a particular panic mode (something like allow-panic=all vs allow-panic=abort/allow_panic=unwind, with all as default) so that they can assume a panic mode and cargo will refuse to compile them with anything else

    Summary

    Introduce the bound form MyTrait<AssociatedType: Bounds>, permitted anywhere a bound of the form MyTrait<AssociatedType = T> would be allowed. The bound T: Trait<AssociatedType: Bounds> desugars to the bounds T: Trait and <T as Trait>::AssociatedType: Bounds. See the reference and rationale for exact details.

    Motivation

    Currently, when specifying a bound using a trait that has an associated type, the developer can specify the precise type via the syntax MyTrait<AssociatedType = T>. With the introduction of the impl Trait syntax for static-dispatch existential types, this syntax also permits MyTrait<AssociatedType = impl Bounds>, as a shorthand for introducing a new type variable and specifying those bounds.

    However, this introduces an unnecessary level of indirection that does not match the developer’s intuition and mental model as well as it could. In particular, given the ability to write bounds on a type variable as T: Bounds, it makes sense to permit writing bounds on an associated type directly. This results in the simpler syntax MyTrait<AssociatedType: Bounds>.

    Guide-level explanation

    Instead of specifying a concrete type for an associated type, we can specify a bound on the associated type, to ensure that it implements specific traits, as seen in the example below:

    fn print_all<T: Iterator<Item: Display>>(printables: T) {
        for p in printables {
            println!("{}", p);
        }
    }

    In anonymous existential types

    fn printables() -> impl Iterator<Item: Display> {
        // ..
    }

    Further examples

    Instead of writing:

    impl<I> Clone for Peekable<I>
    where
        I: Clone + Iterator,
        <I as Iterator>::Item: Clone,
    {
        // ..
    }

    you may write:

    impl<I> Clone for Peekable<I>
    where
        I: Clone + Iterator<Item: Clone>
    {
        // ..
    }

    or replace the where clause entirely:

    impl<I: Clone + Iterator<Item: Clone>> Clone for Peekable<I> {
        // ..
    }

    Reference-level explanation

    The surface syntax T: Trait<AssociatedType: Bounds> should desugar to a pair of bounds: T: Trait and <T as Trait>::AssociatedType: Bounds. Rust already allows both of those bounds anywhere a bound can appear; the new syntax does not introduce any new semantics.
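
    For instance, the print_all example from the guide can already be written today in this desugared form:

    use std::fmt::Display;
    
    fn print_all<T>(printables: T)
    where
        T: Iterator,
        <T as Iterator>::Item: Display,
    {
        for p in printables {
            println!("{}", p);
        }
    }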

    Additionally, the surface syntax impl Trait<AssociatedType: Bounds> turns into a named type variable T, universal or existential depending on context, with the usual bound T: Trait along with the added bound <T as Trait>::AssociatedType: Bounds.

    Meanwhile, the surface syntax dyn Trait<AssociatedType: Bounds> desugars into dyn Trait<AssociatedType = T>, where T is a named type variable with the bound T: Bounds.

    The desugaring for associated types

    In the case of an associated type having a bound of the form:

    trait TraitA {
        type AssocA: TraitB<AssocB: TraitC>;
    }

    we desugar to an anonymous associated type for AssocB, which corresponds to:

    trait TraitA {
        type AssocA: TraitB<AssocB = Self::AssocA_0>;
        type AssocA_0: TraitC; // Associated type is Unnamed!
    }

    Notes on the meaning of impl Trait<Assoc: Bound>

    Note that in the context -> impl Trait<Assoc: Bound>, since the Trait is existentially quantified, the Assoc is as well. Semantically, the earlier fn printables example is equivalent to:

    fn printables() -> impl Iterator<Item = impl Display> { .. }

    For arg: impl Trait<Assoc: Bound>, it is semantically equivalent to: arg: impl Trait<Assoc = impl Bound>.

    Meaning of existential type Foo: Trait<Assoc: Bound>

    Given:

    existential type Foo: Trait<Assoc: Bound>;
    

    it can be seen as the same as:

    existential type Foo: Trait<Assoc = _0>;
    existential type _0: Bound;

    This syntax is specified in RFC 2071. As in that RFC, this documentation uses the non-final syntax for existential type aliases.

    Drawbacks

    Rust code can already express this using the desugared form. This proposal just introduces a simpler surface syntax that parallels other uses of bounds. As always, when introducing new syntactic forms, an increased burden is put on developers to know about and understand those forms, and this proposal is no different. However, we believe that the parallel to the use of bounds elsewhere makes this new syntax immediately recognizable and understandable.

    Rationale and alternatives

    As with any new surface syntax, one alternative is simply not introducing the syntax at all. That would still leave developers with the MyTrait<AssociatedType = impl Bounds> form. However, allowing the more direct bounds syntax provides a better parallel to the use of bounds elsewhere. The introduced form in this RFC is comparatively both shorter and clearer.

    An alternative desugaring of bounds on associated types

    An alternative desugaring of the following definition:

    trait TraitA {
        type AssocA: TraitB<AssocB: TraitC>;
    }

    is to add the where clause, as specified above, to the trait, desugaring to:

    trait TraitA
    where
        <Self::AssocA as TraitB>::AssocB: TraitC,
    {
        type AssocA: TraitB;
    }

    However, at the time of this writing, a Rust compiler will treat this differently than the desugaring proposed in the reference. The following snippet illustrates the difference:

    trait Foo where <Self::Bar as Iterator>::Item: Copy {
        type Bar: Iterator;
    }
    
    trait Foo2 {
        type Bar: Iterator<Item = Self::BarItem>;
        type BarItem: Copy;
    }
    
    fn use_foo<X: Foo>(arg: X)
    where <X::Bar as Iterator>::Item: Copy
    // ^-- Remove this line and it will error with:
    // error[E0277]: `<<X as Foo>::Bar as std::iter::Iterator>::Item` doesn't implement `Copy`
    {
        let item: <X::Bar as Iterator>::Item;
    }
    
    fn use_foo2<X: Foo2>(arg: X) {
        let item: <X::Bar as Iterator>::Item;
    }

    The desugaring with a where clause therefore becomes problematic from a usability perspective.

    However, RFC 2089, Implied Bounds specifies that desugaring to the where clause in the trait will permit the use_foo function to omit its where clause. This entails that both desugarings become equivalent from the point of view of a user. The desugaring with where therefore becomes viable in the presence of RFC 2089.

    Unresolved questions

    • Does allowing this for dyn trait objects introduce any unforeseen issues? This can be resolved during stabilization.

    • The exact desugaring in the context of putting bounds on an associated type of a trait is left unresolved. The semantics should however be preserved. This is also the case with other desugarings in this RFC.

    Summary

    Allow if let guards in match expressions.

    Motivation

    This feature would greatly simplify some logic where we must match a pattern iff some value computed from the match-bound values has a certain form, where said value may be costly or impossible (due to affine semantics) to recompute in the match arm.

    For further motivation, see the example in the guide-level explanation. Absent this feature, we might rather write the following:

    match ui.wait_event() {
        KeyPress(mod_, key, datum) =>
            if let Some(action) = intercept(mod_, key) { act(action, datum) }
            else { accept!(KeyPress(mod_, key, datum)) /* can't re-use event verbatim if `datum` is non-`Copy` */ }
        ev => accept!(ev),
    }

    accept may in general be lengthy and inconvenient to move into another function, for example if it refers to many locals.

    Here is an (incomplete) example taken from a real codebase, to respond to ANSI CSI escape sequences:

    #[inline]
    fn csi_dispatch(&mut self, parms: &[i64], ims: &[u8], ignore: bool, x: char) {
        match x {
            'C' => if let &[n] = parms { self.screen.move_x( n as _) }
                   else { log_debug!("Unknown CSI sequence: {:?}, {:?}, {:?}, {:?}",
                                     parms, ims, ignore, x) },
            'D' => if let &[n] = parms { self.screen.move_x(-n as _) }
                   else { log_debug!("Unknown CSI sequence: {:?}, {:?}, {:?}, {:?}",
                                     parms, ims, ignore, x) },
            'J' => self.screen.erase(match parms {
                &[] |
                &[0] => Erasure::ScreenFromCursor,
                &[1] => Erasure::ScreenToCursor,
                &[2] => Erasure::Screen,
                _ => { log_debug!("Unknown CSI sequence: {:?}, {:?}, {:?}, {:?}",
                                  parms, ims, ignore, x); return },
            }, false),
            'K' => self.screen.erase(match parms {
                &[] |
                &[0] => Erasure::LineFromCursor,
                &[1] => Erasure::LineToCursor,
                &[2] => Erasure::Line,
                _ => { log_debug!("Unknown CSI sequence: {:?}, {:?}, {:?}, {:?}",
                                  parms, ims, ignore, x); return },
            }, false),
            'm' => match parms {
                &[] |
                &[0] => *self.screen.def_attr_mut() = Attr { fg_code: 0, fg_rgb: [0xFF; 3],
                                                             bg_code: 0, bg_rgb: [0x00; 3],
                                                             flags: AttrFlags::empty() },
                &[n] => if let (3, Some(rgb)) = (n / 10, color_for_code(n % 10, 0xFF)) {
                    self.screen.def_attr_mut().fg_rgb = rgb;
                } else {
                    log_debug!("Unknown CSI sequence: {:?}, {:?}, {:?}, {:?}",
                               parms, ims, ignore, x);
                },
                _ => log_debug!("Unknown CSI sequence: {:?}, {:?}, {:?}, {:?}",
                                parms, ims, ignore, x),
            },
            _ => log_debug!("Unknown CSI sequence: {:?}, {:?}, {:?}, {:?}",
                            parms, ims, ignore, x),
        }
    }

    These examples are both clearer with if let guards as follows. Particularly in the latter example, in the author’s opinion, the control flow is easier to follow.

    Guide-level explanation

    (Adapted from Rust book)

    A match guard is an if let condition, specified after the pattern in a match arm, that must also match (in addition to the pattern itself matching) in order for that arm to be chosen. Match guards are useful for expressing more complex ideas than a pattern alone allows.

    The condition can use variables created in the pattern, and the match arm can use any variables bound in the if let pattern (as well as any bound in the match pattern, unless the if let expression moves out of them).

    Let us consider an example which accepts a user-interface event (e.g. key press, pointer motion) and follows one of two paths: either we intercept it and take some action, or we deal with it normally (whatever that might mean here):

    match ui.wait_event() {
        KeyPress(mod_, key, datum) if let Some(action) = intercept(mod_, key) => act(action, datum),
        ev => accept!(ev),
    }

    Here is another example, to respond to ANSI CSI escape sequences:

    #[inline]
    fn csi_dispatch(&mut self, parms: &[i64], ims: &[u8], ignore: bool, x: char) {
        match x {
            'C' if let &[n] = parms => self.screen.move_x( n as _),
            'D' if let &[n] = parms => self.screen.move_x(-n as _),
            _ if let Some(e) = erasure(x, parms) => self.screen.erase(e, false),
            'm' => match parms {
                &[] |
                &[0] => *self.screen.def_attr_mut() = Attr { fg_code: 0, fg_rgb: [0xFF; 3],
                                                             bg_code: 0, bg_rgb: [0x00; 3],
                                                             flags: AttrFlags::empty() },
                &[n] if let (3, Some(rgb)) = (n / 10, color_for_code(n % 10, 0xFF)) =>
                    self.screen.def_attr_mut().fg_rgb = rgb,
                _ => log_debug!("Unknown CSI sequence: {:?}, {:?}, {:?}, {:?}",
                                parms, ims, ignore, x),
            },
            _ => log_debug!("Unknown CSI sequence: {:?}, {:?}, {:?}, {:?}",
                            parms, ims, ignore, x),
        }
    }
    
    #[inline]
    fn erasure(x: char, parms: &[i64]) -> Option<Erasure> {
        match x {
            'J' => match parms {
                &[] |
                &[0] => Some(Erasure::ScreenFromCursor),
                &[1] => Some(Erasure::ScreenToCursor),
                &[2] => Some(Erasure::Screen),
                _ => None,
            },
            'K' => match parms {
                &[] |
                &[0] => Some(Erasure::LineFromCursor),
                &[1] => Some(Erasure::LineToCursor),
                &[2] => Some(Erasure::Line),
                _ => None,
            },
            _ => None,
        }
    }

    Reference-level explanation

    This proposal would introduce syntax for a match arm: pat if let guard_pat = guard_expr => body_expr with semantics so the arm is chosen iff the argument of match matches pat and guard_expr matches guard_pat. The variables of pat are bound in guard_expr, and the variables of pat and guard_pat are bound in body_expr. The syntax is otherwise the same as for if guards. (Indeed, if guards become effectively syntactic sugar for if let guards.)

    An arm may not have both an if and an if let guard.
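
    As a sketch of the semantics in today’s Rust (all names below are illustrative, not from this RFC), an arm pat if let guard_pat = guard_expr => body_expr can be approximated by nesting, at the cost of duplicating the fallback:

    fn parse(s: &str) -> Result<i32, ()> {
        s.parse().map_err(|_| ())
    }
    
    fn demo(input: Option<&str>) -> i32 {
        // With this RFC:
        //     match input {
        //         Some(s) if let Ok(x) = parse(s) => x,
        //         _ => -1,
        //     }
        // In today's Rust:
        match input {
            Some(s) => match parse(s) {
                Ok(x) => x,
                _ => -1, // fallback duplicated on guard failure
            },
            _ => -1,
        }
    }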

    Drawbacks

    • It further complicates the grammar.
    • It is ultimately syntactic sugar, but the transformation to present Rust is potentially non-obvious.

    Rationale and alternatives

    • The chief alternatives are to rewrite the guard as an if guard and a bind in the match arm, or in some cases into the argument of match; or to write the if let in the match arm and copy the rest of the match into the else branch — what can be done with this syntax can already be done in Rust (to the author’s knowledge); this proposal is purely ergonomic, but in the author’s opinion, the ergonomic win is significant.
    • The proposed syntax feels natural by analogy to the if guard syntax we already have, as between if and if let expressions. No alternative syntaxes were considered.

    Unresolved questions

    Questions in scope of this proposal: none yet known

    Questions out of scope:

    • Should we allow multiple guards? This proposal allows only a single if let guard. One can combine if guards with &&; an RFC to allow && in if let expressions already exists, so we may want to follow that in the future for if let guards as well.
    • What happens if guard_expr moves out of pat but fails to match? This is already a question for if guards and (to the author’s knowledge) not formally specified anywhere — this proposal (implicitly) copies that behavior.

    Summary

    Generalize the WTF-8 encoding to allow OsStr to use the pattern API methods.

    Motivation

    OsStr is missing many common string methods compared to the standard str or even [u8]. There have been numerous attempts to expand the API surface, the latest one being RFC #1309, which led to an attempt to revamp the std::pattern::Pattern API that was eventually closed due to inactivity and lack of resources.

    Over the past several years, there have been numerous requests and attempts to implement these missing functions, in particular OsStr::starts_with (1, 2, 3, 4, 5, 6).

    The main difficulty in applying str APIs to OsStr is WTF-8. A surrogate pair (e.g. U+10000 = d800 dc00) is encoded as a 4-byte sequence (f0 90 80 80) similar to UTF-8, but an unpaired surrogate (e.g. U+D800 alone) is encoded as a completely distinct 3-byte sequence (ed a0 80). Naively extending the slice-based pattern API will not work: you cannot find any ed a0 80 inside f0 90 80 80, so .starts_with() is going to be more complex, and .split() certainly cannot borrow a well-formed WTF-8 slice from it.

    The solution proposed by RFC #1309 is to create two sets of APIs. One set, .contains_os(), .starts_with_os(), .ends_with_os() and .replace(), which do not require borrowing, will support using &OsStr as input. The rest, like .split(), .matches() and .trim(), which require borrowing, will only accept UTF-8 strings as input.

    The “pattern 2.0” API does not split into two sets of APIs, but will panic when the search string starts with or ends with an unpaired surrogate.

    We feel that these designs are not elegant enough. This RFC attempts to fix the problem by going one level lower: it generalizes WTF-8 so that splitting a surrogate pair is allowed, letting us search an OsStr with an OsStr through a single Pattern API without panicking.

    Guide-level explanation

    The following new methods are now available on OsStr. They behave the same as their counterparts on str.

    impl OsStr {
        pub fn contains<'a, P>(&'a self, pat: P) -> bool
        where
            P: Pattern<&'a Self>;
    
        pub fn starts_with<'a, P>(&'a self, pat: P) -> bool
        where
            P: Pattern<&'a Self>;
    
        pub fn ends_with<'a, P>(&'a self, pat: P) -> bool
        where
            P: Pattern<&'a Self>,
            P::Searcher: ReverseSearcher<&'a Self>;
    
        pub fn find<'a, P>(&'a self, pat: P) -> Option<usize>
        where
            P: Pattern<&'a Self>;
    
        pub fn rfind<'a, P>(&'a self, pat: P) -> Option<usize>
        where
            P: Pattern<&'a Self>,
            P::Searcher: ReverseSearcher<&'a Self>;
    
        /// Finds the first range of this string which contains the pattern.
        ///
        /// # Examples
        ///
        /// ```rust
        /// let path = OsStr::new("/usr/bin/bash");
        /// let range = path.find_range("/b");
        /// assert_eq!(range, Some(4..6));
        /// assert_eq!(&path[range.unwrap()], OsStr::new("/b"));
        /// ```
        pub fn find_range<'a, P>(&'a self, pat: P) -> Option<Range<usize>>
        where
            P: Pattern<&'a Self>;
    
        /// Finds the last range of this string which contains the pattern.
        ///
        /// # Examples
        ///
        /// ```rust
        /// let path = OsStr::new("/usr/bin/bash");
        /// let range = path.rfind_range("/b");
        /// assert_eq!(range, Some(8..10));
        /// assert_eq!(&path[range.unwrap()], OsStr::new("/b"));
        /// ```
        pub fn rfind_range<'a, P>(&'a self, pat: P) -> Option<Range<usize>>
        where
            P: Pattern<&'a Self>,
            P::Searcher: ReverseSearcher<&'a Self>;
    
        // (Note: these should return a concrete iterator type instead of `impl Trait`.
        //  For ease of explanation the concrete type is not listed here.)
        pub fn split<'a, P>(&'a self, pat: P) -> impl Iterator<Item = &'a Self>
        where
            P: Pattern<&'a Self>;
    
        pub fn rsplit<'a, P>(&'a self, pat: P) -> impl Iterator<Item = &'a Self>
        where
            P: Pattern<&'a Self>,
            P::Searcher: ReverseSearcher<&'a Self>;
    
        pub fn split_terminator<'a, P>(&'a self, pat: P) -> impl Iterator<Item = &'a Self>
        where
            P: Pattern<&'a Self>;
    
        pub fn rsplit_terminator<'a, P>(&'a self, pat: P) -> impl Iterator<Item = &'a Self>
        where
            P: Pattern<&'a Self>,
            P::Searcher: ReverseSearcher<&'a Self>;
    
        pub fn splitn<'a, P>(&'a self, n: usize, pat: P) -> impl Iterator<Item = &'a Self>
        where
            P: Pattern<&'a Self>;
    
        pub fn rsplitn<'a, P>(&'a self, n: usize, pat: P) -> impl Iterator<Item = &'a Self>
        where
            P: Pattern<&'a Self>,
            P::Searcher: ReverseSearcher<&'a Self>;
    
        pub fn matches<'a, P>(&'a self, pat: P) -> impl Iterator<Item = &'a Self>
        where
            P: Pattern<&'a Self>;
    
        pub fn rmatches<'a, P>(&'a self, pat: P) -> impl Iterator<Item = &'a Self>
        where
            P: Pattern<&'a Self>,
            P::Searcher: ReverseSearcher<&'a Self>;
    
        pub fn match_indices<'a, P>(&'a self, pat: P) -> impl Iterator<Item = (usize, &'a Self)>
        where
            P: Pattern<&'a Self>;
    
        pub fn rmatch_indices<'a, P>(&'a self, pat: P) -> impl Iterator<Item = (usize, &'a Self)>
        where
            P: Pattern<&'a Self>,
            P::Searcher: ReverseSearcher<&'a Self>;
    
        // this is new
        pub fn match_ranges<'a, P>(&'a self, pat: P) -> impl Iterator<Item = (Range<usize>, &'a Self)>
        where
            P: Pattern<&'a Self>;
    
        // this is new
        pub fn rmatch_ranges<'a, P>(&'a self, pat: P) -> impl Iterator<Item = (Range<usize>, &'a Self)>
        where
            P: Pattern<&'a Self>,
            P::Searcher: ReverseSearcher<&'a Self>;
    
        pub fn trim_matches<'a, P>(&'a self, pat: P) -> &'a Self
        where
            P: Pattern<&'a Self>,
            P::Searcher: DoubleEndedSearcher<&'a Self>;
    
        pub fn trim_left_matches<'a, P>(&'a self, pat: P) -> &'a Self
        where
            P: Pattern<&'a Self>;
    
        pub fn trim_right_matches<'a, P>(&'a self, pat: P) -> &'a Self
        where
            P: Pattern<&'a Self>,
            P::Searcher: ReverseSearcher<&'a Self>;
    
        pub fn replace<'a, P>(&'a self, from: P, to: &'a Self) -> Self::Owned
        where
            P: Pattern<&'a Self>;
    
        pub fn replacen<'a, P>(&'a self, from: P, to: &'a Self, count: usize) -> Self::Owned
        where
            P: Pattern<&'a Self>;
    }

    We also allow slicing an OsStr.

    impl Index<RangeFull> for OsStr { ... }
    impl Index<RangeFrom<usize>> for OsStr { ... }
    impl Index<RangeTo<usize>> for OsStr { ... }
    impl Index<Range<usize>> for OsStr { ... }

    Example:

    // (assume we are on Windows)
    
    let path = OsStr::new(r"C:\Users\Admin\😀\😁😂😃😄.txt");
    // can use starts_with, ends_with
    assert!(path.starts_with(OsStr::new(r"C:\")));
    assert!(path.ends_with(OsStr::new(".txt")));
    // can use rfind_range to get the range of substring
    let last_backslash = path.rfind_range(OsStr::new(r"\")).unwrap();
    assert_eq!(last_backslash, 19..20);
    // can perform slicing.
    let file_name = &path[last_backslash.end..];
    // can perform splitting, even if it results in invalid Unicode!
    let mut parts = file_name.split(&*OsString::from_wide(&[0xd83d]));
    assert_eq!(parts.next(), Some(OsStr::new("")));
    assert_eq!(parts.next(), Some(&*OsString::from_wide(&[0xde01])));
    assert_eq!(parts.next(), Some(&*OsString::from_wide(&[0xde02])));
    assert_eq!(parts.next(), Some(&*OsString::from_wide(&[0xde03])));
    assert_eq!(parts.next(), Some(&*OsString::from_wide(&[0xde04, 0x2e, 0x74, 0x78, 0x74])));
    assert_eq!(parts.next(), None);

    Reference-level explanation

    It is trivial to apply the pattern API to OsStr on platforms where it is just a [u8]. The main difficulty is on Windows, where it is a [u16] encoded as WTF-8. This RFC thus focuses on Windows.

    We will generalize the encoding of OsStr to “OMG-WTF-8” which specifies these two capabilities:

    1. Slicing a surrogate pair in half:

      let s = OsStr::new("\u{10000}");
      assert_eq!(&s[..2], &*OsString::from_wide(&[0xd800]));
      assert_eq!(&s[2..], &*OsString::from_wide(&[0xdc00]));
    2. Finding a surrogate code point, whether paired or unpaired:

      let needle = OsString::from_wide(&[0xdc00]);
      assert_eq!(OsStr::new("\u{10000}").find(&needle), Some(2));
      assert_eq!(OsString::from_wide(&[0x3f, 0xdc00]).find(&needle), Some(1));

    These allow us to implement the “Pattern 1.5” API for all OsStr without panicking. Implementation details can be found in the omgwtf8 package.

    Slicing

    A surrogate pair is a 4-byte sequence in both UTF-8 and WTF-8. We support slicing it in half by representing the high surrogate by the first 3 bytes, and the low surrogate by the last 3 bytes.

    "\u{10000}"      = f0 90 80 80
    "\u{10000}"[..2] = f0 90 80
    "\u{10000}"[2..] =    90 80 80
    

    The index splitting the surrogate pair will be positioned at the middle of the 4-byte sequence (index “2” in the above example).

    Note that this means:

    1. x[..i] and x[i..] will have overlapping parts. This makes OsStr::split_at_mut (if it existed) unable to split a surrogate pair in half. This also means Pattern<&mut OsStr> cannot be implemented for &OsStr.
    2. The length of x[..n] may be longer than n.

    Platform-agnostic guarantees

    If an index points to an invalid position (e.g. "\u{1000}"[1..], "\u{10000}"[1..] or "\u{10000}"[3..]), a panic will be raised, similar to str. The following are guaranteed to be valid positions on all platforms:

    • 0.
    • self.len().
    • The returned indices from find(), rfind(), match_indices() and rmatch_indices().
    • The returned ranges from find_range(), rfind_range(), match_ranges() and rmatch_ranges().

    Index arithmetic is wrong for OsStr, i.e. i + n may not produce the correct index (see Drawbacks).

    For WTF-8 encoding on Windows, we define:

    • boundary of a character or surrogate byte sequence is Valid.
    • middle (byte 2) of a 4-byte sequence is Valid.
    • interior of a 2- or 3-byte sequence is Invalid.
    • byte 1 or 3 of a 4-byte sequence is Invalid.

    Outside of Windows, where the OsStr consists of arbitrary bytes, all numbers within 0 ..= self.len() are considered valid indices. This is because we want to allow os_str.find(OsStr::from_bytes(b"\xff")), and thus cannot use UTF-8 to reason about a Unix OsStr.

    Note that we have never guaranteed the actual OsStr encoding; these should only be considered an implementation detail.

    Comparison and storage

    All OsStr strings with a sliced 4-byte sequence can be converted back to proper WTF-8 with an O(1) transformation:

    • If the string starts with [\x80-\xbf]{3}, replace these 3 bytes with the canonical low surrogate encoding.
    • If the string ends with [\xf0-\xf4][\x80-\xbf]{2}, replace these 3 bytes with the canonical high surrogate encoding.

    We call this transformation “canonicalization”.

    All owned OsStr should be canonicalized to contain well-formed WTF-8 only: Box<OsStr>, Rc<OsStr>, Arc<OsStr> and OsString.

    Two OsStr are compared equal if they have the same canonicalization. This may slightly reduce performance by a constant overhead, since there would be more checking involving the first and last three bytes.
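
    For instance, continuing the earlier Windows examples (a sketch under this RFC):

    let s = OsStr::new("\u{10000}");         // stored as f0 90 80 80
    let hi = OsString::from_wide(&[0xd800]); // canonical form: ed a0 80
    // `&s[..2]` is the raw prefix f0 90 80; it canonicalizes to ed a0 80,
    // so the two compare equal:
    assert_eq!(&s[..2], &*hi);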

    Matching

    When an OsStr is used for matching, an unpaired low surrogate at the beginning or an unpaired high surrogate at the end must be replaced by regular expressions that match all pre-canonicalization possibilities. For instance, matching for xxxx\u{d9ab} would create the following regex:

    xxxx(
        \xed\xa6\xab        # canonical representation
    |
        \xf2\x86[\xb0-\xbf] # split representation
    )
    

    and matching for \u{dcef}xxxx will create the following regex:

    (
        \xed\xb3\xaf                        # canonical representation
    |
        [\x80-\xbf][\x83\x93\xa3\xb3]\xaf   # split representation
    )xxxx
    

    After finding a match, if the end points to the middle of a 4-byte sequence, the search engine should move backward by 2 bytes before continuing. This ensures that searching for \u{dc00}\u{d800} in \u{10000}\u{10000}\u{10000} will properly yield 2 matches.

    Pattern API

    As of Rust 1.25, we can search a &str using a character, a character set or another string, powered by RFC #528 a.k.a. “Pattern API 1.0”.

    There are some drafts to generalize this so that we could retain mutability and search in more types such as &[T] and &OsStr, as described in various comments (“v1.5” and “v2.0”). A proper RFC has not been proposed so far.

    This RFC assumes that the goal of generalizing the Pattern API beyond &str is accepted, enabling us to provide a uniform search API between different types of haystacks and needles. However, this RFC does not rely on a generalized Pattern API. If this RFC is stabilized without one, the new methods described in the Guide-level explanation section can take &OsStr instead of impl Pattern<&OsStr>, but this may hurt future compatibility due to inference breakage if a generalized Pattern API is indeed implemented.

    Assuming we do want to generalize Pattern API, the implementor should note the issue of splitting a surrogate pair:

    1. A match which starts with a low surrogate will point to byte 1 of the 4-byte sequence
    2. An index always point to byte 2 of the 4-byte sequence
    3. A match which ends with a high surrogate will point to byte 3 of the 4-byte sequence

    Implementations should note these different offsets when converting between different kinds of cursors. In the omgwtf8::pattern module, based on the “v1.5” draft, this behavior is enforced in the API design by using distinct types for the start and end cursors.

    The following outlines the generalized Pattern API which could work for &OsStr:

    // in module `core::pattern`:
    
    pub trait Pattern<H: Haystack>: Sized {
        type Searcher: Searcher<H>;
        fn into_searcher(self, haystack: H) -> Self::Searcher;
        fn is_contained_in(self, haystack: H) -> bool;
        fn is_prefix_of(self, haystack: H) -> bool;
        fn is_suffix_of(self, haystack: H) -> bool where Self::Searcher: ReverseSearcher<H>;
    }
    
    pub trait Searcher<H: Haystack> {
        fn haystack(&self) -> H;
        fn next_match(&mut self) -> Option<(H::StartCursor, H::EndCursor)>;
        fn next_reject(&mut self) -> Option<(H::StartCursor, H::EndCursor)>;
    }
    
    pub trait ReverseSearcher<H: Haystack>: Searcher<H> {
        fn next_match_back(&mut self) -> Option<(H::StartCursor, H::EndCursor)>;
        fn next_reject_back(&mut self) -> Option<(H::StartCursor, H::EndCursor)>;
    }
    
    pub trait DoubleEndedSearcher<H: Haystack>: ReverseSearcher<H> {}
    
    // equivalent to SearchPtrs in "Pattern API 1.5"
    // and PatternHaystack in "Pattern API 2.0"
    pub trait Haystack: Sized {
        type StartCursor: Copy + PartialOrd<Self::EndCursor>;
        type EndCursor: Copy + PartialOrd<Self::StartCursor>;
    
        // The following 5 methods are same as those in "Pattern API 1.5"
        // except the cursor type is split into two.
        fn cursor_at_front(hs: &Self) -> Self::StartCursor;
        fn cursor_at_back(hs: &Self) -> Self::EndCursor;
        unsafe fn start_cursor_to_offset(hs: &Self, cur: Self::StartCursor) -> usize;
        unsafe fn end_cursor_to_offset(hs: &Self, cur: Self::EndCursor) -> usize;
        unsafe fn range_to_self(hs: Self, start: Self::StartCursor, end: Self::EndCursor) -> Self;
    
        // And then we want to swap between the two cursor types
        unsafe fn start_to_end_cursor(hs: &Self, cur: Self::StartCursor) -> Self::EndCursor;
        unsafe fn end_to_start_cursor(hs: &Self, cur: Self::EndCursor) -> Self::StartCursor;
    }

    For the &OsStr haystack, we define both StartCursor and EndCursor as *const u8.

    The start_to_end_cursor function will return cur + 2 if we find that cur points to the middle of a 4-byte sequence.

    The start_cursor_to_offset function will return cur - hs + 1 if we find that cur points to the middle of a 4-byte sequence.

    These type-safety measures ensure that functions utilizing a generic Pattern can get the correctly overlapping slices when splitting a surrogate pair.

    // (actual code implementing `.split()`)
    match self.matcher.next_match() {
        Some((a, b)) => unsafe {
            let haystack = self.matcher.haystack();
            let a = H::start_to_end_cursor(&haystack, a);
            let b = H::end_to_start_cursor(&haystack, b);
            let elt = H::range_to_self(haystack, self.start, a);
            // ^ without `start_to_end_cursor`, the slice `elt` may be short by 2 bytes
            self.start = b;
            // ^ without `end_to_start_cursor`, the next starting position may skip 2 bytes
            Some(elt)
        },
        None => self.get_end(),
    }

    Drawbacks

    • It breaks the invariant x[..n].len() == n.

      Note that OsStr does not currently provide a slicing operator, and it already violates the invariant (x + y).len() == x.len() + y.len() (see the sketch after this list).

    • A surrogate code point may be 2 or 3 indices long depending on context.

      This means code using x[i..(i+n)] may give the wrong result.

      let needle = OsString::from_wide(&[0xdc00]);
      let haystack = OsStr::new("\u{10000}a");
      let index = haystack.find(&needle).unwrap();
      let matched = &haystack[index..(index + needle.len())];
      // `matched` will contain "\u{dc00}a" instead of "\u{dc00}".

      As a workaround, we introduced find_range and match_ranges. Note that this is already a problem we would need to solve if we want to make Regex a pattern for strings.
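
    As an aside, the length invariant mentioned in the first drawback can already be observed in today’s Rust on Windows (using from_wide as in the earlier examples):

    use std::ffi::OsString;
    use std::os::windows::ffi::OsStringExt;
    
    fn main() {
        // Each lone surrogate occupies 3 bytes in WTF-8, but pushing one
        // onto the other pairs them up into a single 4-byte sequence:
        let mut x = OsString::from_wide(&[0xd800]);
        let y = OsString::from_wide(&[0xdc00]);
        assert_eq!(x.len() + y.len(), 6);
        x.push(&y);
        assert_eq!(x.len(), 4); // `x` now encodes "\u{10000}"
    }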

    Rationale and alternatives

    Indivisible surrogate pair

    This RFC is the only design which allows borrowing a sub-slice of a surrogate code point from a surrogate pair.

    An alternative is to keep using the vanilla WTF-8 and treat a surrogate pair as an atomic entity, making it impossible to split a surrogate pair after it is formed. The advantages are that

    • The pattern API becomes a simple substring search.
    • Slicing behavior is consistent with str.

    There are two potential implementations when we want to match with an unpaired surrogate:

    1. Declare that a surrogate pair does not contain the unpaired surrogate, i.e. make "\u{10000}".find("\u{d800}") return None. An unpaired surrogate can only be used to match another unpaired surrogate.

      If we choose this, it means x.find(z).is_some() does not imply (x + y).find(z).is_some().

    2. Disallow matching when the pattern contains an unpaired surrogate at the boundary, i.e. make "\u{10000}".find("\u{d800}") panic. This is the approach chosen by “Pattern API 2.0”.

    Note that, for consistency, we need to make "\u{10000}".starts_with("\u{d800}") return false or panic.

    Slicing at real byte offset

    The current RFC defines the index that splits a surrogate pair in half at byte 2 of the 4-byte sequence. This has the drawback that "\u{10000}"[..2].len() == 3, and it causes index arithmetic to be wrong.

    "\u{10000}"      = f0 90 80 80
    "\u{10000}"[..2] = f0 90 80
    "\u{10000}"[2..] =    90 80 80
    

    The main advantage of this scheme is that we could use the same number as both the start index and the end index.

    let s = OsStr::new("\u{10000}");
    assert_eq!(s.len(), 4);
    let index = s.find('\u{dc00}').unwrap();
    let right = &s[index..];  // [90 80 80]
    let left = &s[..index];   // [f0 90 80]

    An alternative is to make the index refer to the real byte offsets:

    "\u{10000}"      = f0 90 80 80
    "\u{10000}"[..3] = f0 90 80
    "\u{10000}"[1..] =    90 80 80
    

    However, the question would be: what should s[..1] do?

    • Panic — but this means we cannot get left. We could inspect the raw bytes of s itself and perform &s[..(index + 2)], but we have never explicitly exposed the encoding of OsStr, so we cannot read a single byte, making this impossible.

    • Treat it the same as s[..3] — but then this inherits all the disadvantages of using 2 as a valid index, plus we need to consider whether s[1..3] and s[3..1] should be valid.

    Given these, we decided not to treat the real byte offsets as valid indices.

    Unresolved questions

    None yet.

    Summary

    This RFC proposes the addition of Option::replace to complement the Option::take method; it replaces the actual value in the option with the given value, wrapped in Some, returning the old value if present, without deinitializing either one.

    Motivation

    You can see the Option as a container, and other containers already have this kind of method to change a value in place, like the HashMap::replace method.

    How do you replace a value inside an Option? You can use mem::replace, but it can be really inconvenient to import the mem module just for that. Why not add a useful method to do that?

    This is symmetric to the already present Option::take method.
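
    The expected behavior, shown as a usage sketch:

    fn main() {
        let mut opt = Some(1);
        assert_eq!(opt.replace(2), Some(1)); // old value returned
        assert_eq!(opt, Some(2));            // new value stored
    
        let mut none: Option<i32> = None;
        assert_eq!(none.replace(3), None);
        assert_eq!(none, Some(3));
    }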

    Detailed design

    This method will be added to the core::option::Option type implementation:

    use core::mem;
    
    impl<T> Option<T> {
        // ...
    
        pub fn replace(&mut self, value: T) -> Option<T> {
            mem::replace(self, Some(value))
        }
    }

    Drawbacks

    It increases the size of the standard library by a tiny bit.

    The addition of this method could be a breaking change in the case of an already implemented method on the Option enum with the replace name (i.e. a trait defining the replace method that has been implemented on the Option type).

    This method's behavior could be misinterpreted as: update the Option only if the variant is Some, doing nothing if it is None. Such a method could exist too and be named map_in_place or modify; no method with this kind of behavior exists in the Rust std library yet.

    Alternatives

    • Don’t use the replace name and use give instead, in symmetry with the existing take method.
    • Use directly mem::replace.

    Unresolved questions

    None.

    Summary

    Add a repetition specifier to macros to repeat a pattern at most once: $(pat)?. Here, ? behaves like + or * but represents at most one repetition of pat.

    Motivation

    There are two specific use cases in mind.

    Macro rules with optional parts

    Currently, you just have to write two rules and possibly have one “desugar” to the other.

    macro_rules! foo {
      (do $b:block) => {
        $b
      };
      (do $b1:block and $b2:block) => {
        foo!(do $b1);
        $b2
      }
    }

    Under this RFC, one would simply write:

    macro_rules! foo {
      (do $b1:block $(and $b2:block)?) => {
        $b1
        $($b2)?
      }
    }

    Trailing commas

    Currently, the best way to make a rule tolerate trailing commas is to create another identical rule that has a comma at the end:

    macro_rules! foo {
      ($($pat:expr),+,) => { foo!( $($pat),+ ) };
      ($($pat:expr),+) => {
        // do stuff
      }
    }

    or to allow multiple trailing commas:

    macro_rules! foo {
      ($($pat:expr),+ $(,)*) => {
        // do stuff
      }
    }

    This is unergonomic and clutters up macro definitions needlessly. Under this RFC, one would simply write:

    macro_rules! foo {
      ($($pat:expr),+ $(,)?) => {
        // do stuff
      }
    }

    Guide-level explanation

    In Rust macros, you specify some “rules” which define how the macro is used and what it transforms to. For each rule, there is a pattern and a body:

    macro_rules! foo {
      (pattern) => { body }
    }

    The pattern portion is composed of zero or more subpatterns concatenated together. One possible subpattern is to repeat another subpattern some number of times. This is useful when writing variadic macros (e.g. println):

    macro_rules! println {
      // Takes a variable number of arguments after the template
      ($template:expr, $($args:expr),*) => { ... }
    }

    which can be invoked like so:

    println!("")           // 0 args
    println!("", foo)      // 1 args
    println!("", foo, bar) // 2 args
    ...

    The * in the pattern of this example indicates “0 or more repetitions”. One can also use + for “at least one repetition” or ? for “at most one repetition”.

    In the body of a rule, one can specify that some code be repeated for every occurrence of the pattern in the invocation:

    macro_rules! foo {
      ($($pat:expr),*) => {
        $(
          println!("{}", $pat)
        )* // Repeat for each `expr` passed to the macro
      }
    }

    The same can be done for + and ?.

    The ? operator is particularly useful for making macro rules with optional components in the invocation or for making macros tolerate trailing commas.
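
    For example, a variadic macro made trailing-comma tolerant with a single rule (a sketch under this proposal; the macro itself is illustrative):

    macro_rules! sum {
        ($($x:expr),+ $(,)?) => { 0 $(+ $x)+ };
    }
    
    fn main() {
        assert_eq!(sum!(1, 2, 3), 6);  // without trailing comma
        assert_eq!(sum!(1, 2, 3,), 6); // trailing comma accepted via `$(,)?`
    }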

    Reference-level explanation

    ? is identical to + and * in use except that it represents “at most once” repetition.

    Introducing ? into the grammar for macro repetition introduces an easily fixable ambiguity, as noted by @kennytm here:

    There is ambiguity: $($x:ident)?+ today matches a?b?c and not a+. Fortunately this is easy to resolve: you just look one more token ahead and always treat ?* and ?+ to mean separate by the question mark token.

    Drawbacks

    While there are grammar ambiguities, they can be easily fixed.

    Also, for patterns that use *, ? is not a perfect solution: $(pat),* $(,)? still allows a bare , (when there are zero occurrences of pat), which is a bit weird. However, this is still an improvement over $(pat),* $(,)* which allows ,,,,,.

    Rationale and Alternatives

    The implementation of ? ought to be very similar to + and *. Only the parser needs to change; to the author’s knowledge, it would not be technically difficult to implement, nor would it add much complexity to the compiler.

    The ? character is chosen because

    • As noted above, there are grammar ambiguities, but they can be easily fixed
    • It is consistent with common regex syntax, as are + and *
    • It intuitively expresses “this pattern is optional”

    One alternative to alleviate the trailing comma paper cut is to allow trailing commas automatically for any pattern repetitions. This would be a breaking change. Also, it would allow trailing commas in potentially unwanted places. For example:

    macro_rules! foo {
      ($($pat:expr),*; $(foo),*) => {
        $(
          println!("{}", $pat)
        )* // Repeat for each `expr` passed to the macro
      }
    }

    would allow

    foo! {
      x,; foo
    }

    Also, rather than have ? be a repetition operator, we could have the compiler do a “copy/paste” of the rule and insert the optional pattern. Implementation-wise, this might reuse less code than the proposal. Also, it’s probably less easy to teach; this RFC is very easy to teach because ? is another operator like + or *.

    We could use another symbol other than ?, but it’s not clear what other options might be better. ? has the advantage of already being known in common regex syntax as “optional”.

    It has also been suggested to add {M, N} (at least M but no more than N) either in addition to or as an alternative to ?. Like ?, {M, N} is common regex syntax and has the same implementation difficulty level. However, it’s not clear how useful such a pattern would be. In particular, we can’t think of any other language that includes this sort of “partially-variadic” argument list. It is also questionable why one would want to syntactically repeat some piece of code between M and N times. Thus, this RFC does not propose to add {M, N} at this time (though we note that it is forward-compatible).

    Finally, we could do nothing and wait for macros 2.0. However, it will be a while (possibly years) before that lands in stable Rust, and the current implementation and proposals are not very well-defined yet. Having something in the meantime would be nice to fix this paper cut. This proposal does not add a lot of complexity, but does nicely fill the gap.

    Unresolved Questions

    • Should the ? Kleene operator accept a separator? Adding a separator is completely meaningless (since we don’t accept trailing separators, and ? can accept “at most one” repetition), but allowing it is consistent with + and *. Currently, we allow a separator. We could also make it an error or lint.

    Summary

    The special Self identifier is now permitted in struct, enum, and union type definitions. A simple example is:

    enum List<T>
    where
        Self: PartialOrd<Self> // <-- Notice the `Self` instead of `List<T>`
    {
        Nil,
        Cons(T, Box<Self>) // <-- And here.
    }

    Motivation

    Removing exceptions and making the language more uniform

    The contextual identifier Self can already be used in type context, for example when defining what an associated type is for a particular type, as well as for generic parameters in impls, as in:

    trait Foo<T = Self> {
        type Bar;
    
        fn wibble<U>() where Self: Sized;
    }
    
    struct Quux;
    
    impl Foo<Self> for Quux {
        type Bar = Self;
    
        fn wibble<U>() where Self: Sized {}
    }

    But this is not currently possible inside either the fields or the where clauses of type definitions. This makes the language less consistent, with respect to what is allowed in type positions, than it could be.

    Principle of least surprise

    Users new to the language and experts alike have a reasonable expectation that using Self inside type definitions is in fact already possible, because Self already works in other places where a type is expected. If a user attempts to use Self in a type definition today, that attempt will fail, breaking the user's intuition of the language's semantics. Avoiding that breakage will reduce the paper cuts newcomers face when using the language. It will also allow the community to focus on answering more important questions.

    Better ergonomics with smaller edit distances

    When you have complex recursive enums with many variants and generic types and want to rename a type parameter or the type itself, renaming and refactoring the type definitions would be easier if you did not have to make changes in the variant fields which mention the type. This can be helped by IDEs to some extent, but you do not always have such IDEs, and even then, the readability of using Self is superior to repeating the type in variants and fields, since it is a more distinct visual cue that can be highlighted specially.

    Encouraging descriptively named types, type variables, and more generic code

    Making it simpler and more ergonomic to have longer type names and more generic parameters in type definitions can also encourage using more descriptive identifiers for both the type and the type variables used. It may also encourage more generic code altogether.

    Guide-level explanation

    An Obligatory Public Service Announcement: When reading this RFC, keep in mind that these lists are only examples. Always consider if you really need to use linked lists!

    We will now go through a few examples of what you can and can’t do with this RFC.

    Simple example

    Let’s look at a simple cons-list of u8s. Before this RFC, you had to write:

    enum U8List {
        Nil,
        Cons(u8, Box<U8List>)
    }

    But with this RFC, you can now instead write:

    enum U8List {
        Nil,
        Cons(u8, Box<Self>) // <-- Notice 'Self' here
    }

    If you had written this example with Self without this RFC, the compiler would have greeted you with:

    error[E0411]: cannot find type `Self` in this scope
     --> src/main.rs:3:18
      |
    3 |     Cons(u8, Box<Self>) // <-- Notice 'Self' here
      |                  ^^^^ `Self` is only available in traits and impls
    

    With this RFC, the compiler will no longer emit this error.

    This new way of writing with Self can be thought of as literally desugaring to the way it is written in the example before it. This also extends to generic types (non-nullary type constructors) that are recursive.

    With generic type parameters

    Continuing with the cons lists, let’s take a look at how the canonical linked-list example can be rewritten using this RFC.

    We start off with:

    enum List<T> {
        Nil,
        Cons(T, Box<List<T>>)
    }

    With this RFC, the snippet above can be rewritten as:

    enum List<T> {
        Nil,
        Cons(T, Box<Self>) // <-- Notice 'Self' here
    }

    Notice in particular how we used just Self for both U8List and List<T>. This applies to types with any number of parameters, including those that are parameterized by lifetimes.

    Examples with lifetimes

    An example of this can be seen in the following cons list:

    enum StackList<'a, T: 'a> {
        Nil,
        Cons(T, &'a StackList<'a, T>)
    }

    which is rewritten with this RFC as:

    enum StackList<'a, T: 'a> {
        Nil,
        Cons(T, &'a Self) // <-- Still using just 'Self'
    }

    Structs and unions

    You can also use Self in structs as in:

    struct NonEmptyList<T> {
        head: T,
        tail: Option<Box<NonEmptyList<T>>>,
    }

    which is written with this RFC as:

    struct NonEmptyList<T> {
        head: T,
        tail: Option<Box<Self>>,
    }

    This also extends to unions.

    where-clauses

    In today’s Rust, it is possible to define a type such as:

    struct Foo<T>
    where
        Foo<T>: SomeTrait
    {
        // Some fields..
    }

    and with some impls:

    trait SomeTrait { ... }
    
    impl SomeTrait for Foo<u32> { ... }
    impl SomeTrait for Foo<String> { ... }

    this idiom bounds the types that the type parameter T can be instantiated with, but also avoids defining an Auxiliary trait with which one bounds T, as in:

    struct Foo<T: Auxiliary> {
        // Some fields..
    }

    You could also have the type on the right hand side of the bound in the where clause as in:

    struct Bar<T>
    where
        T: PartialEq<Bar<T>>
    {
        // Some fields..
    }

    with this RFC, you can now redefine Foo<T> and Bar<T> as:

    struct Foo<T>
    where
        Self: SomeTrait // <-- Notice `Self`!
    {
        // Some fields..
    }
    
    struct Bar<T>
    where
        T: PartialEq<Self> // <-- Notice `Self`!
    {
        // Some fields..
    }

    This makes the bounds involving Self slightly clearer.

    When Self can not be used

    Consider the following small expression language:

    trait Ty { type Repr: ::std::fmt::Debug; }
    
    #[derive(Debug)]
    struct Int;
    impl Ty for Int { type Repr = usize; }
    
    #[derive(Debug)]
    struct Bool;
    impl Ty for Bool { type Repr = bool; }
    
    #[derive(Debug)]
    enum Expr<T: Ty> {
        Lit(T::Repr),
        Add(Box<Expr<Int>>, Box<Expr<Int>>),
        If(Box<Expr<Bool>>, Box<Expr<T>>, Box<Expr<T>>),
    }
    
    fn main() {
        let expr: Expr<Int> =
            Expr::If(
                Box::new(Expr::Lit(true)),
                Box::new(Expr::Lit(1)),
                Box::new(Expr::Add(
                    Box::new(Expr::Lit(1)),
                    Box::new(Expr::Lit(1))
                ))
            );
        println!("{:#?}", expr);
    }

    You may perhaps reach for this:

    #[derive(Debug)]
    enum Expr<T: Ty> {
        Lit(T::Repr),
        Add(Box<Self>, Box<Self>),
        If(Box<Self>, Box<Self>, Box<Self>),
    }

    But you have now changed the definition of Expr semantically. The changed semantics are due to the fact that Self in this context is not the same type as Expr<Int> or Expr<Bool>. The compiler, when desugaring Self in this context, will simply substitute Self with what it sees in Expr<T: Ty> (with any bounds removed).

    You may at most use Self by changing the definition of Expr<T> to:

    #[derive(Debug)]
    enum Expr<T: Ty> {
        Lit(T::Repr),
        Add(Box<Expr<Int>>, Box<Expr<Int>>),
        If(Box<Expr<Bool>>, Box<Self>, Box<Self>),
    }

    Types of infinite size

    Consider the following example:

    enum List<T> {
        Nil,
        Cons(T, List<T>)
    }

    If you try to compile this today, the compiler will greet you with:

    error[E0072]: recursive type `List` has infinite size
     --> src/main.rs:1:1
      |
    1 | enum List<T> {
      | ^^^^^^^^^^^^ recursive type has infinite size
    2 |     Nil,
    3 |     Cons(T, List<T>)
      |             -------- recursive without indirection
      |
      = help: insert indirection (e.g., a `Box`, `Rc`, or `&`) at some point to make `List` representable
    

    If we use the syntax introduced by this RFC as in:

    enum List<T> {
        Nil,
        Cons(T, Self)
    }

    you will still get an error, since it is fundamentally impossible to construct a type of infinite size. The error message will, however, use Self as you wrote it instead of List<T>, as seen in this snippet:

    error[E0072]: recursive type `List` has infinite size
     --> src/main.rs:1:1
      |
    1 | enum List<T> {
      | ^^^^^^^^^^^^ recursive type has infinite size
    2 |     Nil,
    3 |     Cons(T, Self)
      |             ----- recursive without indirection
      |
      = help: insert indirection (e.g., a `Box`, `Rc`, or `&`) at some point to make `List` representable
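
    As in today's Rust, the fix is to introduce indirection; with this RFC, the indirection can be written using Self:

    enum List<T> {
        Nil,
        Cons(T, Box<Self>)
    }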
    

    Teaching the contents of this RFC

    When talking about and teaching recursive types in Rust, now that it is possible to use Self in this context, that ability should be taught alongside those types. An example of where this can be introduced is the “Learning Rust With Entirely Too Many Linked Lists” guide.

    Reference-level explanation

    The identifier Self is (now) allowed in type contexts in fields of structs, unions, and the variants of enums. The identifier Self is also allowed as the left hand side of a bound in a where clause and as a type argument to a trait bound on the right hand side of a where clause.

    Desugaring

    When the compiler encounters Self in a type context inside the places described above, it will substitute it with the header of the type definition, with any bounds on the generic parameters removed.

    An example: the following cons list:

    enum StackList<'a, T: 'a + InterestingTrait> {
        Nil,
        Cons(T, &'a Self)
    }

    desugars into:

    enum StackList<'a, T: 'a + InterestingTrait> {
        Nil,
        Cons(T, &'a StackList<'a, T>)
    }

    Note in particular that the source code is not desugared into:

    enum StackList<'a, T: 'a + InterestingTrait> {
        Nil,
        Cons(T, &'a StackList<'a, T: 'a + InterestingTrait>)
    }

    An example of Self in where bounds is:

    struct Foo<T>
    where
        Self: PartialEq<Self>
    {
        // Some fields..
    }

    which desugars into:

    struct Foo<T>
    where
        Foo<T>: PartialEq<Foo<T>>
    {
        // Some fields..
    }

    In relation to RFC 2102 and what Self refers to

    It should be noted that Self always refers to the top-level type and never to an inner unnamed struct or union, precisely because those are unnamed. More generally, Self always refers to the innermost nameable type; in type definitions in particular, that is always the top-level type.

    Error messages

    When Self is used to construct an infinite type as in:

    enum List<T> {
        Nil,
        Cons(T, Self)
    }

    The compiler will emit error E0072 as in:

    error[E0072]: recursive type `List` has infinite size
     --> src/main.rs:1:1
      |
    1 | enum List<T> {
      | ^^^^^^^^^^^^ recursive type has infinite size
    2 |     Nil,
    3 |     Cons(T, Self)
      |             ----- recursive without indirection
      |
      = help: insert indirection (e.g., a `Box`, `Rc`, or `&`) at some point to make `List` representable
    

    Note in particular that Self is used and not List<T> on line 3.

    In relation to other RFCs

    This RFC expands on RFC 593 and RFC 1647 with respect to where the keyword Self is allowed.

    Drawbacks

    Some may argue that we shouldn’t have many ways to do the same thing and that this introduces new syntax, thereby making the surface language more complex. However, the RFC may equally be said to simplify the surface language, since it removes exceptional cases, especially in the user’s mental model.

    Using Self in a type definition makes it harder to search for all positions in which a pattern can appear in an AST.

    Rationale and alternatives

    The rationale for this particular design is straightforward as it would be uneconomic, confusing, and inconsistent to use other keywords.

    The consistency of what Self refers to

    As stated in the [reference-level explanation]:

    Self always applies to the innermost nameable type.

    We arrive at this conclusion by examining a few different cases and what they have in common.

    Current Rust - Shadowing in impls

    First, let’s take a look at shadowing in impls.

    fn main() { Foo {}.foo(); }
    
    #[derive(Debug)]
    struct Foo;
    
    impl Foo {
        fn foo(&self) {
            // Prints "Foo", which is the innermost type.
            println!("{:?}", Self {});
    
            #[derive(Debug)]
            struct Bar;
    
            impl Bar {
                fn bar(&self) {
                    // Prints "Bar", also the innermost type in this context.
                    println!("{:?}", Self {});
                }
            }
            Bar {}.bar();
        }
    }

    Let’s also consider trait impls instead of inherent impls:

    impl Trait for Foo {
        fn foo(&self) {
            impl Trait for Bar {
                // Self is shadowed here...
            }
        }
    }

    We see that the conclusion holds for both examples.

    In relation to RFC 2102

    Let’s consider a modified example from RFC 2102:

    #[repr(C)]
    struct S {
        a: u32,
        _: union {
            b: Box<Self>,
            c: f32,
        },
        d: u64,
    }

    In this example, the inner union is not nameable, and so Self refers to S, the only nameable type introduced. Therefore, the conclusion holds.

    Type definitions inside impls

    If in the future we decide to permit type definitions inside impls as in:

    impl Trait for Foo {
        struct Bar {
            head: u8,
            tail: Option<Box<Self>>,
        }
    }

    as sugar for:

    struct _Bar {
        head: u8,
        tail: Option<Box<Self>>,
    }
    impl Trait for Foo {
        type Bar = _Bar;
    }

    In the desugared example, we see that the only possible meaning of Self is that it refers to _Bar and not Foo. To be consistent with the desugared form, the sugared variant should have the same meaning and so Self refers to Bar there.

    Let’s now consider an alternative possible syntax:

    impl Trait for Foo {
        type Bar = struct /* there is no ident here */ {
            outer: Option<Box<Self>>,
            inner: Option<Box<Self::Item>>,
        }
    }

    Notice here in particular that there is no identifier after the keyword struct. Because of this, it is reasonable to say that the struct assigned to the associated type Bar is not directly nameable as Bar. Instead, a user must qualify Bar with Self::Bar. With this in mind, we arrive at the following interpretation:

    impl Trait for Foo {
        type Bar = struct /* there is no ident here */ {
            outer: Option<Box<Foo>>,
            inner: Option<Box<Foo::Bar>>,
        }
    }

    Conclusion

    We’ve now examined a few cases and seen that the meaning of Self is indeed consistent across all of them, as well as with its meaning in today’s Rust.

    Doing nothing

    One alternative to the changes proposed in this RFC is simply to not implement them. However, doing nothing would neither improve ergonomics nor make the language more consistent than it is today. Not improving the ergonomics here may be especially problematic when dealing with “recursive” types that have long names and/or many generic parameters; it may encourage developers to pick less descriptive type names and to keep their code less generic than is appropriate.

    Internal scoped type aliases

    Another alternative is to allow users to specify type aliases inside type definitions and use any generic parameters specified in that definition. An example is:

    enum Tree<T> {
        type S = Box<Tree<T>>;
    
        Nil,
        Node(T, S, S),
    }

    instead of:

    enum Tree<T> {
        Nil,
        Node(T, Box<Self>, Box<Self>),
    }

    When dealing with generic associated types (GATs), we can then write:

    enum Tree<T, P: PointerFamily> {
        type S = P::Pointer<Tree<T>>;
    
        Nil,
        Node(T, S, S),
    }

    instead of:

    enum Tree<T, P: PointerFamily> {
        Nil,
        Node(T, P::Pointer<Tree<T>>, P::Pointer<Tree<T>>),
    }

    As we can see, this alternative is more flexible than what is proposed in this RFC, particularly in the case of GATs. However, it requires introducing and teaching more concepts, whereas this RFC builds more on what users already know. Mixing ; and , has also proven controversial in the past. The alternative also opens up questions, such as whether the type alias should be permitted before the variants or after them.

    For simpler cases such as the first tree example, using Self is also more readable, since it is a special construct that can easily be syntax-highlighted in a more noticeable way. Further, while some users already expect Self to work here, as discussed in the motivation, no one has, as far as this RFC’s author is aware, voiced the expectation that this alternative already works.

    It is also unclear how internal scoped type aliases would syntactically work with where bounds.

    Strictly speaking, this particular alternative is not in conflict with this RFC in that both can be supported technically. The alternative should be considered interesting future work, but for now, a more conservative approach is preferred.

    Unresolved questions

    • This syntax creates an ambiguity if we ever permit types to be declared directly within impls (for example, as the value of an associated type). Do we ever want to support that, and if so, how should we resolve the ambiguity? A possible interpretation, and a way to resolve the ambiguity consistently, is discussed in the rationale.

    Summary

    Tuple structs can now be constructed and pattern matched with Self(v1, v2, ..). A simple example:

    struct TheAnswer(usize);
    
    impl Default for TheAnswer {
        fn default() -> Self { Self(42) }
    }

    Similarly, unit structs can also be constructed and pattern matched with Self.

    Motivation

    This RFC proposes a consistency fix allowing Self to be used in more places to better match the users’ intuition of the language and to get closer to feature parity between tuple structs and structs with named fields.

    Currently, only structs with named fields can be constructed inside impls using Self like so:

    struct Mascot { name: String, age: usize }
    
    impl Default for Mascot {
        fn default() -> Self {
            Self {
                name: "Ferris the Crab".into(),
                age: 3
            }
        }
    }

    while the following is not allowed:

    struct Mascot(String, usize);
    
    impl Default for Mascot {
        fn default() -> Self {
            Self("Ferris the Crab".into(), 3)
        }
    }

    This discrepancy is unfortunate, as many users reach for Self(v0, v1, ..) from time to time, only to find that it doesn’t work. This breaks the user’s intuition and becomes a papercut. It also means that each user must remember this exception, enlarging the rule-set they have to keep in mind and thereby making the language more complex.

    There are good reasons why Self { f0: v0, f1: v1, .. } is allowed. Chief among these is that it becomes easier to refactor the code when one wants to rename a type. Another important reason is that only having to keep Self in mind means that a developer does not need to keep the type name fresh in working memory. This is beneficial for users with shorter working memory, such as the author of this RFC.

    Since Self { f0: v0, .. } is well motivated, those benefits and motivations will also extend to tuple and unit structs. Eliminating this discrepancy between tuple structs and those with named fields will therefore have all the benefits associated with this feature for structs with named fields.

    Guide-level explanation

    Basic concept

    For structs with named fields such as:

    struct Person {
        name: String,
        ssn: usize,
        age: usize
    }

    You may use the syntax Self { field0: value0, .. } as seen below instead of writing TypeName { field0: value0, .. }:

    impl Person {
        /// Make a newborn person.
        fn newborn(name: String, ssn: usize) -> Self {
            Self { name, ssn, age: 0 }
        }
    }

    In current Rust, this ability does not extend to tuple structs, but with this RFC it will. To continue with the previous example, you can now also write:

    struct Person(String, usize, usize);
    
    impl Person {
        /// Make a newborn person.
        fn newborn(name: String, ssn: usize) -> Self {
            Self(name, ssn, 0)
        }
    }

    Through type aliases

    As with structs with named fields, you may also use Self when implementing on a type alias of a struct, as seen here:

    struct FooBar(u8);
    
    type BarFoo = FooBar;
    
    impl Default for BarFoo {
        fn default() -> Self {
            Self(42) // <-- Not allowed before.
        }
    }

    Patterns

    Currently, you can pattern match using Self { .. } on a named struct as in the following example:

    struct Person {
        ssn: usize,
        age: usize
    }
    
    impl Person {
        /// Make a newborn person.
        fn newborn(ssn: usize) -> Self {
            match { Self { ssn, age: 0 } } {
                Self { ssn, age } // `Self { .. }` is permitted as a pattern!
                    => Self { ssn, age }
            }
        }
    }

    This RFC extends this to tuple structs:

    struct Person(usize, usize);
    
    impl Person {
        /// Make a newborn person.
        fn newborn(ssn: usize) -> Self {
            match { Self(ssn, 0) } {
                Self(ssn, age) // `Self(..)` is permitted as a pattern!
                    => Self(ssn, age)
            }
        }
    }

    Of course, this redundant reconstruction is not recommended in actual code, but it illustrates what you can do.

    Self as a function pointer

    When you define a tuple struct today such as:

    struct Foo<T>(T);
    
    impl<T> Foo<T> {
        fn fooify_iter(iter: impl Iterator<Item = T>) -> impl Iterator<Item = Foo<T>> {
            iter.map(Foo)
        }
    }

    you can use Foo as a function pointer typed at for<T> fn(T) -> Foo<T>, as seen in the example above.

    This RFC extends that such that Self can also be used as a function pointer for tuple structs. Modifying the example above gives us:

    impl<T> Foo<T> {
        fn fooify_iter(iter: impl Iterator<Item = T>) -> impl Iterator<Item = Foo<T>> {
            iter.map(Self)
        }
    }

    Unit structs

    With this RFC, you can also use Self in pattern and expression contexts when dealing with unit structs. For example:

    struct TheAnswer;
    
    impl Default for TheAnswer {
        fn default() -> Self {
            match { Self } { Self => Self }
        }
    }

    Teaching the contents

    Teaching the contents of this RFC should require little effort beyond spreading the news that this is now possible and updating the reference. The changes are intuitive enough that they support what users already assume should work and will probably try at some point.

    Reference-level explanation

    When entering one of the following contexts, a Rust compiler will extend the value namespace with Self, which maps to the tuple constructor fn in the case of a tuple struct, or to a constant in the case of a unit struct:

    • inherent impls where the Self type is a tuple or unit struct
    • trait impls where the Self type is a tuple or unit struct

    As a result, when referring to a tuple struct, Self can be legally coerced into an fn pointer that accepts and returns values of the same types as the tuple constructor it refers to.
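
    For instance (a sketch; Wrapper is an illustrative type, not part of this RFC’s text):

    struct Wrapper(u32);

    impl Wrapper {
        fn wrap_all(values: Vec<u32>) -> Vec<Wrapper> {
            // `Self` resolves to the tuple constructor and coerces
            // to the fn pointer type `fn(u32) -> Wrapper`:
            let ctor: fn(u32) -> Wrapper = Self;
            values.into_iter().map(ctor).collect()
        }
    }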

    Another consequence is that Self(p_0, .., p_n) and Self become legal patterns. This works since TupleCtor(p_0, .., p_n) patterns are handled by resolving them in the value namespace and checking that they resolve to a tuple constructor. Since by definition, Self referring to a tuple struct resolves to a tuple constructor, this is OK.

    Implementation notes

    As an additional check on the sanity of a Rust compiler implementation, a well formed expression Self(v0, v1, ..) must be semantically equivalent to Self { 0: v0, 1: v1, .. } and must be permitted wherever the latter is. Likewise, the pattern Self(p0, p1, ..) must match exactly the same set of values as Self { 0: p0, 1: p1, .. } and must be permitted wherever Self { 0: p0, 1: p1, .. } is well formed.

    Furthermore, a well formed expression or pattern Self must be semantically equivalent to Self {} and permitted wherever Self {} is well formed in the same context.
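
    To illustrate both equivalences with a hypothetical example type:

    struct Pair(u8, u8);

    impl Pair {
        fn swapped(self) -> Self {
            match self {
                // `Self(a, b)` matches exactly what `Self { 0: a, 1: b }` matches,
                // and `Self { 0: b, 1: a }` builds the same value as `Self(b, a)`.
                Self(a, b) => Self { 0: b, 1: a },
            }
        }
    }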

    More formally, for tuple structs, we have the typing rule:

    Δ ⊢ τ_0  type .. Δ ⊢ τ_n  type
    Δ ⊢ Self type
    Γ ⊢ x_0 : τ_0 .. Γ ⊢ x_n : τ_n
    Γ ⊢ Self { 0: x_0, .., n: x_n } : Self
    -----------------------------------------
    Γ ⊢ Self (    x_0, ..,   x_n ) : Self
    

    and the operational semantics:

    Γ ⊢ Self { 0: e_0, .., n: e_n } ⇓ v
    -------------------------------------
    Γ ⊢ Self {    e_0, ..,    e_n } ⇓ v
    

    for unit structs, the following holds:

    Δ ⊢ Self type
    Γ ⊢ Self {} : Self
    -----------------------------------------
    Γ ⊢ Self    : Self
    

    with the operational semantics:

    Γ ⊢ Self {} ⇓ v
    -------------------------------------
    Γ ⊢ Self    ⇓ v
    

    In relation to other RFCs

    This RFC expands on RFC 593 and RFC 1647 with respect to where the keyword Self is allowed.

    Drawbacks

    There are potentially some, but the author could not think of any.

    Rationale and alternatives

    This is the only design that makes sense, in the sense that there really aren’t any others. Potentially, Self(v0, ..) could be made to work only when the impled type is not behind a type alias. However, since structs with named fields support type aliases in this respect, so should tuple structs.

    Not providing this feature would preserve papercuts and unintuitive surprises for developers.

    Unresolved questions

    There are none.

    Summary

    Adds an identity function pub const fn identity<T>(x: T) -> T { x } as core::convert::identity. The function is also re-exported to std::convert::identity.

    Motivation

    The identity function is useful

    While it might seem strange to have a function that just returns back the input, there are some cases where the function is useful.

    Using identity to do nothing among a collection of mappers

    When you have collections of mapping functions, such as the map below, and you wish to dispatch to one of them, you sometimes need a way of not transforming the input. The identity function achieves this.

    use std::collections::HashMap;
    use std::convert::identity;

    // Let's assume that this and other functions do something non-trivial.
    fn do_interesting_stuff(x: u32) -> u32 { .. }
    
    // A dispatch-map of mapping functions:
    let mut map = HashMap::new();
    map.insert("foo", do_interesting_stuff);
    map.insert("bar", other_stuff);
    map.insert("baz", identity);

    Using identity as a no-op function in a conditional

    This reasoning also applies to simpler yes/no dispatch as below:

    let mapper = if condition { some_manipulation } else { identity };
    
    // do more interesting stuff inbetween..
    
    do_stuff(mapper(42));

    Using identity to concatenate an iterator of iterators

    We can use the identity function to concatenate an iterator of iterators into a single iterator.

    let vec_vec = vec![vec![1, 3, 4], vec![5, 6]];
    let iter_iter = vec_vec.into_iter().map(Vec::into_iter);
    let concatenated = iter_iter.flat_map(identity).collect::<Vec<_>>();
    assert_eq!(vec![1, 3, 4, 5, 6], concatenated);

    While the standard library has recently added Iterator::flatten, which you should use instead to achieve the same semantics, similar situations are likely in the wild, and the identity function can be used in those cases.

    Using identity to keep the Some variants of an iterator of Option<T>

    We can keep all the Some variants simply with iter.filter_map(identity).

    let iter = vec![Some(1), None, Some(3)].into_iter();
    let filtered = iter.filter_map(identity).collect::<Vec<_>>();
    assert_eq!(vec![1, 3], filtered);

    To be clear that you intended to use an identity conversion

    If you instead use a closure as in |x| x when you need an identity conversion, it is less clear that this was intentional. With identity, this intent becomes clearer.

    The drop function as a precedent

    The drop function in core::mem is defined as pub fn drop<T>(_x: T) { }. The same effect can be achieved by writing { _x; }. This presents us with a precedent that such trivial functions are considered useful and includable inside the standard library even though they can be written easily inside a user’s crate.

    Avoiding repetition in user crates

    Here are a few examples of the identity function being defined and used:

    • https://docs.rs/functils/0.0.2/functils/fn.identity.html
    • https://docs.rs/tool/0.2.0/tool/fn.id.html
    • https://github.com/hephex/api/blob/ef67b209cd88d0af40af10b4a9f3e0e61a5924da/src/lib.rs

    There’s a smattering of further examples. To reduce this duplication, the standard library should provide the function in one common, well-known place.

    Precedent from other languages

    There are other languages that include an identity function in their standard libraries, among these are:

    • Haskell, which also exports this to the prelude.
    • Scala, which also exports this to the prelude.
    • Java, which is a widely used language.
    • Idris, which also exports this to the prelude.
    • Ruby, which exports it to what amounts to the top type.
    • Racket
    • Julia
    • R
    • F#
    • Clojure
    • Agda
    • Elm

    Guide-level explanation

    An identity function is a mapping of a type onto itself such that the output is the same as the input; in other words, a function identity : T -> T, for some type T, defined as identity(x) = x. This RFC adds such a function for all Sized types in Rust to libcore, in the module core::convert, and defines it as:

    pub const fn identity<T>(x: T) -> T { x }

    This function is also re-exported to std::convert::identity.

    It is important to note that the input x passed to the function is moved since Rust uses move semantics by default.
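
    For example (a small sketch, not part of the RFC text):

    use std::convert::identity;

    fn main() {
        let s = String::from("hello");
        let t = identity(s); // `s` is moved into `identity` and handed back as `t`.
        // println!("{}", s); // error: borrow of moved value: `s`
        println!("{}", t);
    }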

    Reference-level explanation

    An identity function defined as pub const fn identity<T>(x: T) -> T { x } exists as core::convert::identity. The function is also re-exported as std::convert::identity.

    Note that the identity function is not always equivalent to a closure such as |x| x, since the closure may coerce x into a different type, while the identity function never changes the type.
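
    A small sketch of that difference (the coercion target here is illustrative):

    use std::convert::identity;
    use std::fmt::Debug;

    fn main() {
        // The closure body is a coercion site, so `|x| x` may unsize
        // its `&u32` argument into a `&dyn Debug` trait object:
        let as_debug: fn(&'static u32) -> &'static dyn Debug = |x| x;
        println!("{:?}", as_debug(&42));

        // `identity` pins input and output to the very same type `T`,
        // so the following would not type-check:
        // let as_debug: fn(&'static u32) -> &'static dyn Debug = identity;
        let same: &u32 = identity(&42);
        println!("{:?}", same);
    }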

    Drawbacks

    It is already possible to do this in user code by:

    • using an identity closure: |x| x.
    • writing the identity function as defined in the RFC yourself.

    These are contrasted with the motivation for including the function in the standard library.

    Rationale and alternatives

    The rationale for including this in convert and not mem is that the former generally deals with conversions, and “identity conversion” is an established phrase. Meanwhile, mem does not relate to identity other than that both deal with move semantics. Therefore, convert is the better choice. Including it in mem is still an alternative, but as explained, it isn’t as fitting.

    Naming the function id instead of identity is a possibility. This name is, however, ambiguous with “identifier” and less clear, wherefore identity was opted for.

    Unresolved questions

    There are no unresolved questions.

    Possible future work

    A previous iteration of this RFC proposed that the identity function should be added to the prelude of both libcore and libstd. However, the library team was, for the time being, not sold on this inclusion. As we gain experience with using this function, it is possible to revisit this in the future if the team changes its mind.

    The section below details, for posterity, the argument for inclusion that was previously in the motivation.

    The case for inclusion in the prelude

    Let’s compare the effort required, assuming that each letter typed has a uniform cost with respect to effort.

    use std::convert::identity; iter.filter_map(identity)
    
    fn identity<T>(x: T) -> T { x } iter.filter_map(identity)
    
    iter.filter_map(::std::convert::identity)
    
    iter.filter_map(identity)

    Comparing the lengths of these lines, we see that there’s not much difference between defining the function yourself and importing it or using an absolute path. But the prelude-using variant is considerably shorter. To encourage use of the function, exporting it to the prelude is therefore a good idea.

    In addition, there’s an argument to be made from similarity to the other items in core::convert, as well as to drop, all of which are in the prelude. This is especially relevant in the case of drop, which is also a trivial function.

    Summary

    Add std::num::NonZeroU32 and eleven other concrete types (one for each primitive integer type) to replace and deprecate core::nonzero::NonZero<T>. (Non-zero/non-null raw pointers are available through std::ptr::NonNull<U>.)

    Background

    The &T and &mut T types are represented in memory as pointers, and the type system ensures that they’re always valid. In particular, they can never be NULL. Since at least 2013, rustc has taken advantage of that fact to optimize the memory representation of Option<&T> and Option<&mut T> to be the same as &T and &mut T, with the forbidden NULL value indicating Option::None.
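
    That optimization is observable with mem::size_of:

    use std::mem::size_of;

    fn main() {
        // The forbidden NULL value of `&T` is used to represent `Option::None`:
        assert_eq!(size_of::<Option<&u32>>(), size_of::<&u32>());
        assert_eq!(size_of::<Option<&mut u32>>(), size_of::<&mut u32>());
    }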

    Later (still before Rust 1.0), a core::nonzero::NonZero<T> generic wrapper type was added to extend this optimization to raw pointers (as used in types like Box<T> or Vec<T>) and integers, encoding in the type system that they cannot be null/zero. Its API today is:

    #[lang = "non_zero"]
    #[unstable]
    pub struct NonZero<T: Zeroable>(T);
    
    #[unstable]
    impl<T: Zeroable> NonZero<T> {
        pub const unsafe fn new_unchecked(x: T) -> Self { NonZero(x) }
        pub fn new(x: T) -> Option<Self> { if x.is_zero() { None } else { Some(NonZero(x)) }}
        pub fn get(self) -> T { self.0 }
    }
    
    #[unstable]
    pub unsafe trait Zeroable {
        fn is_zero(&self) -> bool;
    }
    
    impl Zeroable for /* {{i,u}{8, 16, 32, 64, 128, size}, *{const,mut} T where T: ?Sized} */

    The tracking issue for these unstable APIs is rust#27730.

    std::ptr::NonNull was stabilized in Rust 1.25, wrapping NonZero further for raw pointers and adding pointer-specific APIs.

    Motivation

    With NonNull covering pointers, the remaining use cases for NonZero are integers.

    One problem of the current API is that it is unclear what happens or what should happen to NonZero<T> or Option<NonZero<T>> when T is some type other than a raw pointer or a primitive integer. In particular, crates outside of std can implement Zeroable for their arbitrary types since it is a public trait.

    To avoid this question entirely, this RFC proposes replacing the generic type and trait with twelve concrete types in std::num, one for each primitive integer type. This is similar to the existing atomic integer types like std::sync::atomic::AtomicU32.

    Guide-level explanation

    When an integer value can never be zero because of the way an algorithm works, this fact can be encoded in the type system by using, for example, the NonZeroU32 type instead of u32.

    This enables code receiving such a value to safely make some assumptions, for example that dividing by this value will not cause an “attempt to divide by zero” panic. This may also enable the compiler to make some memory optimizations, for example Option<NonZeroU32> might take no more space than u32 (with None represented as zero).
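
    As a sketch of code the proposed API enables (the function here is illustrative):

    use std::num::NonZeroU32;

    fn per_batch(total: u32, batches: NonZeroU32) -> u32 {
        // No zero-check needed: the type rules out division by zero.
        total / batches.get()
    }

    fn main() {
        let batches = NonZeroU32::new(4).expect("batch count must be non-zero");
        assert_eq!(per_batch(100, batches), 25);
    }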

    Reference-level explanation

    A new private macro_rules! macro is defined and used in core::num that expands to twelve sets of items like below, one for each of:

    • u8
    • u16
    • u32
    • u64
    • u128
    • usize
    • i8
    • i16
    • i32
    • i64
    • i128
    • isize

    These types are also re-exported in std::num.

    #[derive(Copy, Clone, Eq, PartialEq, Ord, PartialOrd, Hash)]
    pub struct NonZeroU32(NonZero<u32>);
    
    impl NonZeroU32 {
        pub const unsafe fn new_unchecked(n: u32) -> Self { Self(NonZero(n)) }
        pub fn new(n: u32) -> Option<Self> { if n == 0 { None } else { Some(Self(NonZero(n))) }}
        pub fn get(self) -> u32 { self.0.0 }
    }
    
    impl fmt::Debug for NonZeroU32 {
        fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
            fmt::Debug::fmt(&self.get(), f)
        }
    }
    
    // Similar impls for Display, Binary, Octal, LowerHex, and UpperHex

    Additionally, the core::nonzero module and its contents (NonZero and Zeroable) are deprecated with a warning message that suggests using ptr::NonNull or num::NonZero* instead.

    A couple of release cycles later, the module is made private to libcore and reduced to:

    /// Implementation detail of `ptr::NonNull` and `num::NonZero*`
    #[lang = "non_zero"]
    #[derive(Copy, Clone, Eq, PartialEq, Ord, PartialOrd, Hash)]
    pub(crate) struct NonZero<T>(pub(crate) T);
    
    impl<T: CoerceUnsized<U>, U> CoerceUnsized<NonZero<U>> for NonZero<T> {}

    The memory layout of Option<&T> is a documented guarantee of the Rust language. This RFC does not propose extending this guarantee to these new types. For example, size_of::<Option<NonZeroU32>>() == size_of::<NonZeroU32>() may or may not be true. It happens to be in current rustc, but an alternative Rust implementation could define num::NonZero* purely as library types.
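
    For illustration, the observation in question (it happens to hold in current rustc, but this RFC does not make it a guarantee):

    use std::mem::size_of;
    use std::num::NonZeroU32;

    fn main() {
        // Not a language guarantee under this RFC:
        assert_eq!(size_of::<Option<NonZeroU32>>(), size_of::<u32>());
    }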

    Drawbacks

    This adds to the ever-expanding API surface of the standard library.

    Rationale and alternatives

    • Memory layout optimizations for non-zero integers mostly exist in rustc today because their implementation is very close to (or the same as) that for non-null pointers. But maybe they’re not useful enough to justify any dedicated public API. core::nonzero could be deprecated and made private without adding num::NonZero*, with only ptr::NonNull exposing such functionality.

    • On the other hand, maybe zero is “less special” for integers than NULL is for pointers. Maybe instead of num::NonZero* we should consider some other feature to enable creating integer wrapper types that restrict values to an arbitrary sub-range (making this known to the compiler for memory layout optimizations), similar to how PR #45225 restricts the primitive type char to 0 ..= 0x10FFFF. Making entire bits available unlocks more potential future optimizations than a single value.

      However, no design for such a feature has been proposed, whereas NonZero is already implemented. The author’s position is that num::NonZero* should be added, as it is still useful, can be stabilized much sooner, and does not prevent adding another language feature later.

    • In response to “what if Zeroable is implemented for other types”, it was suggested to prevent such impls by making the trait permanently unstable, or effectively private (by moving it into a private module while keeping it pub trait, to fool the private-in-public lint). The author feels that such abuses of the stability or privacy systems do not belong in stable APIs. (Stable APIs that mention traits like RangeArgument that are not stable yet but have a path to stabilization are less of an abuse.)

    • Still, we could decide on some answer to “Zeroable for arbitrary types”, implement and test it, stabilize NonZero<T> and Zeroable as-is (re-exported in std), and not add num::NonZero*.

    • Instead of std::num the new types could be in some other location, such as the modules named after their respective primitive types. For example std::u32::NonZeroU32 or std::u32::NonZero. The former looks redundant, and the latter might lead to code that looks ambiguous if the type itself is imported instead of importing the module and using a qualified u32::NonZero path.

    • We could drop the NonZeroI* wrappers for signed integers. They’re included in this RFC because it’s easy, but every use of non-zero integers the author has seen so far has been with unsigned ones. This would cut the number of new types from 12 to 6.

    Unresolved questions

    Should the memory layout of e.g. Option<NonZeroU32> be a language guarantee?

    Discussion of the design of a new language feature for integer types restricted to an arbitrary sub-range (see the second alternative above) is out of scope for this RFC. Discussing the potential existence of such a feature as a reason not to add non-zero integer types is in scope.

    Summary

    This RFC sets the Rust 2018 Roadmap, in accordance with RFC 1728. This year’s goals are:

    • Ship an edition release: Rust 2018.
    • Build resources for intermediate Rustaceans.
    • Connect and empower Rust’s global community.
    • Grow Rust’s teams and new leaders within them.

    In pursuing these goals, we will focus particularly on four target domains for Rust:

    • Network services.
    • WebAssembly.
    • CLI apps.
    • Embedded devices.

    Motivation

    This proposal is a synthesis drawing from several sources.

    The motivation and detailed rationale of each piece of the roadmap proposal is explained in-line throughout the RFC; the closing section covers the high-level rationale.

    Guide-level explanation

    I believe that Rust has the potential to be an exceptionally empowering technology for people writing programs. I am trying to focus on providing the ‘last mile’ of user experience to take the core technological achievements of Rust and make them generally ergonomic and usable by working programmers. (@withoutboats)

    This year will be a focused one for the Rust community, with two overall technical goals, and two social ones. Here we’ll give a brief overview of each goal and some overarching themes, and in the Reference section below we’ll provide full detail.

    • Ship Rust 2018. We will ship a major marketing (edition) release in the final third of the year, with the unifying message of productivity. We will continue to focus on compiler performance, both from-scratch and incremental rebuilds. We will polish and stabilize a number of already-implemented language features like impl Trait, macros 2.0, SIMD, generators, non-lexical lifetimes and the modules revamp—and very few new ones. We will also drive critical tools (like the RLS and rustfmt), libraries, and documentation to 1.0 status. We will overhaul the http://rust-lang.org/ site to help market the release and to support programmer productivity.

    • Build resources for intermediate Rustaceans. We will write documentation and build examples that help programmers go from basic knowledge of Rust’s mechanics to knowing how to wield it effectively.

    • Connect and empower Rust’s global community. We will pursue internationalization as a first-class concern, and proactively work to build ties between Rust subcommunities currently separated by location or region. We will spin up and support Rust events worldwide, including further growth of the RustBridge program.

    • Grow Rust’s teams and new leaders within them. We will refactor the Rust team structure to support more scale, agility, and leadership growth. We will systematically invest in mentoring, both by creating more on-ramp resources and through direct mentorship relationships.

    To make our product successful, we should build and market it with an eye toward specific user stories, ensuring that we have a coherent and compelling end-to-end experience. Thus, investment in ecosystem, marketing, and feature prioritization will emphasize the following four domains in 2018:

    • Network services. The predominant domain for current production usage.
    • WebAssembly. An emerging market where Rust is strongly positioned for success.
    • CLI apps. A place where Rust’s portability, reliability, and ergonomics come together to great effect.
    • Embedded devices. A domain with a great deal of potential that is not yet first-class.

    Looking at the year as a whole, with our second marketing release of Rust, @nrc perhaps put it best:

    At the end of the year I want Rust to feel like a really solid, reliable choice for people choosing a programming language. (@nrc)

    Reference-level explanation

    Goals

    Ship Rust edition 2018

    Aiming for a major product release gives us an opportunity, as a community, to come together and do something big that goes well beyond the usual six week cycle.

    Releasing “Rust 2018” gives us a chance to say to the world that “Rust has taken a major step since 1.0; it’s time to take another look”. (@aturon)

    The Rust edition 2018 release encompasses every aspect of the work we do, so we’ll look at each area in turn. This RFC is not intended as a promise about what will ship, but rather a strong (and realistic) intention. The core team will ultimately oversee the precise timing and feature set of the release.

    It’s important to keep in mind two additional factors:

    • “Shipping” features in this context means they must be stable. We may land additional unstable features this year (like const generics), but these are separate from the Rust 2018 product. We will stabilize features individually as they become ready, not in a rush before the edition release.

    • The intent is to ship Rust 2018 in the latter part of the year. The tentative target date is the 1.29 release, which goes into beta on 2018-08-02 and ships on 2018-09-13. That gives us approximately six months to put the product together.

    These two factors together suggest that Rust 2018 will ship largely with language features that are already in nightly in some form today. Other, faster-moving areas of the product will be developing new material throughout the year.

    As always, we will continue to push out new Rust releases on a six week cadence, so a given feature missing the edition release is by no means fatal. On the other hand, we need to carefully coordinate the work so that the features we do ship sit together coherently across the compiler, tools, documentation, libraries and marketing materials.

    Language

    Rust 2018: Consolidation (@killercup)

    The most prominent language work in the pipeline stems from 2017’s ergonomics initiative. Almost all of the accepted RFCs from the initiative are available on nightly, but polish, testing, and consensus work will take time:

    I’d like to reach a final decision to ship or drop all of the ergonomics RFCs that were accepted. I hope to see this completed over the next several months . . . I hope we can have a clearer (and more spaced out) schedule for this so that their FCPs are staggered. (@withoutboats)

    Among these productivity features are a few “headliners” that will form the backbone of the release:

    • Non-lexical lifetimes. Currently in “alpha” state on nightly, with work ongoing.
    • impl Trait. Nearing readiness for stabilization FCP.
    • Generators. “Beta” state on nightly, but some design issues need resolution.
    • Module system changes. Largely usable on nightly; will need testing, feedback, and bikeshedding.

    In addition, there are some other headlining features which are nearing stabilization and should ship prior to the edition:

    • SIMD. The core SIMD intrinsics are nearing readiness for stabilization, and with luck we may be able to stabilize some vendor-agnostic primitives as well.
    • Custom allocators. The machinery has been in place for some time. Let’s settle the details and ship.
    • Macros 2.0. This feature is implemented and working, but stabilization will require us to reach a comfort level with the handling of hygiene and a few other core issues.

    Between generators and macros 2.0, we will have some support for async/await on stable Rust (possibly using macros, possibly some other way).

    Finally, there are several highly-awaited features that are unlikely to ship in the Rust 2018 edition release (though they may ship later in the year):

    • Generic associated types. This feature will almost certainly land in nightly in 2018, and may even stabilize during the year. However, enough implementation work remains that it’s unlikely to be stable prior to the edition release.
    • Specialization. Stabilization is blocked on a number of extremely subtle issues, including a revamp of the trait system and finding a route to soundness. We cannot afford to spend time on these issues until after the edition release ships.
    • const generics. This feature is likely to land in nightly in 2018, but will not be ready to stabilize this year given the substantial work that remains.

    Compiler

    Give me non-embarrassing compilation speed! (@matthewkmayer)

    Compiler work will center on:

    • A steady focus on compiler performance leading up to the edition release. We will pursue two strategies in parallel: continuing to push incremental recompilation into earlier stages of the compiler, but also looking for general improvements that help even with from-scratch compilation. For the latter, avenues include compiler parallelization and MIR-only rlibs, amongst others. We will formulate a comprehensive set of compilation scenarios and corresponding benchmarks and set targets for the edition release (see the tracking issue for some details). Finally, we will spin up a dedicated Compiler Performance Working Group to focus on this area.
    • Completing and polishing the language features mentioned above.
    • Another push on improving error messages.
    • Edition tooling: adding an edition flag and building rustfix, likely by leveraging lints.

    Libraries

    It is often stated that rust’s ecosystem is immature. While this is somewhat true, the real issue is in finding and using the pieces you need. (@vitiral)

    Obviously, we cannot force people to choose one project over another, but it would be great if we could somehow focus our collective resources on fewer standard high-quality crates. (@llogiq)

    We need more 1.0 production-ready crates to get people productive. (@killercup)

    The core team should participate in prioritizing and implementing quality crates for productivity needs. (@nimtiazm)

    In preparation for the edition release, we will continue to invest in Rust’s library ecosystem in three ways:

    • Quality. Building on our 2017 work, we will bring the API Guidelines to a 1.0 status and build out additional resources to aid library authors.
    • Discoverability. We will continue to work with the crates.io team on discoverability improvements, as well as push the Cookbook (or something like it) to 1.0 status as a means of discovering libraries.
    • Domain-specific content. We will work with library authors in the four domains of focus this year to sharpen our offerings in each domain (elaborated more below).

    Documentation

    Documentation plays a critical role in the edition release, as it’s often an entry point for people who are taking a look at Rust thanks to our marketing push. With regards to the edition specifically, this mostly means updating the online version of “The Rust Programming Language” to include all of the new things that are being stabilized in the first part of the year.

    We’ll also be doing a lot of work on Rust By Example. This resource is both critical for our users and also slightly neglected; we’re starting to put together a small team to give it some love, and so we hope to improve it significantly from there.

    There are two additional areas of vital documentation work for 2018, which are not necessarily tied to the edition release:

    • Resources for intermediate Rustaceans. This topic is covered in detail below. It’s possible that some of these resources will be ready to go by the edition release.
    • Overhauled rustdoc. There’s ongoing work on an RLS-based edition of rustdoc with internationalization support, and the ability to seamlessly integrate “guide” and “reference”-style documentation. As a stretch goal for the edition, we could aim to have this shipped and major libraries using it to provide a better documentation experience.

    Tools

    As part of the Rust 2018 edition release, we will:

    • Ship 1.0 editions of the RLS and rustfmt, distributed via rustup.
    • Distribute Clippy via rustup.
    • Stabilize custom registries for Cargo.
    • Implement and stabilize public dependencies in Cargo.
    • Revise Cargo profiles.

    Beyond these clear-cut items, there are a number of ongoing efforts, some of which may ship as part of the edition:

    • Xargo/Cargo integration. Alternatively, this can be viewed as allowing std to be treated as an explicit dependency in Cargo, which has long been a requested feature and which is very helpful for cross-compilation (and hence for embedded device work).
    • Build system integration improvements. Seek to incrementally deliver on the work laid out in 2017. It’s unclear what pieces might be ready for stabilization prior to the edition release.

    And a couple of goals that are probably a stretch for 2018 at all, let alone for the edition release:

    • Custom test frameworks. There’s been a lot of interest in this area, and it may be possible that with a dedicated working group we can implement and stabilize test frameworks in 2018.
    • Compiler-driven code completion for the RLS. Today the RLS still uses a purely heuristic approach for auto-completion. If the compiler’s new “query-based” architecture can be pushed far enough during the year, it may become feasible to start using it to deliver precise auto-complete information.

    Web site

    Many, many of the #Rust2018 posts talked about improving our web presence and the marketing therein:

    Having a consistent, approachable, discoverable, and well designed web presence makes it easier for visitors to find what they’re looking for and adds signals of credibility, attention to detail, and production readiness to the project. (@wezm)

    Goal 2: Explain on rust-lang.org who the Rust programming language is for (@jvns)

    We think it’s time to trumpet from the mountaintops what the Rust community has known for a while: Rust is production ready. (Integer 32)

    We should have a polished web site that works for both engineers and CTOs, offering white papers and directing companies to sources of training, consulting, and support. (@aturon)

    Promote Rust as a language that makes large codebases maintainable. (@killercup)

    I suggest in 2018, we kick the idea of wrestling with the Rust compiler to the curb and focus on how it helps us rather than the idea of it beating us down. (@jonathandturner)

    As part of the 2018 edition release, we will completely overhaul the main Rust web site with:

    • A new, striking visual design, which will eventually be used across all of our web sites (including crates.io).
    • Vastly improved marketing materials, including dedicated pages for all four of this year’s “user stories”.
    • Much more extensive resources useful for being productive with Rust, e.g. dedicated pages for Rust’s tooling story that make it easy to discover the state of the art and choose the best tools for you. Also links to various media resources (videos etc.)

    Build resources for intermediate Rustaceans

    We, as a community, should work on creating the next level of learning resources to help folks deploy Rust to production with confidence. (@integer32)

    This includes discussions on how to structure big projects in Rust and Rust-specific design patterns. I want to read more about professional Rust usage and see case-studies from various industries. (@mre)

    Once you have a grasp of what knobs do what