rust-analyzer and libraryification

The goal of the meeting was to give an update on rust-analyzer plans and to discuss taking the next step toward extracting “standalone libraries” that can be shared between rustc and rust-analyzer.

Update on rust-analyzer

Rust-analyzer has made big strides and now includes:

  • name resolution
  • a partial type checker
  • preliminary integration with chalk for trait solving
  • a lexer shared with rustc

What does library-ification mean?

The goal is not just to create multiple crates, but to identify reusable components that could be combined to build new tools, new compilers, etc.

Some guidelines we came up with:

  • separating out the IR definition from the logic
  • trying to separate out infrastructure (e.g., the query-system “plumbing” or diagnostic printing) from the core logic
  • trying to draw boundaries that let us express tests as “input files”, ideally ones that are independent from the details of the implementation, which are expected to change
  • the crates should handle cases like invalid or incomplete ASTs

Note that it is ok for these libraries to depend on one another, but we would prefer to keep the dependencies rather shallow, and where possible to split out the IR definition from the logic.
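To make the “IR separate from logic” guideline concrete, here is a minimal sketch (the module names and types are hypothetical, not taken from rustc or rust-analyzer). The data definitions live in one module, and the passes that operate on them live in another; in a real split these would be separate crates:

```rust
// Hypothetical layout: `ir` holds only data definitions,
// `logic` holds functions that operate on that data.

mod ir {
    // Pure data: just the shape of the IR, no analysis logic.
    #[derive(Debug, Clone, PartialEq)]
    pub enum Expr {
        Lit(i64),
        Add(Box<Expr>, Box<Expr>),
    }
}

mod logic {
    use super::ir::Expr;

    // Logic lives apart from the IR definition, so another tool
    // could reuse `ir` with entirely different passes.
    pub fn eval(e: &Expr) -> i64 {
        match e {
            Expr::Lit(n) => *n,
            Expr::Add(a, b) => eval(a) + eval(b),
        }
    }
}

fn main() {
    use ir::Expr;
    let e = Expr::Add(Box::new(Expr::Lit(1)), Box::new(Expr::Lit(2)));
    assert_eq!(logic::eval(&e), 3);
    println!("{}", logic::eval(&e)); // prints 3
}
```

The point of the split is that the `ir` module has no dependency on `logic`, so a new tool can depend on the IR alone and bring its own passes.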


The distinction between unit and integration tests comes into relief here. We want to avoid tests that will bitrot too easily, and yet one of the advantages of separate libraries is the ability to write more targeted tests.

One way to help with this is to have our libraries support “something like LLVM’s model, where everything is a transformation from data to data”. This can make unit tests more stable. Chalk kind of works like this, and we are experimenting with taking advantage of this in rust-analyzer.
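As a hedged illustration of that data-to-data model (the function and types here are hypothetical, not chalk's or rust-analyzer's API), a pass written as a pure function from input data to output data can be unit-tested with plain input/output assertions, with no compiler context to set up:

```rust
// Hypothetical pure "pass": owned input data in, new data out.
// No hidden state means the test is just data in, data out,
// which tends to stay stable as internals change.

#[derive(Debug, PartialEq)]
pub struct Token {
    pub text: String,
}

/// A trivial lexer-like transformation: split input into word tokens.
pub fn lex(input: &str) -> Vec<Token> {
    input
        .split_whitespace()
        .map(|w| Token { text: w.to_string() })
        .collect()
}

fn main() {
    let tokens = lex("fn main");
    assert_eq!(tokens.len(), 2);
    assert_eq!(tokens[0].text, "fn");
    println!("{:?}", tokens);
}
```

Because the test only asserts on the output data, it keeps working even if the internals of `lex` are rewritten, which is the stability property the meeting was after.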

Some contested topics

Stable vs. nightly. rust-analyzer prefers everything to build on stable, to make contribution easier. However, nightly builds enable dogfooding, though we must be careful about drawing too many lessons from that, because compilers are an unusual use case (but sometimes that’s enough).

Mono vs. poly repo. Monorepos make it easier to make “atomic” commits, and they may make it easier to audit the effects of language changes. Polyrepos let us sidestep bors queue times. A lot depends ultimately on workflow. We didn’t definitively settle much here, except that there is no reason to rush to polyrepos: particularly when extracting libraries from rustc, we should stick with a monorepo setup until things mature.

Initial libraries

We briefly discussed three possible places to start:

  • lexer/parser
  • name resolution
  • traits + type constraint solving

We concluded that extracting name resolution would be nice, but we don’t have anyone to spearhead that project right now.

The meeting also included a fair amount of discussion about the specifics of what lexer/parser extraction might mean (starting around here), which eventually moved to a separate Zulip topic.