What it does

Checks for usage of items through absolute paths, like std::env::current_dir.

Why restrict this?

Many codebases have their own style when it comes to importing, but one that is seldom used is using absolute paths everywhere. This is generally considered unidiomatic, and you should add a use statement.

The default maximum segments (2) is pretty strict, you may want to increase this in clippy.toml.
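For example, a clippy.toml entry like the following (the value 3 is an arbitrary illustration) raises the threshold:

absolute-paths-max-segments = 3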

Note: One exception to this is code from macro expansion - this does not lint such cases, as using absolute paths is the proper way of referencing items in one.

Known issues

There are currently a few cases which are not caught by this lint:

  • Macro calls. e.g. path::to::macro!()
  • Derive macros. e.g. #[derive(path::to::macro)]
  • Attribute macros. e.g. #[path::to::macro]

Example

let x = std::f64::consts::PI;

Use any of the below instead, or anything else:

use std::f64;
use std::f64::consts;
use std::f64::consts::PI;
let x = f64::consts::PI;
let x = consts::PI;
let x = PI;
use std::f64::consts as f64_consts;
let x = f64_consts::PI;

Configuration

  • absolute-paths-allowed-crates: Which crates to allow absolute paths from

    (default: [])

  • absolute-paths-max-segments: The maximum number of segments a path can have before being linted, anything above this will be linted.

    (default: 2)

Applicability: Unspecified(?)
Added in: 1.73.0

What it does

Checks for comparisons where one side of the relation is either the minimum or maximum value for its type and warns if it involves a case that is always true or always false. Only integer and boolean types are checked.

Why is this bad?

An expression like min <= x may misleadingly imply that it is possible for x to be less than the minimum. Expressions like max < x are probably mistakes.

Known problems

For usize the size of the current compile target will be assumed (e.g., 64 bits on 64-bit systems). This means code that uses such a comparison to detect target pointer width will trigger this lint. One can use mem::size_of and compare its value, or conditional compilation attributes like #[cfg(target_pointer_width = "64")], instead.
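A minimal sketch of the conditional-compilation alternative mentioned above (assuming only 64-bit vs. non-64-bit targets need to be distinguished):

#[cfg(target_pointer_width = "64")]
const POINTER_WIDTH: u32 = 64;

#[cfg(not(target_pointer_width = "64"))]
const POINTER_WIDTH: u32 = 32; // hypothetical fallback; real code may also need to handle 16-bit targets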

Example

let vec: Vec<isize> = Vec::new();
if vec.len() <= 0 {}
if 100 > i32::MAX {}
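Possible rewrites (a sketch, not the lint's own suggestion) that state the intent directly:

let vec: Vec<isize> = Vec::new();
if vec.is_empty() {}
// `100 > i32::MAX` is always false, so that check can simply be removed.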
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Finds items imported through alloc when available through core.

Why restrict this?

Crates which have no_std compatibility and may optionally require alloc may wish to ensure types are imported from core to ensure disabling alloc does not cause the crate to fail to compile. This lint is also useful for crates migrating to become no_std compatible.

Known problems

The lint is only partially aware of the required MSRV for items that were originally in std but moved to core.

Example

use alloc::slice::from_ref;

Use instead:

use core::slice::from_ref;
Applicability: MachineApplicable(?)
Added in: 1.64.0

What it does

Checks for usage of the #[allow] attribute and suggests replacing it with #[expect] (see RFC 2383).

This lint only warns on outer attributes (#[allow]), as inner attributes (#![allow]) are usually used to enable or disable lints on a global scale.

Why is this bad?

#[expect] attributes suppress the lint emission, but emit a warning if the expectation is unfulfilled. This can be useful to be notified when the lint is no longer triggered.

Example

#[allow(unused_mut)]
fn foo() -> usize {
    let mut a = Vec::new();
    a.len()
}

Use instead:

#[expect(unused_mut)]
fn foo() -> usize {
    let mut a = Vec::new();
    a.len()
}

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.70.0

What it does

Checks for attributes that allow lints without a reason.

Why restrict this?

Justifying each allow helps readers understand the reasoning, and may allow removing allow attributes if their purpose is obsolete.

Example

#![allow(clippy::some_lint)]

Use instead:

#![allow(clippy::some_lint, reason = "False positive rust-lang/rust-clippy#1002020")]

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: Unspecified(?)
Added in: 1.61.0

What it does

Checks for ranges which almost include the entire range of letters from ‘a’ to ‘z’ or digits from ‘0’ to ‘9’, but don’t because they’re a half open range.

Why is this bad?

This ('a'..'z') is almost certainly a typo meant to include all letters.

Example

let _ = 'a'..'z';

Use instead:

let _ = 'a'..='z';

Past names

  • almost_complete_letter_range

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MaybeIncorrect(?)
Added in: 1.68.0

What it does

Checks for foo = bar; bar = foo sequences.

Why is this bad?

This looks like a failed attempt to swap.

Example

a = b;
b = a;

If swapping is intended, use swap() instead:

std::mem::swap(&mut a, &mut b);
Applicability: MaybeIncorrect(?)
Added in: pre 1.29.0

What it does

Checks for floating point literals that approximate constants which are defined in std::f32::consts or std::f64::consts, respectively, suggesting to use the predefined constant.

Why is this bad?

Usually, the definition in the standard library is more precise than what people come up with. If you find that your definition is actually more precise, please file a Rust issue.

Example

let x = 3.14;
let y = 1_f64 / x;

Use instead:

let x = std::f32::consts::PI;
let y = std::f64::consts::FRAC_1_PI;

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Confirms that items are sorted in source files as per configuration.

Why restrict this?

Keeping a consistent ordering throughout the codebase helps with working as a team, and possibly improves maintainability of the codebase. The idea is that by defining a consistent and enforceable rule for how source files are structured, less time will be wasted during reviews on a topic that is (under most circumstances) not relevant to the logic implemented in the code. Sometimes this will be referred to as “bikeshedding”.

Default Ordering and Configuration

As there is no generally applicable rule, and each project may have different requirements, the lint can be configured with high granularity. The configuration is split into two stages:

  1. Which item kinds should have an internal order enforced.
  2. Individual ordering rules per item kind.

The item kinds that can be linted are:

  • Module (with customized groupings, alphabetical within)
  • Trait (with customized order of associated items, alphabetical within)
  • Enum, Impl, Struct (purely alphabetical)

Module Item Order

Due to the large variation of items within modules, the ordering can be configured on a very granular level. Item kinds can be grouped together arbitrarily; items within groups will be ordered alphabetically. The following table shows the default groupings:

Group              Item Kinds
modules            "mod", "foreign_mod"
use                "use"
macros             "macro"
global_asm         "global_asm"
UPPER_SNAKE_CASE   "static", "const"
PascalCase         "ty_alias", "opaque_ty", "enum", "struct", "union", "trait", "trait_alias", "impl"
lower_snake_case   "fn"

All item kinds must be accounted for to create an enforceable linting rule set.

Known Problems

Performance Impact

Keep in mind that ordering source code alphabetically can lead to reduced performance in cases where the most commonly used enum variant is no longer the first entry, and can likewise defeat other layout-dependent optimizations that reduce branch misses, improve cache locality, and so on. Either don’t use this lint if that’s relevant, or disable the lint in modules or items specifically where it matters. Other solutions can be to use profile-guided optimization (PGO), post-link optimization (e.g. using BOLT for LLVM), or other advanced optimization methods. A good starting point to dig into optimization is cargo-pgo.

Lints on a Contains basis

The lint can be disabled only on a “contains” basis, but not per element within a “container”, e.g. the lint works per-module, per-struct, per-enum, etc. but not for “don’t order this particular enum variant”.

Module documentation

Module level rustdoc comments are not part of the resulting syntax tree and as such cannot be linted from within check_mod. Instead, the rustdoc::missing_documentation lint may be used.

Module Tests

This lint does not implement detection of module tests (or other feature dependent elements for that matter). To lint the location of mod tests, the lint items_after_test_module can be used instead.

Example

trait TraitUnordered {
    const A: bool;
    const C: bool;
    const B: bool;

    type SomeType;

    fn a();
    fn c();
    fn b();
}

Use instead:

trait TraitOrdered {
    const A: bool;
    const B: bool;
    const C: bool;

    type SomeType;

    fn a();
    fn b();
    fn c();
}

Configuration

  • module-item-order-groupings: The named groupings of different source item kinds within modules.

    (default: [["modules", ["extern_crate", "mod", "foreign_mod"]], ["use", ["use"]], ["macros", ["macro"]], ["global_asm", ["global_asm"]], ["UPPER_SNAKE_CASE", ["static", "const"]], ["PascalCase", ["ty_alias", "enum", "struct", "union", "trait", "trait_alias", "impl"]], ["lower_snake_case", ["fn"]]])

  • source-item-ordering: Which kind of elements should be ordered internally, possible values being enum, impl, module, struct, trait.

    (default: ["enum", "impl", "module", "struct", "trait"])

  • trait-assoc-item-kinds-order: The order of associated items in traits.

    (default: ["const", "type", "fn"])

Applicability: Unspecified(?)
Added in: 1.82.0

What it does

This lint warns when you use Arc with a type that does not implement Send or Sync.

Why is this bad?

Arc<T> is a thread-safe Rc<T> and guarantees that updates to the reference counter use atomic operations. To send an Arc<T> across thread boundaries and share ownership between multiple threads, T must be both Send and Sync, so either T should be made Send + Sync or an Rc should be used instead of an Arc.

Example

use std::cell::RefCell;
use std::sync::Arc;

fn main() {
    // This is fine, as `i32` implements `Send` and `Sync`.
    let a = Arc::new(42);

    // `RefCell` is `!Sync`, so either the `Arc` should be replaced with an `Rc`
    // or the `RefCell` replaced with something like a `RwLock`
    let b = Arc::new(RefCell::new(42));
}
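A hedged sketch of the alternatives mentioned above: keep single-threaded sharing with an Rc, or make the contents Sync (for example with a RwLock) so the Arc is meaningful:

use std::cell::RefCell;
use std::rc::Rc;
use std::sync::{Arc, RwLock};

fn main() {
    // Single-threaded sharing: `Rc` signals that the value never crosses threads.
    let _b = Rc::new(RefCell::new(42));

    // Thread-safe sharing: `RwLock<i32>` is `Send + Sync`, so the `Arc` makes sense.
    let _c = Arc::new(RwLock::new(42));
}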
Applicability: Unspecified(?)
Added in: 1.72.0

What it does

Checks any kind of arithmetic operation of any type.

Operators like +, -, * or << are usually capable of overflowing according to the Rust Reference, or can panic (/, %).

Known safe built-in types like Wrapping or Saturating, floats, operations in constant environments, allowed types and non-constant operations that won’t overflow are ignored.

Why restrict this?

For integers, overflow will trigger a panic in debug builds or wrap the result in release mode; division by zero will cause a panic in either mode. As a result, it is desirable to explicitly call checked, wrapping or saturating arithmetic methods.

Example

// `n` can be any number, including `i32::MAX`.
fn foo(n: i32) -> i32 {
    n + 1
}
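A possible rewrite (a sketch, not a suggestion emitted by the lint) using the explicit methods mentioned above:

// Saturate at `i32::MAX` instead of overflowing; `checked_add` or
// `wrapping_add` are alternatives when a different policy is wanted.
fn foo(n: i32) -> i32 {
    n.saturating_add(1)
}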

Third-party types can also overflow or present unwanted side-effects.

Example

use rust_decimal::Decimal;
let _n = Decimal::MAX + Decimal::MAX;

Past names

  • integer_arithmetic

Configuration

  • arithmetic-side-effects-allowed: Suppress checking of the passed type names in all types of operations.

If a specific operation is desired, consider using arithmetic_side_effects_allowed_binary or arithmetic_side_effects_allowed_unary instead.

Example

arithmetic-side-effects-allowed = ["SomeType", "AnotherType"]

Noteworthy

A type, say SomeType, listed in this configuration has the same behavior as ["SomeType", "*"] and ["*", "SomeType"] in arithmetic_side_effects_allowed_binary.

(default: [])

  • arithmetic-side-effects-allowed-binary: Suppress checking of the passed type pair names in binary operations like addition or multiplication.

Supports the “*” wildcard to indicate that a certain type won’t trigger the lint regardless of the involved counterpart. For example, ["SomeType", "*"] or ["*", "AnotherType"].

Pairs are asymmetric, which means that ["SomeType", "AnotherType"] is not the same as ["AnotherType", "SomeType"].

Example

arithmetic-side-effects-allowed-binary = [["SomeType" , "f32"], ["AnotherType", "*"]]

(default: [])

  • arithmetic-side-effects-allowed-unary: Suppress checking of the passed type names in unary operations like “negation” (-).

Example

arithmetic-side-effects-allowed-unary = ["SomeType", "AnotherType"]

(default: [])

Applicability: Unspecified(?)
Added in: 1.64.0

What it does

Checks for usage of as conversions.

Note that this lint is specialized in linting every single use of as regardless of whether good alternatives exist or not. If you want more precise lints for as, please consider using these separate lints: unnecessary_cast, cast_lossless/cast_possible_truncation/cast_possible_wrap/cast_precision_loss/cast_sign_loss, fn_to_numeric_cast(_with_truncation), char_lit_as_u8, ref_to_mut and ptr_as_ptr. There is a good explanation of why this lint should work in this way and how it is useful in this issue.

Why restrict this?

as conversions will perform many kinds of conversions, including silently lossy conversions and dangerous coercions. There are cases when it makes sense to use as, so the lint is Allow by default.

Example

let a: u32;
...
f(a as u16);

Use instead:

f(a.try_into()?);

// or

f(a.try_into().expect("Unexpected u16 overflow in f"));
Applicability: Unspecified(?)
Added in: 1.41.0

What it does

Checks for the usage of as *const _ or as *mut _ conversions using an inferred type.

Why restrict this?

The conversion might include a dangerous cast that might go undetected due to the type being inferred.

Example

fn as_usize<T>(t: &T) -> usize {
    // BUG: `t` is already a reference, so we will here
    // return a dangling pointer to a temporary value instead
    &t as *const _ as usize
}

Use instead:

fn as_usize<T>(t: &T) -> usize {
    t as *const T as usize
}
Applicability: MachineApplicable(?)
Added in: 1.81.0

What it does

Checks for the result of a &self-taking as_ptr being cast to a mutable pointer.

Why is this bad?

Since as_ptr takes a &self, the pointer won’t have write permissions unless interior mutability is used, making it unlikely that having it as a mutable pointer is correct.

Example

let mut vec = Vec::<u8>::with_capacity(1);
let ptr = vec.as_ptr() as *mut u8;
unsafe { ptr.write(4) }; // UNDEFINED BEHAVIOUR

Use instead:

let mut vec = Vec::<u8>::with_capacity(1);
let ptr = vec.as_mut_ptr();
unsafe { ptr.write(4) };
Applicability: MaybeIncorrect(?)
Added in: 1.66.0

What it does

Checks for the usage of as _ conversions using an inferred type.

Why restrict this?

The conversion might include lossy conversion or a dangerous cast that might go undetected due to the type being inferred.

The lint is allowed by default as using _ is less wordy than always specifying the type.

Example

fn foo(n: usize) {}
let n: u16 = 256;
foo(n as _);

Use instead:

fn foo(n: usize) {}
let n: u16 = 256;
foo(n as usize);
Applicability: MachineApplicable(?)
Added in: 1.63.0

What it does

Checks for assert!(true) and assert!(false) calls.

Why is this bad?

assert!(true) will be optimized out by the compiler, and assert!(false) should probably be replaced by a panic!() or unreachable!().

Example

assert!(false)
assert!(true)
const B: bool = false;
assert!(B)
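Possible replacements, per the note above (a sketch):

// `assert!(true)` can simply be removed.
// `assert!(false)` is clearer as an explicit panic:
panic!("this branch should not be reached");
// or, if the position can truly never be reached:
unreachable!();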
Applicability: Unspecified(?)
Added in: 1.34.0

What it does

Checks for assert!(r.is_ok()) or assert!(r.is_err()) calls.

Why restrict this?

This form of assertion does not show any of the information present in the Result other than which variant it isn’t.

Known problems

The suggested replacement decreases the readability of code and log output.

Example

assert!(r.is_ok());
assert!(r.is_err());

Use instead:

r.unwrap();
r.unwrap_err();
Applicability: MachineApplicable(?)
Added in: 1.64.0

What it does

Checks for a = a op b or a = b commutative_op a patterns.

Why is this bad?

These can be written as the shorter a op= b.

Known problems

While forbidden by the spec, OpAssign traits may have implementations that differ from the regular Op impl.

Example

let mut a = 5;
let b = 0;
// ...

a = a + b;

Use instead:

let mut a = 5;
let b = 0;
// ...

a += b;
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Nothing. This lint has been deprecated

Deprecation reason

Compound operators are harmless and linting on them is not in scope for clippy.

Applicability: Unspecified(?)
Deprecated in: 1.30.0

What it does

Checks for code like foo = bar.clone();

Why is this bad?

Custom Clone::clone_from() or ToOwned::clone_into implementations allow the objects to share resources and therefore avoid allocations.

Example

struct Thing;

impl Clone for Thing {
    fn clone(&self) -> Self { todo!() }
    fn clone_from(&mut self, other: &Self) { todo!() }
}

pub fn assign_to_ref(a: &mut Thing, b: Thing) {
    *a = b.clone();
}

Use instead:

struct Thing;

impl Clone for Thing {
    fn clone(&self) -> Self { todo!() }
    fn clone_from(&mut self, other: &Self) { todo!() }
}

pub fn assign_to_ref(a: &mut Thing, b: Thing) {
    a.clone_from(&b);
}

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: Unspecified(?)
Added in: 1.78.0

What it does

Checks for async blocks that yield values of types that can themselves be awaited.

Why is this bad?

An await is likely missing.

Example

async fn foo() {}

fn bar() {
  let x = async {
    foo()
  };
}

Use instead:

async fn foo() {}

fn bar() {
  let x = async {
    foo().await
  };
}
Applicability: MaybeIncorrect(?)
Added in: 1.48.0

What it does

Allows users to configure types which should not be held across await suspension points.

Why is this bad?

There are some types which are perfectly safe to use concurrently from a memory access perspective, but that will cause bugs at runtime if they are held in such a way.

Example

await-holding-invalid-types = [
  # You can specify a type name
  "CustomLockType",
  # You can (optionally) specify a reason
  { path = "OtherCustomLockType", reason = "Relies on a thread local" }
]
struct CustomLockType;
struct OtherCustomLockType;
async fn foo() {
  let _x = CustomLockType;
  let _y = OtherCustomLockType;
  baz().await; // Lint violation
}

Configuration

  • await-holding-invalid-types: The list of types which may not be held across an await point.

    (default: [])

Applicability: Unspecified(?)
Added in: 1.62.0

What it does

Checks for calls to await while holding a non-async-aware MutexGuard.

Why is this bad?

The Mutex types found in std::sync and parking_lot are not designed to operate in an async context across await points.

There are two potential solutions. One is to use an async-aware Mutex type. Many asynchronous foundation crates provide such a Mutex type. The other solution is to ensure the mutex is unlocked before calling await, either by introducing a scope or an explicit call to Drop::drop.

Known problems

Will report a false positive for explicitly dropped guards (#6446). A workaround for this is to wrap the .lock() call in a block instead of explicitly dropping the guard.

Example

async fn foo(x: &Mutex<u32>) {
  let mut guard = x.lock().unwrap();
  *guard += 1;
  baz().await;
}

async fn bar(x: &Mutex<u32>) {
  let mut guard = x.lock().unwrap();
  *guard += 1;
  drop(guard); // explicit drop
  baz().await;
}

Use instead:

async fn foo(x: &Mutex<u32>) {
  {
    let mut guard = x.lock().unwrap();
    *guard += 1;
  }
  baz().await;
}

async fn bar(x: &Mutex<u32>) {
  {
    let mut guard = x.lock().unwrap();
    *guard += 1;
  } // guard dropped here at end of scope
  baz().await;
}
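Alternatively, an async-aware mutex can be held across the await; a sketch assuming tokio as one such foundation crate:

async fn foo(x: &tokio::sync::Mutex<u32>) {
  let mut guard = x.lock().await;
  *guard += 1;
  baz().await; // holding `guard` across the await is fine with an async-aware mutex
}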
Applicability: Unspecified(?)
Added in: 1.45.0

What it does

Checks for calls to await while holding a RefCell, Ref, or RefMut.

Why is this bad?

RefCell refs only check for exclusive mutable access at runtime. Holding a RefCell ref across an await suspension point risks panics from a mutable ref shared while other refs are outstanding.

Known problems

Will report a false positive for explicitly dropped refs (#6353). A workaround for this is to wrap the .borrow[_mut]() call in a block instead of explicitly dropping the ref.

Example

async fn foo(x: &RefCell<u32>) {
  let mut y = x.borrow_mut();
  *y += 1;
  baz().await;
}

async fn bar(x: &RefCell<u32>) {
  let mut y = x.borrow_mut();
  *y += 1;
  drop(y); // explicit drop
  baz().await;
}

Use instead:

async fn foo(x: &RefCell<u32>) {
  {
     let mut y = x.borrow_mut();
     *y += 1;
  }
  baz().await;
}

async fn bar(x: &RefCell<u32>) {
  {
    let mut y = x.borrow_mut();
    *y += 1;
  } // y dropped here at end of scope
  baz().await;
}
Applicability: Unspecified(?)
Added in: 1.49.0

What it does

Checks for incompatible bit masks in comparisons.

The formula for detecting if an expression of the type _ <bit_op> m <cmp_op> c (where <bit_op> is one of {&, |} and <cmp_op> is one of {==, !=, <, <=, >, >=}) can be determined from the following table:

Comparison   Bit Op   Example       is always   Formula
== or !=     &        x & 2 == 3    false       c & m != c
<  or >=     &        x & 2 < 3     true        m < c
>  or <=     &        x & 1 > 1     false       m <= c
== or !=     |        x | 1 == 0    false       c | m != c
<  or >=     |        x | 1 < 1     false       m >= c
<= or >      |        x | 1 > 0     true        m > c

Why is this bad?

If the bits that the comparison cares about are always set to zero or one by the bit mask, the comparison is constant true or false (depending on mask, compared value, and operators).

So the code is actively misleading, and the only reason someone would write this intentionally is to win an underhanded Rust contest or create a test-case for this lint.

Example

if (x & 1 == 2) { }
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for the usage of the to_be_bytes method and/or the function from_be_bytes.

Why restrict this?

To ensure use of little-endian or the target’s endianness rather than big-endian.

Example

let _x = 2i32.to_be_bytes();
let _y = 2i64.to_be_bytes();
Applicability: Unspecified(?)
Added in: 1.72.0

What it does

Checks for usage of _.and_then(|x| Some(y)), _.and_then(|x| Ok(y)) or _.or_else(|x| Err(y)).

Why is this bad?

This can be written more concisely as _.map(|x| y) or _.map_err(|x| y).

Example

let _ = opt().and_then(|s| Some(s.len()));
let _ = res().and_then(|s| if s.len() == 42 { Ok(10) } else { Ok(20) });
let _ = res().or_else(|s| if s.len() == 42 { Err(10) } else { Err(20) });

The correct use would be:

let _ = opt().map(|s| s.len());
let _ = res().map(|s| if s.len() == 42 { 10 } else { 20 });
let _ = res().map_err(|s| if s.len() == 42 { 10 } else { 20 });

Past names

  • option_and_then_some
Applicability: MachineApplicable(?)
Added in: 1.45.0

What it does

Checks for warn/deny/forbid attributes targeting the whole clippy::restriction category.

Why is this bad?

Restriction lints sometimes are in contrast with other lints or even go against idiomatic rust. These lints should only be enabled on a lint-by-lint basis and with careful consideration.

Example

#![deny(clippy::restriction)]

Use instead:

#![deny(clippy::as_conversions)]
Applicability: Unspecified(?)
Added in: 1.47.0

What it does

Checks for if and match conditions that use blocks containing an expression, statements or conditions that use closures with blocks.

Why is this bad?

Style: using blocks in the condition makes it hard to read.

Examples

if { true } { /* ... */ }

if { let x = somefunc(); x } { /* ... */ }

match { let e = somefunc(); e } {
    // ...
}

Use instead:

if true { /* ... */ }

let res = { let x = somefunc(); x };
if res { /* ... */ }

let res = { let e = somefunc(); e };
match res {
    // ...
}

Past names

  • block_in_if_condition_expr
  • block_in_if_condition_stmt
  • blocks_in_if_conditions
Applicability: MachineApplicable(?)
Added in: 1.45.0

What it does

This lint warns about boolean comparisons in assert-like macros.

Why is this bad?

It is shorter to use the equivalent.

Example

assert_eq!("a".is_empty(), false);
assert_ne!("a".is_empty(), true);

Use instead:

assert!(!"a".is_empty());
Applicability: MachineApplicable(?)
Added in: 1.53.0

What it does

Checks for expressions of the form x == true, x != true and order comparisons such as x < true (or vice versa) and suggests using the variable directly.

Why is this bad?

Unnecessary code.

Example

if x == true {}
if y == false {}

use x directly:

if x {}
if !y {}
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Instead of using an if statement to convert a bool to an int, this lint suggests using a from() function or an as coercion.

Why is this bad?

Coercion or from() is another way to convert bool to a number. Both methods are guaranteed to return 1 for true, and 0 for false.

See https://doc.rust-lang.org/std/primitive.bool.html#impl-From%3Cbool%3E

Example

if condition {
    1_i64
} else {
    0
};

Use instead:

i64::from(condition);

or

condition as i64;
Applicability: MaybeIncorrect(?)
Added in: 1.65.0

What it does

Checks for the usage of &expr as *const T or &mut expr as *mut T, and suggest using &raw const or &raw mut instead.

Why is this bad?

This would improve readability and avoid creating a reference that points to an uninitialized value or unaligned place. Read the &raw explanation in the Reference for more information.

Example

let val = 1;
let p = &val as *const i32;

let mut val_mut = 1;
let p_mut = &mut val_mut as *mut i32;

Use instead:

let val = 1;
let p = &raw const val;

let mut val_mut = 1;
let p_mut = &raw mut val_mut;

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.60.0

What it does

Checks for &*(&T).

Why is this bad?

Dereferencing and then borrowing a reference value has no effect in most cases.

Known problems

False negative on such code:

let x = &12;
let addr_x = &x as *const _ as usize;
let addr_y = &&*x as *const _ as usize; // assert ok now, and lint triggered.
                                        // But if we fix it, assert will fail.
assert_ne!(addr_x, addr_y);

Example

let s = &String::new();

let a: &String = &* s;

Use instead:

let a: &String = s;
Applicability: MachineApplicable(?)
Added in: 1.63.0

What it does

Checks whether const items which are interior mutable (e.g., contain a Cell, Mutex, AtomicXxxx, etc.) have been borrowed directly.

Why is this bad?

Consts are copied everywhere they are referenced, i.e., every time you refer to the const a fresh instance of the Cell or Mutex or AtomicXxxx will be created, which defeats the whole purpose of using these types in the first place.

The const value should be stored inside a static item.

Known problems

When an enum has variants with interior mutability, use of its non interior mutable variants can generate false positives. See issue #3962

Types that have underlying or potential interior mutability trigger the lint whether the interior mutable field is used or not. See issues #5812 and #3825

Example

use std::sync::atomic::{AtomicUsize, Ordering::SeqCst};
const CONST_ATOM: AtomicUsize = AtomicUsize::new(12);

CONST_ATOM.store(6, SeqCst); // the content of the atomic is unchanged
assert_eq!(CONST_ATOM.load(SeqCst), 12); // because the CONST_ATOM in these lines are distinct

Use instead:

use std::sync::atomic::{AtomicUsize, Ordering::SeqCst};
const CONST_ATOM: AtomicUsize = AtomicUsize::new(12);

static STATIC_ATOM: AtomicUsize = CONST_ATOM;
STATIC_ATOM.store(9, SeqCst);
assert_eq!(STATIC_ATOM.load(SeqCst), 9); // use a `static` item to refer to the same instance

Configuration

  • ignore-interior-mutability: A list of paths to types that should be treated as if they do not contain interior mutability

    (default: ["bytes::Bytes"])

Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for usage of &Box<T> anywhere in the code. Check the Box documentation for more information.

Why is this bad?

A &Box<T> parameter requires the function caller to box T first before passing it to a function. Using &T defines a concrete type for the parameter and generalizes the function; a &Box<T> argument would also auto-deref to &T at the function call site.

Example

fn foo(bar: &Box<T>) { ... }

Better:

fn foo(bar: &T) { ... }
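A small sketch of the call-site point above: a &Box<T> argument coerces to &T, so callers that already have a box keep working:

struct Thing;

fn foo(_bar: &Thing) {}

let boxed = Box::new(Thing);
foo(&boxed); // `&Box<Thing>` auto-derefs to `&Thing` at the call site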
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for usage of Box<T> where T is a collection such as Vec anywhere in the code. Check the Box documentation for more information.

Why is this bad?

Collections already keep their contents in a separate area on the heap, so boxing them just adds another level of indirection without any benefit whatsoever.

Example

struct X {
    values: Box<Vec<Foo>>,
}

Better:

struct X {
    values: Vec<Foo>,
}

Past names

  • box_vec

Configuration

  • avoid-breaking-exported-api: Suppress lints whenever the suggested change would cause breakage for other crates.

    (default: true)

Applicability: Unspecified(?)
Added in: 1.57.0

What it does

Checks for Box::new(Default::default()), which can be written as Box::default().

Why is this bad?

Box::default() is equivalent and more concise.

Example

let x: Box<String> = Box::new(Default::default());

Use instead:

let x: Box<String> = Box::default();
Applicability: MachineApplicable(?)
Added in: 1.66.0

What it does

Checks for usage of Box<T> where an unboxed T would work fine.

Why is this bad?

This is an unnecessary allocation, and bad for performance. It is only necessary to allocate if you wish to move the box into something.

Example

fn foo(x: Box<u32>) {}

Use instead:

fn foo(x: u32) {}

Configuration

  • too-large-for-stack: The maximum size of objects (in bytes) that will be linted. Larger objects are ok on the heap

    (default: 200)

Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks if the if and else block contain shared code that can be moved out of the blocks.

Why is this bad?

Duplicate code is less maintainable.

Known problems

  • The lint doesn’t check if the moved expressions modify values that are being used in the if condition. The suggestion can in that case modify the behavior of the program. See rust-clippy#7452

Example

let foo = if … {
    println!("Hello World");
    13
} else {
    println!("Hello World");
    42
};

Use instead:

println!("Hello World");
let foo = if … {
    13
} else {
    42
};
Applicability: Unspecified(?)
Added in: 1.53.0

What it does

Warns if a generic shadows a built-in type.

Why is this bad?

This gives surprising type errors.

Example

impl<u32> Foo<u32> {
    fn impl_func(&self) -> u32 {
        42
    }
}
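A possible fix (a sketch) is to pick a generic parameter name that does not shadow a built-in type:

impl<T> Foo<T> {
    fn impl_func(&self) -> u32 {
        42
    }
}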
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for hard-to-read slices of byte characters that could be more easily expressed as a byte string.

Why is this bad?

Potentially makes the string harder to read.

Example

&[b'H', b'e', b'l', b'l', b'o'];

Use instead:

b"Hello"
Applicability: MachineApplicable(?)
Added in: 1.81.0

What it does

It checks for str::bytes().count() and suggests replacing it with str::len().

Why is this bad?

str::bytes().count() is longer and may not be as performant as using str::len().

Example

"hello".bytes().count();
String::from("hello").bytes().count();

Use instead:

"hello".len();
String::from("hello").len();
Applicability: MachineApplicable(?)
Added in: 1.62.0

What it does

Checks for the use of .bytes().nth().

Why is this bad?

.as_bytes().get() is more efficient and more readable.

Example

"Hello".bytes().nth(3);

Use instead:

"Hello".as_bytes().get(3);
Applicability: MachineApplicable(?)
Added in: 1.52.0

What it does

Checks to see if all common metadata is defined in Cargo.toml. See: https://rust-lang-nursery.github.io/api-guidelines/documentation.html#cargotoml-includes-all-common-metadata-c-metadata

Why is this bad?

It will be more difficult for users to discover the purpose of the crate, and key information related to it.

Example

[package]
name = "clippy"
version = "0.0.212"
repository = "https://github.com/rust-lang/rust-clippy"
readme = "README.md"
license = "MIT OR Apache-2.0"
keywords = ["clippy", "lint", "plugin"]
categories = ["development-tools", "development-tools::cargo-plugins"]

Should include a description field like:

[package]
name = "clippy"
version = "0.0.212"
description = "A bunch of helpful lints to avoid common pitfalls in Rust"
repository = "https://github.com/rust-lang/rust-clippy"
readme = "README.md"
license = "MIT OR Apache-2.0"
keywords = ["clippy", "lint", "plugin"]
categories = ["development-tools", "development-tools::cargo-plugins"]

Configuration

  • cargo-ignore-publish: For internal testing only, ignores the current publish settings in the Cargo manifest.

    (default: false)

Applicability: Unspecified(?)
Added in: 1.32.0

What it does

Checks for calls to ends_with with possible file extensions and suggests to use a case-insensitive approach instead.

Why is this bad?

ends_with is case-sensitive and may not detect files with a valid extension.

Example

fn is_rust_file(filename: &str) -> bool {
    filename.ends_with(".rs")
}

Use instead:

fn is_rust_file(filename: &str) -> bool {
    let filename = std::path::Path::new(filename);
    filename.extension()
        .map_or(false, |ext| ext.eq_ignore_ascii_case("rs"))
}
Applicability: MaybeIncorrect(?)
Added in: 1.51.0

What it does

Checks for usage of the abs() method that casts the result to unsigned.

Why is this bad?

The unsigned_abs() method avoids panic when called on the MIN value.

Example

let x: i32 = -42;
let y: u32 = x.abs() as u32;

Use instead:

let x: i32 = -42;
let y: u32 = x.unsigned_abs();

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.62.0

What it does

Checks for casts from an enum tuple constructor to an integer.

Why is this bad?

The cast is easily confused with casting a C-like enum value to an integer.

Example

enum E { X(i32) };
let _ = E::X as usize;
Applicability: Unspecified(?)
Added in: 1.61.0

What it does

Checks for casts from an enum type to an integral type that will definitely truncate the value.

Why is this bad?

The resulting integral value will not match the value of the variant it came from.

Example

enum E { X = 256 };
let _ = E::X as u8;
Applicability: Unspecified(?)
Added in: 1.61.0

What it does

Checks for casts between numeric types that can be replaced by safe conversion functions.

Why is this bad?

Rust’s as keyword will perform many kinds of conversions, including silently lossy conversions. Conversion functions such as i32::from will only perform lossless conversions. Using the conversion functions prevents conversions from becoming silently lossy if the input types ever change, and makes it clear for people reading the code that the conversion is lossless.

Example

fn as_u64(x: u8) -> u64 {
    x as u64
}

Using ::from would look like this:

fn as_u64(x: u8) -> u64 {
    u64::from(x)
}
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for a known NaN float being cast to an integer

Why is this bad?

NaNs are cast into zero, so one could simply use this and make the code more readable. The lint could also hint at a programmer error.

Example

let _ = (0.0_f32 / 0.0) as u64;

Use instead:

let _ = 0_u64;
Applicability: Unspecified(?)
Added in: 1.66.0

What it does

Checks for casts between numeric types that may truncate large values. This is expected behavior, so the cast is Allow by default. It suggests that the user either explicitly ignore the lint, or use try_from() and handle the truncation, default, or panic explicitly.

Why is this bad?

In some problem domains, it is good practice to avoid truncation. This lint can be activated to help assess where additional checks could be beneficial.

Example

fn as_u8(x: u64) -> u8 {
    x as u8
}

Use instead:

fn as_u8(x: u64) -> u8 {
    if let Ok(x) = u8::try_from(x) {
        x
    } else {
        todo!();
    }
}
// Or
#[allow(clippy::cast_possible_truncation)]
fn as_u16(x: u64) -> u16 {
    x as u16
}
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for casts from an unsigned type to a signed type of the same size, or possibly smaller due to target-dependent integers. Performing such a cast is a no-op for the compiler (that is, nothing is changed at the bit level), and the binary representation of the value is reinterpreted. This can cause wrapping if the value is too big for the target signed type. However, the cast works as defined, so this lint is Allow by default.

Why is this bad?

While such a cast is not bad in itself, the results can be surprising when this is not the intended behavior:

Example

u32::MAX as i32; // will yield a value of `-1`
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for casts from any numeric type to a float type where the receiving type cannot store all values from the original type without rounding errors. This possible rounding is to be expected, so this lint is Allow by default.

Basically, this warns on casting any integer with 32 or more bits to f32 or any 64-bit integer to f64.

Why is this bad?

It’s not bad at all. But in some applications it can be helpful to know where precision loss can take place. This lint can help find those places in the code.

Example

let x = u64::MAX;
x as f64;
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for casts, using as or pointer::cast, from a less strictly aligned pointer to a more strictly aligned pointer.

Why is this bad?

Dereferencing the resulting pointer may be undefined behavior.

Known problems

Using std::ptr::read_unaligned and std::ptr::write_unaligned or similar on the resulting pointer is fine. The lint is over-zealous: casts with manual alignment checks or casts like u64 -> u8 -> u16 can be fine. Miri is able to do a more in-depth analysis.

Example

let _ = (&1u8 as *const u8) as *const u16;
let _ = (&mut 1u8 as *mut u8) as *mut u16;

(&1u8 as *const u8).cast::<u16>();
(&mut 1u8 as *mut u8).cast::<u16>();
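A sketch of the read_unaligned escape hatch mentioned in the known problems (hypothetical buffer; the read stays in bounds even though the pointer may be under-aligned):

let bytes = [1u8, 2, 3, 4];
// The cast may produce an under-aligned `*const u16`; reading through it with
// `read_unaligned` is still sound because no alignment is required and the
// two bytes read stay inside `bytes`.
let p = bytes[1..].as_ptr() as *const u16;
let _value = unsafe { p.read_unaligned() };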
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for casts from a signed to an unsigned numeric type. In this case, negative values wrap around to large positive values, which can be quite surprising in practice. However, since the cast works as defined, this lint is Allow by default.

Why is this bad?

Possibly surprising results. You can activate this lint as a one-time check to see where numeric wrapping can arise.

Example

let y: i8 = -1;
y as u128; // will return 340282366920938463463374607431768211455 (u128::MAX)
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for as casts between raw pointers to slices with differently sized elements.

Why is this bad?

The produced raw pointer to a slice does not update its length metadata. The produced pointer will point to a different number of bytes than the original pointer because the length metadata of a raw slice pointer is in elements rather than bytes. Producing a slice reference from the raw pointer will either create a slice with less data (which can be surprising) or create a slice with more data and cause Undefined Behavior.

Example

// Missing data

let a = [1_i32, 2, 3, 4];
let p = &a as *const [i32] as *const [u8];
unsafe {
    println!("{:?}", &*p);
}

// Undefined Behavior (note: also potential alignment issues)

let a = [1_u8, 2, 3, 4];
let p = &a as *const [u8] as *const [u32];
unsafe {
    println!("{:?}", &*p);
}

Instead, use ptr::slice_from_raw_parts to construct a slice from a data pointer and the correct length:

let a = [1_i32, 2, 3, 4];
let old_ptr = &a as *const [i32];
// The data pointer is cast to a pointer to the target `u8` not `[u8]`
// The length comes from the known length of 4 i32s times the 4 bytes per i32
let new_ptr = core::ptr::slice_from_raw_parts(old_ptr as *const u8, 16);
unsafe {
    println!("{:?}", &*new_ptr);
}
Applicability: HasPlaceholders(?)
Added in: 1.61.0

What it does

Checks for a raw slice being cast to a slice pointer

Why is this bad?

This can result in multiple &mut references to the same location when only a pointer is required. ptr::slice_from_raw_parts is a safe alternative that doesn’t require the same safety requirements to be upheld.

Example

let _: *const [u8] = std::slice::from_raw_parts(ptr, len) as *const _;
let _: *mut [u8] = std::slice::from_raw_parts_mut(ptr, len) as *mut _;

Use instead:

let _: *const [u8] = std::ptr::slice_from_raw_parts(ptr, len);
let _: *mut [u8] = std::ptr::slice_from_raw_parts_mut(ptr, len);
Applicability: MachineApplicable(?)
Added in: 1.65.0

What it does

Checks for usage of cfg that excludes code from test builds. (i.e., #[cfg(not(test))])

Why is this bad?

This may give the false impression that a codebase has 100% coverage, yet actually has untested code. Enabling this lint also guards against excessive mocking, which is an anti-pattern.

Example

#[cfg(not(test))]
important_check(); // I'm not actually tested, but not including me will falsely increase coverage!

Use instead:

important_check();
Applicability: Unspecified(?)
Added in: 1.81.0

What it does

Checks for expressions where a character literal is cast to u8 and suggests using a byte literal instead.

Why is this bad?

In general, casting values to smaller types is error-prone and should be avoided where possible. In the particular case of converting a character literal to u8, it is easy to avoid by just using a byte literal instead. As an added bonus, b'a' is also slightly shorter than 'a' as u8.

Example

'x' as u8

A better version, using the byte literal:

b'x'
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for usage of _.chars().last() or _.chars().next_back() on a str to check if it ends with a given char.

Why is this bad?

Readability, this can be written more concisely as _.ends_with(_).

Example

name.chars().last() == Some('_') || name.chars().next_back() == Some('-');

Use instead:

name.ends_with('_') || name.ends_with('-');
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for usage of .chars().next() on a str to check if it starts with a given char.

Why is this bad?

Readability, this can be written more concisely as _.starts_with(_).

Example

let name = "foo";
if name.chars().next() == Some('_') {};

Use instead:

let name = "foo";
if name.starts_with('_') {};
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for explicit bounds checking when casting.

Why is this bad?

Reduces the readability of statements and is error-prone.

Example

foo <= i32::MAX as u32;

Use instead:

i32::try_from(foo).is_ok();

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.37.0

What it does

Checks for usage of .drain(..) for the sole purpose of clearing a container.

Why is this bad?

This creates an unnecessary iterator that is dropped immediately.

Calling .clear() also makes the intent clearer.

Example

let mut v = vec![1, 2, 3];
v.drain(..);

Use instead:

let mut v = vec![1, 2, 3];
v.clear();
Applicability: MachineApplicable(?)
Added in: 1.70.0

What it does

Checks for usage of .clone() on a Copy type.

Why is this bad?

The only reason Copy types implement Clone is for generics, not for using the clone method on a concrete type.

Example

42u64.clone();
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for usage of .clone() on a ref-counted pointer, (Rc, Arc, rc::Weak, or sync::Weak), and suggests calling Clone via unified function syntax instead (e.g., Rc::clone(foo)).

Why restrict this?

Calling .clone() on an Rc, Arc, or Weak can obscure the fact that only the pointer is being cloned, not the underlying data.

Example

let x = Rc::new(1);

x.clone();

Use instead:

Rc::clone(&x);
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for usage of cloned() on an Iterator or Option where copied() could be used instead.

Why is this bad?

copied() is better because it guarantees that the type being cloned implements Copy.

Example

[1, 2, 3].iter().cloned();

Use instead:

[1, 2, 3].iter().copied();

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.53.0

What it does

This lint checks for equality comparisons with ptr::null.

Why is this bad?

It’s easier and more readable to use the inherent .is_null() method instead

Example

use std::ptr;

if x == ptr::null() {
    // ..
}

Use instead:

if x.is_null() {
    // ..
}
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for conversions to owned values just for the sake of a comparison.

Why is this bad?

The comparison can operate on a reference, so creating an owned value effectively throws it away directly afterwards, which is needlessly consuming code and heap space.

Example

if x.to_owned() == y {}

Use instead:

if x == y {}
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for methods with high cognitive complexity.

Why is this bad?

Methods of high cognitive complexity tend to be hard to both read and maintain. Also LLVM will tend to optimize small methods better.

Known problems

Sometimes it’s hard to find a way to reduce the complexity.

Example

You’ll see it when you get the warning.

Past names

  • cyclomatic_complexity

Configuration

  • cognitive-complexity-threshold: The maximum cognitive complexity a function can have

    (default: 25)

Applicability: Unspecified(?)
Added in: 1.35.0

What it does

Checks for collapsible else { if ... } expressions that can be collapsed to else if ....

Why is this bad?

Each if-statement adds one level of nesting, which makes code look more complex than it really is.

Example


if x {
    …
} else {
    if y {
        …
    }
}

Should be written:

if x {
    …
} else if y {
    …
}
Applicability: MachineApplicable(?)
Added in: 1.51.0

What it does

Checks for nested if statements which can be collapsed by &&-combining their conditions.

Why is this bad?

Each if-statement adds one level of nesting, which makes code look more complex than it really is.

Example

if x {
    if y {
        // …
    }
}

Use instead:

if x && y {
    // …
}
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Finds nested match or if let expressions where the patterns may be “collapsed” together without adding any branches.

Note that this lint is not intended to find all cases where nested match patterns can be merged, but only cases where merging would most likely make the code more readable.

Why is this bad?

It is unnecessarily verbose and complex.

Example

fn func(opt: Option<Result<u64, String>>) {
    let n = match opt {
        Some(n) => match n {
            Ok(n) => n,
            _ => return,
        }
        None => return,
    };
}

Use instead:

fn func(opt: Option<Result<u64, String>>) {
    let n = match opt {
        Some(Ok(n)) => n,
        _ => return,
    };
}

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: Unspecified(?)
Added in: 1.50.0

What it does

Checks for consecutive calls to str::replace (2 or more) that can be collapsed into a single call.

Why is this bad?

Consecutive str::replace calls scan the string multiple times with repetitive code.

Example

let hello = "hesuo worpd"
    .replace('s', "l")
    .replace("u", "l")
    .replace('p', "l");

Use instead:

let hello = "hesuo worpd".replace(['s', 'u', 'p'], "l");

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.65.0

What it does

Checks for collections that are never queried.

Why is this bad?

Putting effort into constructing a collection but then never querying it might indicate that the author forgot to do whatever they intended to do with the collection. Example: Clone a vector, sort it for iteration, but then mistakenly iterate the original vector instead.

Example

let mut sorted_samples = samples.clone();
sorted_samples.sort();
for sample in &samples { // Oops, meant to use `sorted_samples`.
    println!("{sample}");
}

Use instead:

let mut sorted_samples = samples.clone();
sorted_samples.sort();
for sample in &sorted_samples {
    println!("{sample}");
}
Applicability: Unspecified(?)
Added in: 1.70.0

What it does

Checks comparison chains written with if that can be rewritten with match and cmp.

Why is this bad?

if is not guaranteed to be exhaustive and conditionals can get repetitive

Known problems

The match statement may be slower due to the compiler not inlining the call to cmp. See issue #5354

Example

fn f(x: u8, y: u8) {
    if x > y {
        a()
    } else if x < y {
        b()
    } else {
        c()
    }
}

Use instead:

use std::cmp::Ordering;
fn f(x: u8, y: u8) {
     match x.cmp(&y) {
         Ordering::Greater => a(),
         Ordering::Less => b(),
         Ordering::Equal => c()
     }
}
Applicability: HasPlaceholders(?)
Added in: 1.40.0

What it does

Checks for comparing to an empty slice such as "" or [], and suggests using .is_empty() where applicable.

Why is this bad?

Some structures can answer .is_empty() much faster than checking for equality. So it is good to get into the habit of using .is_empty(), and having it is cheap. Besides, it makes the intent clearer than a manual comparison in some contexts.

Example

if s == "" {
    ..
}

if arr == [] {
    ..
}

Use instead:

if s.is_empty() {
    ..
}

if arr.is_empty() {
    ..
}
Applicability: MachineApplicable(?)
Added in: 1.49.0

What it does

It identifies calls to .is_empty() on constant values.

Why is this bad?

String literals and constant values are known at compile time. Checking if they are empty will always return the same value. This might not be the intention of the expression.

Example

let value = "";
if value.is_empty() {
    println!("the string is empty");
}

Use instead:

println!("the string is empty");
Applicability: Unspecified(?)
Added in: 1.79.0

What it does

Checks for types that implement Copy as well as Iterator.

Why is this bad?

Implicit copies can be confusing when working with iterator combinators.

Example

#[derive(Copy, Clone)]
struct Countdown(u8);

impl Iterator for Countdown {
    // ...
}

let a: Vec<_> = my_iterator.take(1).collect();
let b: Vec<_> = my_iterator.collect();
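A hedged, self-contained sketch of the confusion (the Countdown iterator body here is filled in as an assumption): adaptors like take operate on an implicit copy, so the original iterator is never advanced; borrowing it mutably makes the intent explicit:

#[derive(Copy, Clone)]
struct Countdown(u8);

impl Iterator for Countdown {
    type Item = u8;
    fn next(&mut self) -> Option<u8> {
        if self.0 == 0 { return None; }
        let n = self.0;
        self.0 -= 1;
        Some(n)
    }
}

let it = Countdown(3);
let a: Vec<_> = it.take(1).collect(); // operates on a copy of `it`
let b: Vec<_> = it.collect();         // `it` was never advanced: yields 3, 2, 1
assert_eq!(a, vec![3]);
assert_eq!(b, vec![3, 2, 1]);

let mut it = Countdown(3);
let a: Vec<_> = (&mut it).take(1).collect(); // advances the original
let b: Vec<_> = it.collect();                // yields 2, 1
assert_eq!(a, vec![3]);
assert_eq!(b, vec![2, 1]);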
Applicability: Unspecified(?)
Added in: 1.30.0

What it does

Checks for usage of crate as opposed to $crate in a macro definition.

Why is this bad?

crate refers to the macro call’s crate, whereas $crate refers to the macro definition’s crate. Rarely is the former intended. See: https://doc.rust-lang.org/reference/macros-by-example.html#hygiene

Example

#[macro_export]
macro_rules! print_message {
    () => {
        println!("{}", crate::MESSAGE);
    };
}
pub const MESSAGE: &str = "Hello!";

Use instead:

#[macro_export]
macro_rules! print_message {
    () => {
        println!("{}", $crate::MESSAGE);
    };
}
pub const MESSAGE: &str = "Hello!";

Note that if the use of crate is intentional, an allow attribute can be applied to the macro definition, e.g.:

#[allow(clippy::crate_in_macro_def)]
macro_rules! ok { ... crate::foo ... }
Applicability: MachineApplicable(?)
Added in: 1.62.0

What it does

Checks usage of std::fs::create_dir and suggests using std::fs::create_dir_all instead.

Why restrict this?

Sometimes std::fs::create_dir is mistakenly chosen over std::fs::create_dir_all, resulting in failure when more than one directory needs to be created or when the directory already exists. Crates which never need to specifically create a single directory may wish to prevent this mistake.

Example

std::fs::create_dir("foo");

Use instead:

std::fs::create_dir_all("foo");
Applicability: MaybeIncorrect(?)
Added in: 1.48.0

What it does

Checks for transmutes between a type T and *T.

Why is this bad?

It’s easy to mistakenly transmute between a type and a pointer to that type.

Example

core::intrinsics::transmute(t) // where the result type is the same as
                               // `*t` or `&t`'s
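A concrete sketch of the mistake (hypothetical values): transmuting a raw pointer into its pointee type, where a dereference was almost certainly intended:

let x: usize = 1;
let p: *const usize = &x;
// Reinterprets the pointer's address bits as a `usize` value; almost certainly
// not what was meant. `unsafe { *p }` would read the pointed-to value instead.
let n: usize = unsafe { std::mem::transmute::<*const usize, usize>(p) };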
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for usage of the dbg! macro.

Why restrict this?

The dbg! macro is intended as a debugging tool. It should not be present in released software or committed to a version control system.

Example

dbg!(true)

Use instead:

true

Configuration

  • allow-dbg-in-tests: Whether dbg! should be allowed in test functions or #[cfg(test)]

    (default: false)

Applicability: MachineApplicable(?)
Added in: 1.34.0

What it does

Checks for function/method calls with a mutable parameter in debug_assert!, debug_assert_eq! and debug_assert_ne! macros.

Why is this bad?

In release builds debug_assert! macros are optimized out by the compiler. Therefore mutating something in a debug_assert! macro results in different behavior between a release and debug build.

Example

debug_assert_eq!(vec![3].pop(), Some(3));

// or

debug_assert!(takes_a_mut_parameter(&mut x));
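A possible rewrite (a sketch) that performs the mutation outside the assertion, so behavior matches between debug and release builds:

let mut v = vec![3];
let popped = v.pop(); // mutation happens unconditionally
debug_assert_eq!(popped, Some(3));

let changed = takes_a_mut_parameter(&mut x);
debug_assert!(changed);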
Applicability: Unspecified(?)
Added in: 1.40.0

What it does

Warns if there is a better representation for a numeric literal.

Why restrict this?

Especially for big powers of 2, a hexadecimal representation is usually more readable than a decimal representation.

Example

`255` => `0xFF`
`65_535` => `0xFFFF`
`4_042_322_160` => `0xF0F0_F0F0`

Configuration

  • literal-representation-threshold: The lower bound for linting decimal literals

    (default: 16384)

Applicability: MaybeIncorrect(?)
Added in: pre 1.29.0

What it does

Checks for declarations of const items which are interior mutable (e.g., contain a Cell, Mutex, AtomicXxxx, etc.).

Why is this bad?

Consts are copied everywhere they are referenced, i.e., every time you refer to the const a fresh instance of the Cell or Mutex or AtomicXxxx will be created, which defeats the whole purpose of using these types in the first place.

The const should better be replaced by a static item if a global variable is wanted, or replaced by a const fn if a constructor is wanted.

Known problems

A “non-constant” const item is a legacy way to supply an initialized value to downstream static items (e.g., the std::sync::ONCE_INIT constant). In this case the use of const is legit, and this lint should be suppressed.

Even though the lint avoids triggering on a constant whose type has enum variants with interior mutability when its value uses only the non-interior-mutable variants (see #3962 and #3825 for examples), it complains about associated constants without default values based only on their types, which might not be preferable. There are other cases involving enums and associated constants that the lint cannot handle.

Types that have underlying or potential interior mutability trigger the lint whether the interior mutable field is used or not. See issue #5812

Example

use std::sync::atomic::{AtomicUsize, Ordering::SeqCst};

const CONST_ATOM: AtomicUsize = AtomicUsize::new(12);
CONST_ATOM.store(6, SeqCst); // the content of the atomic is unchanged
assert_eq!(CONST_ATOM.load(SeqCst), 12); // because the CONST_ATOM in these lines are distinct

Use instead:

static STATIC_ATOM: AtomicUsize = AtomicUsize::new(15);
STATIC_ATOM.store(9, SeqCst);
assert_eq!(STATIC_ATOM.load(SeqCst), 9); // use a `static` item to refer to the same instance

Configuration

  • ignore-interior-mutability: A list of paths to types that should be treated as if they do not contain interior mutability

    (default: ["bytes::Bytes"])

Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for construction of unit structs using default().

Why is this bad?

This adds code complexity and an unnecessary function call.

Example

#[derive(Default)]
struct S<T> {
    _marker: PhantomData<T>
}

let _: S<i32> = S {
    _marker: PhantomData::default()
};

Use instead:

struct S<T> {
    _marker: PhantomData<T>
}

let _: S<i32> = S {
    _marker: PhantomData
};
Applicability: MachineApplicable(?)
Added in: 1.71.0

What it does

It checks for std::iter::Empty::default() and suggests replacing it with std::iter::empty().

Why is this bad?

std::iter::empty() is the more idiomatic way.

Example

let _ = std::iter::Empty::<usize>::default();
let iter: std::iter::Empty<usize> = std::iter::Empty::default();

Use instead:

let _ = std::iter::empty::<usize>();
let iter: std::iter::Empty<usize> = std::iter::empty();
Applicability: MachineApplicable(?)
Added in: 1.64.0

What it does

Checks for usage of unconstrained numeric literals which may cause default numeric fallback in type inference.

Default numeric fallback means that if numeric types have not yet been bound to concrete types at the end of type inference, then integer types are bound to i32 and, similarly, floating-point types are bound to f64.

See RFC0212 for more information about the fallback.

Why restrict this?

To ensure that every numeric type is chosen explicitly rather than implicitly.

Known problems

This lint is implemented using a custom algorithm independent of rustc’s inference, which results in many false positives and false negatives.

Example

let i = 10;
let f = 1.23;

Use instead:

let i = 10_i32;
let f = 1.23_f64;
Applicability: MaybeIncorrect(?)
Added in: 1.52.0

What it does

Checks for literal calls to Default::default().

Why is this bad?

It’s easier for the reader if the name of the type is used, rather than the generic Default.

Example

let s: String = Default::default();

Use instead:

let s = String::default();
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Displays a warning when a union is declared with the default representation (without a #[repr(C)] attribute).

Why restrict this?

Unions in Rust have unspecified layout by default, despite many people thinking that they lay out each field at the start of the union (like C does). That is, there are no guarantees about the offset of the fields for unions with multiple non-ZST fields without an explicitly specified layout. These cases may lead to undefined behavior in unsafe blocks.

Example

union Foo {
    a: i32,
    b: u32,
}

fn main() {
    let _x: u32 = unsafe {
        Foo { a: 0_i32 }.b // Undefined behavior: `b` is allowed to be padding
    };
}

Use instead:

#[repr(C)]
union Foo {
    a: i32,
    b: u32,
}

fn main() {
    let _x: u32 = unsafe {
        Foo { a: 0_i32 }.b // Now defined behavior, this is just an i32 -> u32 transmute
    };
}
Applicability: Unspecified(?)
Added in: 1.60.0

What it does

Checks for #[cfg_attr(rustfmt, rustfmt_skip)] and suggests to replace it with #[rustfmt::skip].

Why is this bad?

Since tool_attributes (rust-lang/rust#44690) are stable now, they should be used instead of the old cfg_attr(rustfmt) attributes.

Known problems

This lint doesn’t detect crate level inner attributes, because they get processed before the PreExpansionPass lints get executed. See #3123

Example

#[cfg_attr(rustfmt, rustfmt_skip)]
fn main() { }

Use instead:

#[rustfmt::skip]
fn main() { }

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.32.0

What it does

Checks for #[cfg_attr(feature = "cargo-clippy", ...)] and for #[cfg(feature = "cargo-clippy")] and suggests to replace it with #[cfg_attr(clippy, ...)] or #[cfg(clippy)].

Why is this bad?

This feature has been deprecated for years and shouldn’t be used anymore.

Example

#[cfg(feature = "cargo-clippy")]
struct Bar;

Use instead:

#[cfg(clippy)]
struct Bar;
Applicability: MachineApplicable(?)
Added in: 1.78.0

What it does

Checks for #[deprecated] annotations with a since field that is not a valid semantic version. Also allows “TBD” to signal future deprecation.

Why is this bad?

For checking the version of the deprecation, it must be a valid semver. Failing that, the contained information is useless.

Example

#[deprecated(since = "forever")]
fn something_else() { /* ... */ }
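
Use instead (a sketch; the version number and the use of "TBD" are illustrative alternatives):

#[deprecated(since = "1.2.3")]
fn something_else() { /* ... */ }

// or, to signal a future deprecation

#[deprecated(since = "TBD")]
fn something_else() { /* ... */ }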
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for usage of *& and *&mut in expressions.

Why is this bad?

Immediately dereferencing a reference is a no-op and makes the code less clear.

Known problems

Multiple dereference/addrof pairs are not handled so the suggested fix for x = **&&y is x = *&y, which is still incorrect.

Example

let a = f(*&mut b);
let c = *&d;

Use instead:

let a = f(b);
let c = d;
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for slicing expressions which are equivalent to dereferencing the value.

Why restrict this?

Some people may prefer to dereference rather than slice.

Example

let vec = vec![1, 2, 3];
let slice = &vec[..];

Use instead:

let vec = vec![1, 2, 3];
let slice = &*vec;
Applicability: MachineApplicable(?)
Added in: 1.61.0

What it does

Detects manual std::default::Default implementations that are identical to a derived implementation.

Why is this bad?

It is less concise.

Example

struct Foo {
    bar: bool
}

impl Default for Foo {
    fn default() -> Self {
        Self {
            bar: false
        }
    }
}

Use instead:

#[derive(Default)]
struct Foo {
    bar: bool
}

Known problems

Derive macros sometimes use incorrect bounds in generic types and the user-defined impl may be more generalized or specialized than what derive will produce. This lint can’t detect whether the manual impl has exactly equal bounds, and therefore it is disabled for types with generic parameters.

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.57.0

What it does

Lints against manual PartialOrd and Ord implementations for types with a derived Ord or PartialOrd implementation.

Why is this bad?

The implementation of these traits must agree (for example for use with sort) so it’s probably a bad idea to use a default-generated Ord implementation with an explicitly defined PartialOrd. In particular, the following must hold for any type implementing Ord:

k1.cmp(&k2) == k1.partial_cmp(&k2).unwrap()

Example

#[derive(Ord, PartialEq, Eq)]
struct Foo;

impl PartialOrd for Foo {
    ...
}

Use instead:

#[derive(PartialEq, Eq)]
struct Foo;

impl PartialOrd for Foo {
    fn partial_cmp(&self, other: &Foo) -> Option<Ordering> {
       Some(self.cmp(other))
    }
}

impl Ord for Foo {
    ...
}

or, if you don’t need a custom ordering:

#[derive(Ord, PartialOrd, PartialEq, Eq)]
struct Foo;
Applicability: Unspecified(?)
Added in: 1.47.0

What it does

Checks for types that derive PartialEq and could implement Eq.

Why is this bad?

If a type T derives PartialEq and all of its members implement Eq, then T can always implement Eq. Implementing Eq allows T to be used in APIs that require Eq types. It also allows structs containing T to derive Eq themselves.

Example

#[derive(PartialEq)]
struct Foo {
    i_am_eq: i32,
    i_am_eq_too: Vec<String>,
}

Use instead:

#[derive(PartialEq, Eq)]
struct Foo {
    i_am_eq: i32,
    i_am_eq_too: Vec<String>,
}
Applicability: MachineApplicable(?)
Added in: 1.63.0

What it does

Lints against manual PartialEq implementations for types with a derived Hash implementation.

Why is this bad?

The implementation of these traits must agree (for example for use with HashMap) so it’s probably a bad idea to use a default-generated Hash implementation with an explicitly defined PartialEq. In particular, the following must hold for any type:

k1 == k2 ⇒ hash(k1) == hash(k2)

Example

#[derive(Hash)]
struct Foo;

impl PartialEq for Foo {
    ...
}
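
One way to keep the implementations in agreement (a sketch, not the only possible fix) is to derive both traits:

#[derive(Hash, PartialEq)]
struct Foo;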

Past names

  • derive_hash_xor_eq
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Denies the configured macros in clippy.toml

Note: Even though this lint is warn-by-default, it will only trigger if macros are defined in the clippy.toml file.

Why is this bad?

Some macros are undesirable in certain contexts, and it’s beneficial to lint for them as needed.

Example

An example clippy.toml configuration:

disallowed-macros = [
    # Can use a string as the path of the disallowed macro.
    "std::print",
    # Can also use an inline table with a `path` key.
    { path = "std::println" },
    # When using an inline table, can add a `reason` for why the macro
    # is disallowed.
    { path = "serde::Serialize", reason = "no serializing" },
]
use serde::Serialize;

println!("warns");

// The diagnostic will contain the message "no serializing"
#[derive(Serialize)]
struct Data {
    name: String,
    value: usize,
}

Configuration

  • disallowed-macros: The list of disallowed macros, written as fully qualified paths.

    (default: [])

Applicability: Unspecified(?)
Added in: 1.66.0

What it does

Denies the configured methods and functions in clippy.toml

Note: Even though this lint is warn-by-default, it will only trigger if methods are defined in the clippy.toml file.

Why is this bad?

Some methods are undesirable in certain contexts, and it’s beneficial to lint for them as needed.

Example

An example clippy.toml configuration:

disallowed-methods = [
    # Can use a string as the path of the disallowed method.
    "std::boxed::Box::new",
    # Can also use an inline table with a `path` key.
    { path = "std::time::Instant::now" },
    # When using an inline table, can add a `reason` for why the method
    # is disallowed.
    { path = "std::vec::Vec::leak", reason = "no leaking memory" },
]
let xs = vec![1, 2, 3, 4];
xs.leak(); // Vec::leak is disallowed in the config.
// The diagnostic contains the message "no leaking memory".

let _now = Instant::now(); // Instant::now is disallowed in the config.

let _box = Box::new(3); // Box::new is disallowed in the config.

Use instead:

let mut xs = Vec::new(); // Vec::new is _not_ disallowed in the config.
xs.push(123); // Vec::push is _not_ disallowed in the config.

Past names

  • disallowed_method

Configuration

  • disallowed-methods: The list of disallowed methods, written as fully qualified paths.

    (default: [])

Applicability: Unspecified(?)
Added in: 1.49.0

What it does

Checks for usage of disallowed names for variables, such as foo.

Why is this bad?

These names are usually placeholder names and should be avoided.

Example

let foo = 3.14;

Past names

  • blacklisted_name

Configuration

  • disallowed-names: The list of disallowed names to lint about. NB: bar is not here since it has legitimate uses. The value ".." can be used as part of the list to indicate that the configured values should be appended to the default configuration of Clippy. By default, any configuration will replace the default value.

    (default: ["foo", "baz", "quux"])

Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for usage of unicode scripts other than those explicitly allowed by the lint config.

This lint doesn’t take into account non-text scripts such as Unknown and Linear_A. It also ignores the Common script type. While configuring, be sure to use official script name aliases from the list of supported scripts.

See also: non_ascii_idents.

Why restrict this?

It may not be desirable to have many different scripts for identifiers in the codebase.

Note that if you only want to allow typical English, you might want to use the built-in non_ascii_idents lint instead.

Example

// Assuming that `clippy.toml` contains the following line:
// allowed-scripts = ["Latin", "Cyrillic"]
let counter = 10; // OK, latin is allowed.
let счётчик = 10; // OK, cyrillic is allowed.
let zähler = 10; // OK, it's still latin.
let カウンタ = 10; // Will spawn the lint.

Configuration

  • allowed-scripts: The list of unicode scripts allowed to be used in the scope.

    (default: ["Latin"])

Applicability: Unspecified(?)
Added in: 1.55.0

What it does

Denies the configured types in clippy.toml.

Note: Even though this lint is warn-by-default, it will only trigger if types are defined in the clippy.toml file.

Why is this bad?

Some types are undesirable in certain contexts.

Example:

An example clippy.toml configuration:

disallowed-types = [
    # Can use a string as the path of the disallowed type.
    "std::collections::BTreeMap",
    # Can also use an inline table with a `path` key.
    { path = "std::net::TcpListener" },
    # When using an inline table, can add a `reason` for why the type
    # is disallowed.
    { path = "std::net::Ipv4Addr", reason = "no IPv4 allowed" },
]
use std::collections::BTreeMap;
// or its use
let x = std::collections::BTreeMap::new();

Use instead:

// A similar type that is allowed by the config
use std::collections::HashMap;

Past names

  • disallowed_type

Configuration

  • disallowed-types: The list of disallowed types, written as fully qualified paths.

    (default: [])

Applicability: Unspecified(?)
Added in: 1.55.0

What it does

Checks for diverging calls that are not match arms or statements.

Why is this bad?

It is often confusing to read. In addition, the sub-expression evaluation order for Rust is not well documented.

Known problems

Someone might want to use some_bool || panic!() as a shorthand.

Example

let a = b() || panic!() || c();
// `c()` is dead, `panic!()` is only called if `b()` returns `false`
let x = (a, b, c, panic!());
// can simply be replaced by `panic!()`
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks if included files in doc comments are included only for cfg(doc).

Why restrict this?

These files are not useful for compilation but will still be included. Also, if any of these non-source-code files is updated, it will trigger a recompilation.

Known problems

Excluding this will currently result in the file being left out if the item’s docs are inlined from another crate. This may be fixed in a future version of rustdoc.

Example

#![doc = include_str!("some_file.md")]

Use instead:

#![cfg_attr(doc, doc = include_str!("some_file.md"))]
Applicability: MachineApplicable(?)
Added in: 1.84.0

What it does

In CommonMark Markdown, the language used to write doc comments, a paragraph nested within a list or block quote does not need any line after the first one to be indented or marked. The specification calls this a “lazy paragraph continuation.”

Why is this bad?

This is easy to write but hard to read. Lazy continuations make unintended markers hard to see and make it harder to deduce the document’s intended structure.

Example

This table is probably intended to have two rows, but it does not. It has zero rows, and is followed by a block quote.

/// Range | Description
/// ----- | -----------
/// >= 1  | fully opaque
/// < 1   | partially see-through
fn set_opacity(opacity: f32) {}

Fix it by escaping the marker:

/// Range | Description
/// ----- | -----------
/// \>= 1 | fully opaque
/// < 1   | partially see-through
fn set_opacity(opacity: f32) {}

This example is actually intended to be a list:

/// * Do nothing.
/// * Then do something. Whatever it is needs done,
/// it should be done right now.

Fix it by indenting the list contents:

/// * Do nothing.
/// * Then do something. Whatever it is needs done,
///   it should be done right now.
Applicability: MachineApplicable(?)
Added in: 1.80.0

What it does

Checks for the presence of _, :: or camel-case words outside ticks in documentation.

Why is this bad?

Rustdoc supports Markdown formatting; _, :: and camel-case probably indicate some code which should be enclosed in ticks. _ can also be used for emphasis in Markdown; this lint tries to account for that.

Known problems

Lots of bad docs won’t be fixed, what the lint checks for is limited, and there are still false positives. HTML elements and their content are not linted.

In addition, when writing documentation comments, including [] brackets inside a link text would trip the parser. Therefore, documenting link with [SmallVec<[T; INLINE_CAPACITY]>] and then [SmallVec<[T; INLINE_CAPACITY]>]: SmallVec would fail.

Examples

/// Do something with the foo_bar parameter. See also
/// that::other::module::foo.
// ^ `foo_bar` and `that::other::module::foo` should be ticked.
fn doit(foo_bar: usize) {}
// Link text with `[]` brackets should be written as following:
/// Consume the array and return the inner
/// [`SmallVec<[T; INLINE_CAPACITY]>`][SmallVec].
/// [SmallVec]: SmallVec
fn main() {}

Configuration

  • doc-valid-idents: The list of words this lint should not consider as identifiers needing ticks. The value ".." can be used as part of the list to indicate that the configured values should be appended to the default configuration of Clippy. By default, any configuration will replace the default value. For example:
  • doc-valid-idents = ["ClipPy"] would replace the default list with ["ClipPy"].

  • doc-valid-idents = ["ClipPy", ".."] would append ClipPy to the default list.

    (default: ["KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "MHz", "GHz", "THz", "AccessKit", "CoAP", "CoreFoundation", "CoreGraphics", "CoreText", "DevOps", "Direct2D", "Direct3D", "DirectWrite", "DirectX", "ECMAScript", "GPLv2", "GPLv3", "GitHub", "GitLab", "IPv4", "IPv6", "ClojureScript", "CoffeeScript", "JavaScript", "PostScript", "PureScript", "TypeScript", "WebAssembly", "NaN", "NaNs", "OAuth", "GraphQL", "OCaml", "OpenAL", "OpenDNS", "OpenGL", "OpenMP", "OpenSSH", "OpenSSL", "OpenStreetMap", "OpenTelemetry", "OpenType", "WebGL", "WebGL2", "WebGPU", "WebRTC", "WebSocket", "WebTransport", "WebP", "OpenExr", "YCbCr", "sRGB", "TensorFlow", "TrueType", "iOS", "macOS", "FreeBSD", "NetBSD", "OpenBSD", "TeX", "LaTeX", "BibTeX", "BibLaTeX", "MinGW", "CamelCase"])

Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Warns if a link reference definition appears at the start of a list item or quote.

Why is this bad?

This is probably intended as an intra-doc link. If it is really supposed to be a reference definition, it can be written outside of the list item or quote.

Example

//! - [link]: description

Use instead:

//! - [link][]: description (for intra-doc link)
//!
//! [link]: destination (for link reference definition)
Applicability: MaybeIncorrect(?)
Added in: 1.84.0

What it does

Checks for double comparisons that could be simplified to a single expression.

Why is this bad?

Readability.

Example

if x == y || x < y {}

Use instead:

if x <= y {}
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for a #[must_use] attribute without further information on functions and methods that return a type already marked as #[must_use].

Why is this bad?

The attribute isn’t needed. Not using the result will already be reported. Alternatively, one can add some text to the attribute to improve the lint message.

Examples

#[must_use]
fn double_must_use() -> Result<(), ()> {
    unimplemented!();
}
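
Either remove the attribute, or give it a message so it adds information (a sketch; the message text is illustrative):

#[must_use = "this `Result` may be an `Err` variant, which should be handled"]
fn double_must_use() -> Result<(), ()> {
    unimplemented!();
}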
Applicability: Unspecified(?)
Added in: 1.40.0

What it does

Detects expressions of the form --x.

Why is this bad?

It can mislead C/C++ programmers to think x was decremented.

Example

let mut x = 3;
--x;
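
Use instead (a sketch, assuming a decrement was actually intended; otherwise the second minus sign is simply redundant):

let mut x = 3;
x -= 1;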
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for unnecessary double parentheses.

Why is this bad?

This makes code harder to read and might indicate a mistake.

Example

fn simple_double_parens() -> i32 {
    ((0))
}

foo((0));

Use instead:

fn simple_no_parens() -> i32 {
    0
}

foo(0);
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for calls to .drain() that clear the collection, immediately followed by a call to .collect().

“Collection” in this context refers to any type with a drain method: Vec, VecDeque, BinaryHeap, HashSet, HashMap, String

Why is this bad?

Using mem::take is faster as it avoids the allocation. When using mem::take, the old collection is replaced with an empty one and ownership of the old collection is returned.

Known issues

mem::take(&mut vec) is almost equivalent to vec.drain(..).collect(), except that it also moves the capacity. The user might have explicitly written it this way to keep the capacity on the original Vec.

Example

fn remove_all(v: &mut Vec<i32>) -> Vec<i32> {
    v.drain(..).collect()
}

Use instead:

use std::mem;
fn remove_all(v: &mut Vec<i32>) -> Vec<i32> {
    mem::take(v)
}
Applicability: MachineApplicable(?)
Added in: 1.72.0

What it does

Checks for calls to std::mem::drop with a value that does not implement Drop.

Why is this bad?

Calling std::mem::drop is no different than dropping such a type. A different value may have been intended.

Example

struct Foo;
let x = Foo;
std::mem::drop(x);
Applicability: Unspecified(?)
Added in: 1.62.0

What it does

Checks for files that are included as modules multiple times.

Why is this bad?

Loading a file as a module more than once causes it to be compiled multiple times, taking longer and putting duplicate content into the module tree.

Example

// lib.rs
mod a;
mod b;
// a.rs
#[path = "./b.rs"]
mod b;

Use instead:

// lib.rs
mod a;
mod b;
// a.rs
use crate::b;
Applicability: Unspecified(?)
Added in: 1.63.0

What it does

Checks for function arguments that have similar names differing only by an underscore.

Why is this bad?

It affects code readability.

Example

fn foo(a: i32, _a: i32) {}

Use instead:

fn bar(a: i32, _b: i32) {}
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for attributes that appear two or more times.

Why is this bad?

Repeating an attribute on the same item (or globally on the same crate) is unnecessary and doesn’t have an effect.

Example

#[allow(dead_code)]
#[allow(dead_code)]
fn foo() {}

Use instead:

#[allow(dead_code)]
fn foo() {}
Applicability: Unspecified(?)
Added in: 1.79.0

What it does

Checks for calculation of subsecond microseconds or milliseconds from other Duration methods.

Why is this bad?

It’s more concise to call Duration::subsec_micros() or Duration::subsec_millis() than to calculate them.

Example

let micros = duration.subsec_nanos() / 1_000;
let millis = duration.subsec_nanos() / 1_000_000;

Use instead:

let micros = duration.subsec_micros();
let millis = duration.subsec_millis();
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for integer validity checks, followed by a transmute that is (incorrectly) evaluated eagerly (e.g. using bool::then_some).

Why is this bad?

Eager evaluation means that the transmute call is executed regardless of whether the condition is true or false. This can introduce unsoundness and other subtle bugs.

Example

Consider the following function which is meant to convert an unsigned integer to its enum equivalent via transmute.

#[repr(u8)]
enum Opcode {
    Add = 0,
    Sub = 1,
    Mul = 2,
    Div = 3
}

fn int_to_opcode(op: u8) -> Option<Opcode> {
    (op < 4).then_some(unsafe { std::mem::transmute(op) })
}

This may appear fine at first given that it checks that the u8 is within the validity range of the enum; however, the transmute is evaluated eagerly, meaning that it executes even if op >= 4!

This makes the function unsound, because it is possible for the caller to cause undefined behavior (creating an enum with an invalid bitpattern) entirely in safe code only by passing an incorrect value, which is normally only a bug that is possible in unsafe code.

One possible way in which this can go wrong practically is that the compiler sees it as:

let temp: Opcode = unsafe { std::mem::transmute(op) };
(op < 4).then_some(temp)

and optimizes away the (op < 4) check based on the assumption that since an Opcode was created from op with the validity range 0..=3, it is impossible for this condition to be false.

In short, it is possible for this function to be optimized in a way that makes it never return None, even if passed the value 4.

This can be avoided by instead using lazy evaluation. For the example above, this should be written:

fn int_to_opcode(op: u8) -> Option<Opcode> {
    (op < 4).then(|| unsafe { std::mem::transmute(op) })
             ^^^^ ^^ `bool::then` only executes the closure if the condition is true!
}
Applicability: MaybeIncorrect(?)
Added in: 1.77.0

What it does

Checks for usage of if expressions with an else if branch, but without a final else branch.

Why restrict this?

Some coding guidelines require this (e.g., MISRA-C:2004 Rule 14.10).

Example

if x.is_positive() {
    a();
} else if x.is_negative() {
    b();
}

Use instead:

if x.is_positive() {
    a();
} else if x.is_negative() {
    b();
} else {
    // We don't care about zero.
}
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Detects documentation that is empty.

Why is this bad?

Empty docs clutter code without adding value, reducing readability and maintainability.

Example

///
fn returns_true() -> bool {
    true
}

Use instead:

fn returns_true() -> bool {
    true
}
Applicability: Unspecified(?)
Added in: 1.78.0

What it does

Checks for empty Drop implementations.

Why restrict this?

Empty Drop implementations have no effect when dropping an instance of the type. They are most likely useless. However, an empty Drop implementation prevents a type from being destructured, which might be the intention behind adding the implementation as a marker.

Example

struct S;

impl Drop for S {
    fn drop(&mut self) {}
}

Use instead:

struct S;
Applicability: MaybeIncorrect(?)
Added in: 1.62.0

What it does

Checks for enums with no variants, which therefore are uninhabited types (cannot be instantiated).

As of this writing, the never_type is still a nightly-only experimental API. Therefore, this lint is only triggered if #![feature(never_type)] is enabled.

Why is this bad?

  • If you only want a type which can’t be instantiated, you should use ! (the primitive type “never”), because ! has more extensive compiler support (type inference, etc.) and implementations of common traits.

  • If you need to introduce a distinct type, consider using a newtype struct containing ! instead (struct MyType(pub !)), because it is more idiomatic to use a struct rather than an enum when an enum is unnecessary.

    If you do this, note that the visibility of the ! field determines whether the uninhabitedness is visible in documentation, and whether it can be pattern matched to mark code unreachable. If the field is not visible, then the struct acts like any other struct with private fields.

  • If the enum has no variants only because all variants happen to be disabled by conditional compilation, then it would be appropriate to allow the lint, with #[allow(empty_enum)].

For further information, visit the never type’s documentation.

Example

enum CannotExist {}

Use instead:

#![feature(never_type)]

/// Use the `!` type directly...
type CannotExist = !;

/// ...or define a newtype which is distinct.
struct CannotExist2(pub !);
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Finds enum variants without fields that are declared with empty brackets.

Why restrict this?

Empty brackets after an enum variant declaration are redundant and can be omitted, and it may be desirable to do so consistently for style.

However, removing the brackets also introduces a public constant named after the variant, so this is not just a syntactic simplification but an API change, and adding them back is a breaking API change.

Example

enum MyEnum {
    HasData(u8),
    HasNoData(),       // redundant parentheses
    NoneHereEither {}, // redundant braces
}

Use instead:

enum MyEnum {
    HasData(u8),
    HasNoData,
    NoneHereEither,
}
Applicability: MaybeIncorrect(?)
Added in: 1.77.0

What it does

Checks for empty lines after doc comments.

Why is this bad?

The doc comment may have meant to be an inner doc comment, regular comment or applied to some old code that is now commented out. If it was intended to be a doc comment, then the empty line should be removed.

Example

/// Some doc comment with a blank line after it.

fn f() {}

/// Docs for `old_code`
// fn old_code() {}

fn new_code() {}

Use instead:

//! Convert it to an inner doc comment

// Or a regular comment

/// Or remove the empty line
fn f() {}

// /// Docs for `old_code`
// fn old_code() {}

fn new_code() {}
Applicability: MaybeIncorrect(?)
Added in: 1.70.0

What it does

Checks for empty lines after outer attributes

Why is this bad?

The attribute may have been meant to be an inner attribute (#![attr]). If it was meant to be an outer attribute (#[attr]), then the empty line should be removed.

Example

#[allow(dead_code)]

fn not_quite_good_code() {}

Use instead:

// Good (as inner attribute)
#![allow(dead_code)]

fn this_is_fine() {}

// or

// Good (as outer attribute)
#[allow(dead_code)]
fn this_is_fine_too() {}
Applicability: MaybeIncorrect(?)
Added in: pre 1.29.0

What it does

Checks for empty loop expressions.

Why is this bad?

These busy loops burn CPU cycles without doing anything. It is almost always a better idea to panic! than to have a busy loop.

If panicking isn’t possible, think of the environment and either:

  • block on something
  • sleep the thread for some microseconds
  • yield or pause the thread

For std targets, this can be done with std::thread::sleep or std::thread::yield_now.

For no_std targets, doing this is more complicated, especially because #[panic_handler]s can’t panic. To stop/pause the thread, you will probably need to invoke some target-specific intrinsic.

Example

loop {}
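
On std targets, one possible alternative (a sketch; the sleep duration is arbitrary) is to sleep or yield instead of spinning:

loop {
    std::thread::sleep(std::time::Duration::from_millis(10));
}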
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Finds structs without fields (a so-called “empty struct”) that are declared with brackets.

Why restrict this?

Empty brackets after a struct declaration can be omitted, and it may be desirable to do so consistently for style.

However, removing the brackets also introduces a public constant named after the struct, so this is not just a syntactic simplification but an API change, and adding them back is a breaking API change.

Example

struct Cookie {}
struct Biscuit();

Use instead:

struct Cookie;
struct Biscuit;
Applicability: Unspecified(?)
Added in: 1.62.0

What it does

Checks for C-like enumerations that are repr(isize/usize) and have values that don’t fit into an i32.

Why is this bad?

This will truncate the variant value on 32 bit architectures, but works fine on 64 bit.

Example

#[repr(usize)]
enum NonPortable {
    X = 0x1_0000_0000,
    Y = 0,
}
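
One possible fix (a sketch, assuming 64-bit values are genuinely needed) is an explicitly sized representation, so the variant values are identical on every architecture:

#[repr(u64)]
enum Portable {
    X = 0x1_0000_0000,
    Y = 0,
}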
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for use Enum::*.

Why is this bad?

It is usually better style to use the prefixed name of an enumeration variant, rather than importing variants.

Known problems

Old-style enumerations that prefix the variants are still around.

Example

use std::cmp::Ordering::*;

foo(Less);

Use instead:

use std::cmp::Ordering;

foo(Ordering::Less)
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Detects enumeration variants that are prefixed or suffixed by the same characters.

Why is this bad?

Enumeration variant names should specify their variant, not repeat the enumeration name.

Limitations

Characters with no casing will be considered when comparing prefixes/suffixes. This applies to numbers and non-ASCII characters without casing; e.g. Foo1 and Foo2 are considered to have different prefixes (the prefixes are Foo1 and Foo2 respectively), as are Bar螃 and Bar蟹.

Example

enum Cake {
    BlackForestCake,
    HummingbirdCake,
    BattenbergCake,
}

Use instead:

enum Cake {
    BlackForest,
    Hummingbird,
    Battenberg,
}

Configuration

  • avoid-breaking-exported-api: Suppress lints whenever the suggested change would cause breakage for other crates.

    (default: true)

  • enum-variant-name-threshold: The minimum number of enum variants for the lints about variant names to trigger

    (default: 3)

Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for equal operands to comparison, logical and bitwise, difference and division binary operators (==, >, etc., &&, ||, &, |, ^, - and /).

Why is this bad?

This is usually just a typo or a copy and paste error.

Known problems

False negatives: We had some false positives regarding calls (notably racer had one instance of x.pop() && x.pop()), so we removed matching any function or method calls. We may introduce a list of known pure functions in the future.

Example

if x + 1 == x + 1 {}

// or

assert_eq!(a, a);
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for pattern matchings that can be expressed using equality.

Why is this bad?

  • It reads better and has less cognitive load because equality won’t cause binding.
  • It is a Yoda condition. Yoda conditions are widely criticized for increasing the cognitive load of reading the code.
  • Equality is a simple bool expression and can be merged with && and ||, and allows if blocks to be reused.

Example

if let Some(2) = x {
    do_thing();
}

Use instead:

if x == Some(2) {
    do_thing();
}
Applicability: MachineApplicable(?)
Added in: 1.57.0

What it does

Checks for erasing operations, e.g., x * 0.

Why is this bad?

The whole expression can be replaced by zero. This is most likely not the intended outcome and should probably be corrected.

Example

let x = 1;
0 / x;
0 * x;
x & 0;
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for .err().expect() calls on the Result type.

Why is this bad?

.expect_err() can be called directly to avoid the extra type conversion from err().

Example

let x: Result<u32, &str> = Ok(10);
x.err().expect("Testing err().expect()");

Use instead:

let x: Result<u32, &str> = Ok(10);
x.expect_err("Testing expect_err");

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.62.0

What it does

Checks for types named Error that implement Error.

Why restrict this?

It can become confusing when a codebase has 20 types all named Error, requiring either aliasing them in the use statement or qualifying them like my_module::Error. This hinders comprehension, as it requires you to memorize every variation of importing Error used across a codebase.

Example

#[derive(Debug)]
pub enum Error { ... }

impl std::fmt::Display for Error { ... }

impl std::error::Error for Error { ... }
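
Use instead (a sketch; the name ParseError is only an example of a more descriptive choice):

#[derive(Debug)]
pub enum ParseError { ... }

impl std::fmt::Display for ParseError { ... }

impl std::error::Error for ParseError { ... }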
Applicability: Unspecified(?)
Added in: 1.73.0

What it does

Checks for blocks which are nested beyond a certain threshold.

Note: Even though this lint is warn-by-default, it will only trigger if a maximum nesting level is defined in the clippy.toml file.

Why is this bad?

It can severely hinder readability.

Example

An example clippy.toml configuration:

excessive-nesting-threshold = 3
// lib.rs
pub mod a {
    pub struct X;
    impl X {
        pub fn run(&self) {
            if true {
                // etc...
            }
        }
    }
}

Use instead:

// a.rs
fn private_run(x: &X) {
    if true {
        // etc...
    }
}

pub struct X;
impl X {
    pub fn run(&self) {
        private_run(self);
    }
}
// lib.rs
pub mod a;

Configuration

  • excessive-nesting-threshold: The maximum amount of nesting a block can reside in

    (default: 0)

Applicability: Unspecified(?)
Added in: 1.72.0

What it does

Checks for float literals with a precision greater than that supported by the underlying type.

Why is this bad?

Rust will truncate the literal silently.

Example

let v: f32 = 0.123_456_789_9;
println!("{}", v); //  0.123_456_789

Use instead:

let v: f64 = 0.123_456_789_9;
println!("{}", v); //  0.123_456_789_9
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Warns on any exported enums that are not tagged #[non_exhaustive]

Why restrict this?

Making an enum exhaustive is a stability commitment: adding a variant is a breaking change. A project may wish to ensure that there are no exhaustive enums or that every exhaustive enum is explicitly #[allow]ed.

Example

enum Foo {
    Bar,
    Baz
}

Use instead:

#[non_exhaustive]
enum Foo {
    Bar,
    Baz
}
Applicability: MaybeIncorrect(?)
Added in: 1.51.0

What it does

Warns on any exported structs that are not tagged #[non_exhaustive]

Why restrict this?

Making a struct exhaustive is a stability commitment: adding a field is a breaking change. A project may wish to ensure that there are no exhaustive structs or that every exhaustive struct is explicitly #[allow]ed.

Example

struct Foo {
    bar: u8,
    baz: String,
}

Use instead:

#[non_exhaustive]
struct Foo {
    bar: u8,
    baz: String,
}
Applicability: MaybeIncorrect(?)
Added in: 1.51.0

What it does

Detects calls to the exit() function which terminates the program.

Why restrict this?

exit() immediately terminates the program with no information other than an exit code. This provides no means to troubleshoot a problem, and may be an unexpected side effect.

Codebases may use this lint to require that all exits are performed either by panicking (which produces a message, a code location, and optionally a backtrace) or by returning from main() (which is a single place to look).

Example

std::process::exit(0)

Use instead:

// To provide a stacktrace and additional information
panic!("message");

// or a main method with a return
fn main() -> Result<(), i32> {
    Ok(())
}
Applicability: Unspecified(?)
Added in: 1.41.0

What it does

Checks for calls to .expect(&format!(...)), .expect(foo(..)), etc., and suggests to use unwrap_or_else instead

Why is this bad?

The function will always be called.

Known problems

If the function has side-effects, not calling it will change the semantics of the program, but you shouldn’t rely on that anyway.

Example

foo.expect(&format!("Err {}: {}", err_code, err_msg));

// or

foo.expect(format!("Err {}: {}", err_code, err_msg).as_str());

Use instead:

foo.unwrap_or_else(|| panic!("Err {}: {}", err_code, err_msg));
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for .expect() or .expect_err() calls on Results and .expect() calls on Options.

Why restrict this?

Usually it is better to handle the None or Err case. Still, for a lot of quick-and-dirty code, expect is a good choice, which is why this lint is Allow by default.

result.expect() will let the thread panic on Err values. Normally, you want to implement more sophisticated error handling, and propagate errors upwards with the ? operator.

Examples

option.expect("one");
result.expect("one");

Use instead:

option?;

// or

result?;

Past names

  • option_expect_used
  • result_expect_used

Configuration

  • allow-expect-in-tests: Whether expect should be allowed in test functions or #[cfg(test)]

    (default: false)

Applicability: Unspecified(?)
Added in: 1.45.0

What it does

Checks for explicit Clone implementations for Copy types.

Why is this bad?

To avoid surprising behavior, these traits should agree and the behavior of Copy cannot be overridden. In almost all situations a Copy type should have a Clone implementation that does nothing more than copy the object, which is what #[derive(Copy, Clone)] gets you.

Example

#[derive(Copy)]
struct Foo;

impl Clone for Foo {
    // ..
}
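
Use instead (as the paragraph above notes, deriving both traits gives the expected copying behavior):

#[derive(Copy, Clone)]
struct Foo;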
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for dereferencing expressions which would be covered by auto-deref.

Why is this bad?

This unnecessarily complicates the code.

Example

let x = String::new();
let y: &str = &*x;

Use instead:

let x = String::new();
let y: &str = &x;
Applicability: MachineApplicable(?)
Added in: 1.64.0

What it does

Checks for loops over slices with an explicit counter and suggests the use of .enumerate().

Why is this bad?

Using .enumerate() makes the intent more clear, declutters the code and may be faster in some instances.

Example

let mut i = 0;
for item in &v {
    bar(i, *item);
    i += 1;
}

Use instead:

for (i, item) in v.iter().enumerate() { bar(i, *item); }
Applicability: MaybeIncorrect(?)
Added in: pre 1.29.0

What it does

Checks for explicit deref() or deref_mut() method calls.

Why is this bad?

Dereferencing by &*x or &mut *x is clearer and more concise, when not part of a method chain.

Example

use std::ops::Deref;
let a: &mut String = &mut String::from("foo");
let b: &str = a.deref();

Use instead:

let a: &mut String = &mut String::from("foo");
let b = &*a;

This lint excludes all of:

let _ = d.unwrap().deref();
let _ = Foo::deref(&foo);
let _ = <Foo as Deref>::deref(&foo);
Applicability: MachineApplicable(?)
Added in: 1.44.0

What it does

Checks for loops on y.into_iter() where y will do, and suggests the latter.

Why is this bad?

Readability.

Example

// with `y` a `Vec` or slice:
for x in y.into_iter() {
    // ..
}

can be rewritten to

for x in y {
    // ..
}
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for loops on x.iter() where &x will do, and suggests the latter.

Why is this bad?

Readability.

Known problems

False negatives. We currently only warn on some known types.

Example

// with `y` a `Vec` or slice:
for x in y.iter() {
    // ..
}

Use instead:

for x in &y {
    // ..
}

Configuration

  • enforce-iter-loop-reborrow: Whether to recommend using implicit into iter for reborrowed values.

Example

let mut vec = vec![1, 2, 3];
let rmvec = &mut vec;
for _ in rmvec.iter() {}
for _ in rmvec.iter_mut() {}

Use instead:

let mut vec = vec![1, 2, 3];
let rmvec = &mut vec;
for _ in &*rmvec {}
for _ in &mut *rmvec {}

(default: false)

Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for usage of write!() / writeln!() which can be replaced with (e)print!() / (e)println!()

Why is this bad?

Using (e)println! is clearer and more concise

Example

writeln!(&mut std::io::stderr(), "foo: {:?}", bar).unwrap();
writeln!(&mut std::io::stdout(), "foo: {:?}", bar).unwrap();

Use instead:

eprintln!("foo: {:?}", bar);
println!("foo: {:?}", bar);
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Nothing. This lint has been deprecated

Deprecation reason

Vec::extend_from_slice is no longer faster than Vec::extend due to specialization.

Applicability: Unspecified(?)
Deprecated in: pre 1.29.0

What it does

Checks for occurrences where one vector gets extended with the drained contents of another instead of using append.

Why is this bad?

Using append instead of extend is more concise and faster.

Example

let mut a = vec![1, 2, 3];
let mut b = vec![4, 5, 6];

a.extend(b.drain(..));

Use instead:

let mut a = vec![1, 2, 3];
let mut b = vec![4, 5, 6];

a.append(&mut b);
Applicability: MachineApplicable(?)
Added in: 1.55.0

What it does

Checks for lifetimes in generics that are never used anywhere else.

Why is this bad?

The additional lifetimes make the code look more complicated, while there is nothing out of the ordinary going on. Removing them leads to more readable code.

Example

// unnecessary lifetimes
fn unused_lifetime<'a>(x: u8) {
    // ..
}

Use instead:

fn no_lifetime(x: u8) {
    // ...
}
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for type parameters in generics that are never used anywhere else.

Why is this bad?

Functions cannot infer the value of unused type parameters; therefore, calling them requires using a turbofish, which serves no purpose but to satisfy the compiler.

Example

fn unused_ty<T>(x: u8) {
    // ..
}

Use instead:

fn no_unused_ty(x: u8) {
    // ..
}
Applicability: MachineApplicable(?)
Added in: 1.69.0

What it does

Checks for impls of From<..> that contain panic!() or unwrap()

Why is this bad?

TryFrom should be used if there’s a possibility of failure.

Example

struct Foo(i32);

impl From<String> for Foo {
    fn from(s: String) -> Self {
        Foo(s.parse().unwrap())
    }
}

Use instead:

struct Foo(i32);

impl TryFrom<String> for Foo {
    type Error = ();
    fn try_from(s: String) -> Result<Self, Self::Error> {
        if let Ok(parsed) = s.parse() {
            Ok(Foo(parsed))
        } else {
            Err(())
        }
    }
}
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for immediate reassignment of fields initialized with Default::default().

Why is this bad?

It’s more idiomatic to use the functional update syntax.

Known problems

Assignments to patterns that are of tuple type are not linted.

Example

let mut a: A = Default::default();
a.i = 42;

Use instead:

let a = A {
    i: 42,
    .. Default::default()
};
Applicability: Unspecified(?)
Added in: 1.49.0

What it does

Checks for usage of scoped visibility modifiers, like pub(crate), on fields. These make a field visible within a scope between public and private.

Why restrict this?

Scoped visibility modifiers cause a field to be accessible within some scope between public and private, potentially within an entire crate. This allows for fields to be non-private while upholding internal invariants, but can be a code smell. Scoped visibility requires checking a greater area, potentially an entire crate, to verify that an invariant is upheld, and global analysis requires a lot of effort.

Example

pub mod public_module {
    struct MyStruct {
        pub(crate) first_field: bool,
        pub(super) second_field: bool
    }
}

Use instead:

pub mod public_module {
    struct MyStruct {
        first_field: bool,
        second_field: bool
    }
    impl MyStruct {
        pub(crate) fn get_first_field(&self) -> bool {
            self.first_field
        }
        pub(super) fn get_second_field(&self) -> bool {
            self.second_field
        }
    }
}
Applicability: Unspecified(?)
Added in: 1.81.0

What it does

Checks for FileType::is_file().

Why restrict this?

When people test a file type with FileType::is_file, they are testing whether a path is something they can get bytes from. But is_file doesn’t cover special file types in Unix-like systems, and doesn’t cover symlinks on Windows. Using !FileType::is_dir() is a better way to express that intention.

Example

let metadata = std::fs::metadata("foo.txt")?;
let filetype = metadata.file_type();

if filetype.is_file() {
    // read file
}

should be written as:

let metadata = std::fs::metadata("foo.txt")?;
let filetype = metadata.file_type();

if !filetype.is_dir() {
    // read file
}
Applicability: Unspecified(?)
Added in: 1.42.0

What it does

Checks for usage of bool::then in Iterator::filter_map.

Why is this bad?

This can be written with filter then map instead, which reduces nesting and separates the filtering from the transformation phase. This comes with no cost to performance and is just cleaner.

Limitations

Does not lint bool::then_some, as it evaluates its argument eagerly rather than lazily. This can create differing behavior, so better safe than sorry.

Example

_ = v.into_iter().filter_map(|i| (i % 2 == 0).then(|| really_expensive_fn(i)));

Use instead:

_ = v.into_iter().filter(|i| i % 2 == 0).map(|i| really_expensive_fn(i));
Applicability: MachineApplicable(?)
Added in: 1.73.0

What it does

Checks for usage of filter_map(|x| x).

Why is this bad?

Readability, this can be written more concisely by using flatten.

Example

iter.filter_map(|x| x);

Use instead:

iter.flatten();
Applicability: MachineApplicable(?)
Added in: 1.52.0

What it does

Checks for usage of _.filter_map(_).next().

Why is this bad?

Readability, this can be written more concisely as _.find_map(_).

Example

 (0..3).filter_map(|x| if x == 2 { Some(x) } else { None }).next();

Can be written as

 (0..3).find_map(|x| if x == 2 { Some(x) } else { None });

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.36.0

What it does

Checks for usage of _.filter(_).next().

Why is this bad?

Readability, this can be written more concisely as _.find(_).

Example

vec.iter().filter(|x| **x == 0).next();

Use instead:

vec.iter().find(|x| **x == 0);
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for usage of flat_map(|x| x).

Why is this bad?

Readability, this can be written more concisely by using flatten.

Example

iter.flat_map(|x| x);

Can be written as

iter.flatten();
Applicability: MachineApplicable(?)
Added in: 1.39.0

What it does

Checks for usage of Iterator::flat_map() where filter_map() could be used instead.

Why is this bad?

filter_map() is known to always produce 0 or 1 output items per input item, rather than however many the inner iterator type produces. Therefore, it maintains the upper bound in Iterator::size_hint(), and communicates to the reader that the input items are not being expanded into multiple output items without their having to notice that the mapping function returns an Option.

Example

let nums: Vec<i32> = ["1", "2", "whee!"].iter().flat_map(|x| x.parse().ok()).collect();

Use instead:

let nums: Vec<i32> = ["1", "2", "whee!"].iter().filter_map(|x| x.parse().ok()).collect();
Applicability: MachineApplicable(?)
Added in: 1.53.0

What it does

Checks for float arithmetic.

Why restrict this?

For some embedded systems or kernel development, it can be useful to rule out floating-point numbers.

Example

a + 1.0;
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for (in-)equality comparisons on floating-point values (apart from zero), except in functions called *eq* (which probably implement equality for a type involving floats).

Why is this bad?

Floating point calculations are usually imprecise, so asking if two values are exactly equal is asking for trouble because arriving at the same logical result via different routes (e.g. calculation versus constant) may yield different values.

Example

let a: f64 = 1000.1;
let b: f64 = 0.2;
let x = a + b;
let y = 1000.3; // Expected value.

// Actual value: 1000.3000000000001
println!("{x}");

let are_equal = x == y;
println!("{are_equal}"); // false

The correct way to compare floating point numbers is to define an allowed error margin. This may be challenging if there is no “natural” error margin to permit. Broadly speaking, there are two cases:

  1. If your values are in a known range and you can define a threshold for “close enough to be equal”, it may be appropriate to define an absolute error margin. For example, if your data is “length of vehicle in centimeters”, you may consider 0.1 cm to be “close enough”.
  2. If your code is more general and you do not know the range of values, you should use a relative error margin, accepting e.g. 0.1% of error regardless of specific values.

For the scenario where you can define a meaningful absolute error margin, consider using:

let a: f64 = 1000.1;
let b: f64 = 0.2;
let x = a + b;
let y = 1000.3; // Expected value.

const ALLOWED_ERROR_VEHICLE_LENGTH_CM: f64 = 0.1;
let within_tolerance = (x - y).abs() < ALLOWED_ERROR_VEHICLE_LENGTH_CM;
println!("{within_tolerance}"); // true

NB! Do not use f64::EPSILON - while the error margin is often called “epsilon”, this is a different use of the term that is not suitable for floating point equality comparison. Indeed, for the example above using f64::EPSILON as the allowed error would return false.

For the scenario where no meaningful absolute error can be defined, refer to the floating point guide for a reference implementation of relative error based comparison of floating point values. MIN_NORMAL in the reference implementation is equivalent to MIN_POSITIVE in Rust.
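
As a rough illustration of the relative-error approach (a simplified sketch, not the full reference implementation, which also handles zeros, infinities and subnormal values):

fn approx_eq_relative(a: f64, b: f64, max_relative: f64) -> bool {
    let diff = (a - b).abs();
    let largest = a.abs().max(b.abs());
    diff <= largest * max_relative
}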

Applicability: HasPlaceholders(?)
Added in: pre 1.29.0

What it does

Checks for (in-)equality comparisons on constant floating-point values (apart from zero), except in functions called *eq* (which probably implement equality for a type involving floats).

Why restrict this?

Floating point calculations are usually imprecise, so asking if two values are exactly equal is asking for trouble because arriving at the same logical result via different routes (e.g. calculation versus constant) may yield different values.

Example

let a: f64 = 1000.1;
let b: f64 = 0.2;
let x = a + b;
const Y: f64 = 1000.3; // Expected value.

// Actual value: 1000.3000000000001
println!("{x}");

let are_equal = x == Y;
println!("{are_equal}"); // false

The correct way to compare floating point numbers is to define an allowed error margin. This may be challenging if there is no “natural” error margin to permit. Broadly speaking, there are two cases:

  1. If your values are in a known range and you can define a threshold for “close enough to be equal”, it may be appropriate to define an absolute error margin. For example, if your data is “length of vehicle in centimeters”, you may consider 0.1 cm to be “close enough”.
  2. If your code is more general and you do not know the range of values, you should use a relative error margin, accepting e.g. 0.1% of error regardless of specific values.

For the scenario where you can define a meaningful absolute error margin, consider using:

let a: f64 = 1000.1;
let b: f64 = 0.2;
let x = a + b;
const Y: f64 = 1000.3; // Expected value.

const ALLOWED_ERROR_VEHICLE_LENGTH_CM: f64 = 0.1;
let within_tolerance = (x - Y).abs() < ALLOWED_ERROR_VEHICLE_LENGTH_CM;
println!("{within_tolerance}"); // true

NB! Do not use f64::EPSILON - while the error margin is often called “epsilon”, this is a different use of the term that is not suitable for floating point equality comparison. Indeed, for the example above using f64::EPSILON as the allowed error would return false.

For the scenario where no meaningful absolute error can be defined, refer to the floating point guide for a reference implementation of relative error based comparison of floating point values. MIN_NORMAL in the reference implementation is equivalent to MIN_POSITIVE in Rust.

Applicability: HasPlaceholders(?)
Added in: pre 1.29.0

What it does

Checks for statements of the form (a - b) < f32::EPSILON or (a - b) < f64::EPSILON. Notes the missing .abs().

Why is this bad?

The code without .abs() is more likely to have a bug.

Known problems

If the user can ensure that b is larger than a, the .abs() is technically unnecessary. However, it will make the code more robust and doesn’t have any large performance implications. If the abs call was deliberately left out for performance reasons, it is probably better to state this explicitly in the code, which then can be done with an allow.

Example

pub fn is_roughly_equal(a: f32, b: f32) -> bool {
    (a - b) < f32::EPSILON
}

Use instead:

pub fn is_roughly_equal(a: f32, b: f32) -> bool {
    (a - b).abs() < f32::EPSILON
}
Applicability: MaybeIncorrect(?)
Added in: 1.48.0

What it does

Checks for comparisons with an address of a function item.

Why is this bad?

Function item address is not guaranteed to be unique and could vary between different code generation units. Furthermore different function items could have the same address after being merged together.

Example

type F = fn();
fn a() {}
let f: F = a;
if f == a {
    // ...
}
Applicability: Unspecified(?)
Added in: 1.44.0

What it does

Checks for excessive use of bools in function definitions.

Why is this bad?

Calls to such functions are confusing and error-prone, because it’s hard to remember argument order and you have no type system support to back you up. Using two-variant enums instead of bools often makes the API easier to use.

Example

fn f(is_round: bool, is_hot: bool) { ... }

Use instead:

enum Shape {
    Round,
    Spiky,
}

enum Temperature {
    Hot,
    IceCold,
}

fn f(shape: Shape, temperature: Temperature) { ... }

Configuration

  • max-fn-params-bools: The maximum number of bool parameters a function can have

    (default: 3)

Applicability: Unspecified(?)
Added in: 1.43.0

What it does

Checks for casts of function pointers to something other than usize.

Why is this bad?

Casting a function pointer to anything other than usize/isize is not portable across architectures. If the target type is too small the address would be truncated, and target types larger than usize are unnecessary.

Casting to isize also doesn’t make sense, since addresses are never signed.

Example

fn fun() -> i32 { 1 }
let _ = fun as i64;

Use instead:

let _ = fun as usize;
Applicability: MaybeIncorrect(?)
Added in: pre 1.29.0

What it does

Checks for casts of a function pointer to any integer type.

Why restrict this?

Casting a function pointer to an integer can have surprising results and can occur accidentally if parentheses are omitted from a function call. If you aren’t doing anything low-level with function pointers then you can opt out of casting functions to integers in order to avoid mistakes. Alternatively, you can use this lint to audit all uses of function pointer casts in your code.

Example

// fn1 is cast as `usize`
fn fn1() -> u16 {
    1
};
let _ = fn1 as usize;

Use instead:

// maybe you intended to call the function?
fn fn2() -> u16 {
    1
};
let _ = fn2() as usize;

// or

// maybe you intended to cast it to a function type?
fn fn3() -> u16 {
    1
}
let _ = fn3 as fn() -> u16;
Applicability: MaybeIncorrect(?)
Added in: 1.58.0

What it does

Checks for casts of a function pointer to a numeric type not wide enough to store an address.

Why is this bad?

Such a cast discards some bits of the function’s address. If this is intended, it would be more clearly expressed by casting to usize first, then casting the usize to the intended type (with a comment) to perform the truncation.

Example

fn fn1() -> i16 {
    1
};
let _ = fn1 as i32;

Use instead:

// Cast to usize first, then comment with the reason for the truncation
fn fn1() -> i16 {
    1
};
let fn_ptr = fn1 as usize;
let fn_ptr_truncated = fn_ptr as i32;
Applicability: MaybeIncorrect(?)
Added in: pre 1.29.0

What it does

Checks for iterating a map (HashMap or BTreeMap) and ignoring either the keys or values.

Why is this bad?

Readability. There are keys and values methods that can be used to express that we don’t need the values or keys.

Example

for (k, _) in &map {
    ..
}

could be replaced by

for k in map.keys() {
    ..
}
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for calls to std::mem::forget with a value that does not implement Drop.

Why is this bad?

Calling std::mem::forget is no different than dropping such a type. A different value may have been intended.

Example

struct Foo;
let x = Foo;
std::mem::forget(x);
Applicability: Unspecified(?)
Added in: 1.62.0

What it does

Checks for usage of .map(|_| format!(..)).collect::<String>().

Why is this bad?

This allocates a new string for every element in the iterator. This can be done more efficiently by creating the String once and appending to it in Iterator::fold, using either the write! macro which supports exactly the same syntax as the format! macro, or concatenating with + in case the iterator yields &str/String.

Note also that write!-ing into a String can never fail, despite the return type of write! being std::fmt::Result, so it can be safely ignored or unwrapped.

Example

fn hex_encode(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02X}")).collect()
}

Use instead:

use std::fmt::Write;
fn hex_encode(bytes: &[u8]) -> String {
    bytes.iter().fold(String::new(), |mut output, b| {
        let _ = write!(output, "{b:02X}");
        output
    })
}
Applicability: Unspecified(?)
Added in: 1.73.0

What it does

Detects format! within the arguments of another macro that does formatting such as format! itself, write! or println!. Suggests inlining the format! call.

Why is this bad?

The recommended code is both shorter and avoids a temporary allocation.

Example

println!("error: {}", format!("something failed at {}", Location::caller()));

Use instead:

println!("error: something failed at {}", Location::caller());
Applicability: Unspecified(?)
Added in: 1.58.0

What it does

Detects cases where the result of a format! call is appended to an existing String.

Why restrict this?

Introduces an extra, avoidable heap allocation.

Known problems

format! returns a String but write! returns a Result. Thus you are forced to ignore the Err variant to achieve the same API.

While using write! in the suggested way should never fail, this isn’t necessarily clear to the programmer.

Example

let mut s = String::new();
s += &format!("0x{:X}", 1024);
s.push_str(&format!("0x{:X}", 1024));

Use instead:

use std::fmt::Write as _; // import without risk of name clashing

let mut s = String::new();
let _ = write!(s, "0x{:X}", 1024);
Applicability: Unspecified(?)
Added in: 1.62.0

What it does

Checks for outer doc comments written with 4 forward slashes (////).

Why is this bad?

This is (probably) a typo, and results in the comment not being a doc comment but just a regular comment.

Example

//// My amazing data structure
pub struct Foo {
    // ...
}

Use instead:

/// My amazing data structure
pub struct Foo {
    // ...
}
Applicability: MachineApplicable(?)
Added in: 1.73.0

What it does

Checks for from_iter() function calls on types that implement the FromIterator trait.

Why is this bad?

If it’s needed to create a collection from the contents of an iterator, the Iterator::collect(_) method is preferred. However, when it’s needed to specify the container type, Vec::from_iter(_) can be more readable than using a turbofish (e.g. _.collect::<Vec<_>>()). See FromIterator documentation

Example

let five_fives = std::iter::repeat(5).take(5);

let v = Vec::from_iter(five_fives);

assert_eq!(v, vec![5, 5, 5, 5, 5]);

Use instead:

let five_fives = std::iter::repeat(5).take(5);

let v: Vec<i32> = five_fives.collect();

assert_eq!(v, vec![5, 5, 5, 5, 5]);

but prefer to use

let numbers: Vec<i32> = FromIterator::from_iter(1..=5);

instead of

let numbers = (1..=5).collect::<Vec<_>>();
Applicability: MaybeIncorrect(?)
Added in: 1.49.0

What it does

Searches for implementations of the Into<..> trait and suggests to implement From<..> instead.

Why is this bad?

According to the std docs, implementing From<..> is preferred since it gives you Into<..> for free where the reverse isn’t true.

Example

struct StringWrapper(String);

impl Into<StringWrapper> for String {
    fn into(self) -> StringWrapper {
        StringWrapper(self)
    }
}

Use instead:

struct StringWrapper(String);

impl From<String> for StringWrapper {
    fn from(s: String) -> StringWrapper {
        StringWrapper(s)
    }
}

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.51.0

What it does

Checks if we’re passing a c_void raw pointer to {Box,Rc,Arc,Weak}::from_raw(_)

Why is this bad?

When dealing with c_void raw pointers in FFI, it is easy to run into the pitfall of calling from_raw with the c_void pointer. The type signature of Box::from_raw is fn from_raw(raw: *mut T) -> Box<T>, so if you pass a *mut c_void you will get a Box<c_void> (and similarly for Rc, Arc and Weak). For this to be safe, c_void would need to have the same memory layout as the original type, which is often not the case.

Example

let ptr = Box::into_raw(Box::new(42usize)) as *mut c_void;
let _ = unsafe { Box::from_raw(ptr) };

Use instead:

let _ = unsafe { Box::from_raw(ptr as *mut usize) };
Applicability: Unspecified(?)
Added in: 1.67.0

What it does

Checks for function invocations of the form primitive::from_str_radix(s, 10)

Why is this bad?

This specific common use case can be rewritten as s.parse::<primitive>() (and in most cases, the turbofish can be removed), which reduces code length and complexity.

Known problems

This lint may suggest using (&<expression>).parse() instead of <expression>.parse() directly in some cases, which is correct but adds unnecessary complexity to the code.

Example

let input: &str = get_input();
let num = u16::from_str_radix(input, 10)?;

Use instead:

let input: &str = get_input();
let num: u16 = input.parse()?;
Applicability: MaybeIncorrect(?)
Added in: 1.52.0

What it does

This lint requires Future implementations returned from functions and methods to implement the Send marker trait, ignoring type parameters.

If a function is generic and its Future conditionally implements Send based on a generic parameter then it is considered Send and no warning is emitted.

This can be used by library authors (public and internal) to ensure their functions are compatible with both multi-threaded runtimes that require Send futures, as well as single-threaded runtimes where callers may choose !Send types for generic parameters.

Why is this bad?

A Future implementation captures some state that it needs to eventually produce its final value. When targeting a multithreaded executor (which is the norm on non-embedded devices) this means that this state may need to be transported to other threads, in other words the whole Future needs to implement the Send marker trait. If it does not, then the resulting Future cannot be submitted to a thread pool in the end user’s code.

Especially for generic functions it can be confusing to leave the discovery of this problem to the end user: the reported error location will be far from its cause and can in many cases not even be fixed without modifying the library where the offending Future implementation is produced.

Example

async fn not_send(bytes: std::rc::Rc<[u8]>) {}

Use instead:

async fn is_send(bytes: std::sync::Arc<[u8]>) {}
Applicability: Unspecified(?)
Added in: 1.44.0

What it does

Checks for usage of x.get(0) instead of x.first() or x.front().

Why is this bad?

Using x.first() for Vecs and slices or x.front() for VecDeques is easier to read and has the same result.

Example

let x = vec![2, 3, 5];
let first_element = x.get(0);

Use instead:

let x = vec![2, 3, 5];
let first_element = x.first();
Applicability: MachineApplicable(?)
Added in: 1.63.0

What it does

Checks for usage of x.get(x.len() - 1) instead of x.last().

Why is this bad?

Using x.last() is easier to read and has the same result.

Note that using x[x.len() - 1] is semantically different from x.last(). Indexing into the array will panic on out-of-bounds accesses, while x.get() and x.last() will return None.

There is another lint (get_unwrap) that covers the case of using x.get(index).unwrap() instead of x[index].

Example

let x = vec![2, 3, 5];
let last_element = x.get(x.len() - 1);

Use instead:

let x = vec![2, 3, 5];
let last_element = x.last();
Applicability: MachineApplicable(?)
Added in: 1.37.0

What it does

Checks for usage of .get().unwrap() (or .get_mut().unwrap()) on a standard library type which implements Index.

Why restrict this?

Using the Index trait ([]) is clearer and more concise.

Known problems

Not a replacement for error handling: Using either .unwrap() or the Index trait ([]) carries the risk of causing a panic if the value being accessed is None. If the use of .get().unwrap() is a temporary placeholder for dealing with the Option type, then this does not mitigate the need for error handling. If there is a chance that .get() will be None in your program, then it is advisable that the None case is handled in a future refactor instead of using .unwrap() or the Index trait.
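If the None case genuinely needs handling, a minimal sketch (with an illustrative vector and index) might look like this rather than unwrapping or indexing:

let some_vec = vec![0, 1, 2, 3];
match some_vec.get(7) {
    Some(value) => println!("found {value}"),
    None => eprintln!("index 7 is out of bounds"),
}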

Example

let mut some_vec = vec![0, 1, 2, 3];
let last = some_vec.get(3).unwrap();
*some_vec.get_mut(0).unwrap() = 1;

The correct use would be:

let mut some_vec = vec![0, 1, 2, 3];
let last = some_vec[3];
some_vec[0] = 1;
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for the usage of the to_ne_bytes method and/or the function from_ne_bytes.

Why restrict this?

To ensure use of explicitly chosen endianness rather than the target’s endianness, such as when implementing network protocols or file formats rather than FFI.

Example

let _x = 2i32.to_ne_bytes();
let _y = 2i64.to_ne_bytes();
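Where a fixed byte order is required, the explicit big- or little-endian conversions can be used instead (a sketch; which one is correct depends on the protocol or format):

let _x = 2i32.to_be_bytes(); // big-endian / network byte order
let _y = 2i64.to_le_bytes(); // little-endian formats
let _z = i32::from_be_bytes([0, 0, 0, 2]);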
Applicability: Unspecified(?)
Added in: 1.72.0

What it does

Checks for identity operations, e.g., x + 0.

Why is this bad?

This code can be removed without changing the meaning. So it just obscures what’s going on. Delete it mercilessly.

Example

x / 1 + 0 * 1 - 0 | 0;
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for Mutex::lock calls in an if let expression with lock calls in any of the else blocks.

Disabled starting in Edition 2024

This lint is effectively disabled starting in Edition 2024 as if let ... else scoping was reworked such that this is no longer an issue. See Proposal: stabilize if_let_rescope for Edition 2024

Why is this bad?

The Mutex lock remains held for the whole if let ... else block, so the lock call in the else branch deadlocks.

Example

if let Ok(thing) = mutex.lock() {
    do_thing();
} else {
    mutex.lock();
}

Should be written

let locked = mutex.lock();
if let Ok(thing) = locked {
    do_thing(thing);
} else {
    use_locked(locked);
}
Applicability: Unspecified(?)
Added in: 1.45.0

What it does

Checks for usage of ! or != in an if condition with an else branch.

Why is this bad?

Negations reduce the readability of statements.

Example

if !v.is_empty() {
    a()
} else {
    b()
}

Could be written:

if v.is_empty() {
    b()
} else {
    a()
}
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for if/else with the same body as the then part and the else part.

Why is this bad?

This is probably a copy & paste error.

Example

let foo = if … {
    42
} else {
    42
};
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for if-else that could be written using either bool::then or bool::then_some.

Why restrict this?

Looks a little redundant. Using bool::then is more concise and incurs no loss of clarity. For simple calculations and known values, use bool::then_some, which is eagerly evaluated in comparison to bool::then.

Example

let a = if v.is_empty() {
    println!("true!");
    Some(42)
} else {
    None
};

Could be written:

let a = v.is_empty().then(|| {
    println!("true!");
    42
});
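When the value is already known or cheap to compute, bool::then_some is the eager counterpart; a small sketch of the difference (expensive_computation is a hypothetical function):

// Eagerly evaluated: the value is computed regardless of the condition.
let a = v.is_empty().then_some(42);

// Lazily evaluated: the closure only runs when the condition is true.
let b = v.is_empty().then(|| expensive_computation());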

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: Unspecified(?)
Added in: 1.53.0

What it does

Checks for consecutive ifs with the same condition.

Why is this bad?

This is probably a copy & paste error.

Example

if a == b {
    …
} else if a == b {
    …
}

Note that this lint ignores all conditions with a function call as it could have side effects:

if foo() {
    …
} else if foo() { // not linted
    …
}

Configuration

  • ignore-interior-mutability: A list of paths to types that should be treated as if they do not contain interior mutability

    (default: ["bytes::Bytes"])

Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for usage of _ in patterns of type ().

Why is this bad?

Matching with () explicitly instead of _ outlines the fact that the pattern contains no data. Also it would detect a type change that _ would ignore.

Example

match std::fs::create_dir("tmp-work-dir") {
   Ok(_) => println!("Working directory created"),
   Err(s) => eprintln!("Could not create directory: {s}"),
}

Use instead:

match std::fs::create_dir("tmp-work-dir") {
   Ok(()) => println!("Working directory created"),
   Err(s) => eprintln!("Could not create directory: {s}"),
}
Applicability: MachineApplicable(?)
Added in: 1.73.0

What it does

This lint is concerned with the semantics of Borrow and Hash for a type that implements all three of Hash, Borrow<str> and Borrow<[u8]> as it is impossible to satisfy the semantics of Borrow and Hash for both Borrow<str> and Borrow<[u8]>.

Why is this bad?

When providing implementations for Borrow<T>, one should consider whether the different implementations should act as facets or representations of the underlying type. Generic code typically uses Borrow<T> when it relies on the identical behavior of these additional trait implementations. These traits will likely appear as additional trait bounds.

In particular Eq, Ord and Hash must be equivalent for borrowed and owned values: x.borrow() == y.borrow() should give the same result as x == y. It follows then that the following equivalence must hold: hash(x) == hash((x as Borrow<[u8]>).borrow()) == hash((x as Borrow<str>).borrow())

Unfortunately it doesn’t hold as hash("abc") != hash("abc".as_bytes()). This happens because the Hash impl for str passes an additional 0xFF byte to the hasher to avoid collisions. For example, given the tuples ("a", "bc"), and ("ab", "c"), the two tuples would have the same hash value if the 0xFF byte was not added.

Example

use std::borrow::Borrow;
use std::hash::{Hash, Hasher};

struct ExampleType {
    data: String
}

impl Hash for ExampleType {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.data.hash(state);
    }
}

impl Borrow<str> for ExampleType {
    fn borrow(&self) -> &str {
        &self.data
    }
}

impl Borrow<[u8]> for ExampleType {
    fn borrow(&self) -> &[u8] {
        self.data.as_bytes()
    }
}

As a consequence, hashing a &ExampleType and hashing the result of the two borrows will result in different values.
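A small sketch demonstrating the underlying mismatch with the default hasher (the hash_of helper is illustrative):

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn hash_of<T: Hash + ?Sized>(value: &T) -> u64 {
    let mut hasher = DefaultHasher::new();
    value.hash(&mut hasher);
    hasher.finish()
}

// The str impl feeds an extra byte to the hasher, so these differ.
assert_ne!(hash_of("abc"), hash_of("abc".as_bytes()));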

Applicability: Unspecified(?)
Added in: 1.76.0

What it does

Lints when impl Trait is being used in a function’s parameters.

Why restrict this?

Turbofish syntax (::<>) cannot be used to specify the type of an impl Trait parameter, making impl Trait less powerful. Readability may also be a factor.

Example

trait MyTrait {}
fn foo(a: impl MyTrait) {
	// [...]
}

Use instead:

trait MyTrait {}
fn foo<T: MyTrait>(a: T) {
	// [...]
}
Applicability: HasPlaceholders(?)
Added in: 1.69.0

What it does

Checks for the usage of _.to_owned(), vec.to_vec(), or similar when calling _.clone() would be clearer.

Why is this bad?

These methods do the same thing as _.clone() but may be confusing as to why we are calling to_vec on something that is already a Vec or calling to_owned on something that is already owned.

Example

let a = vec![1, 2, 3];
let b = a.to_vec();
let c = a.to_owned();

Use instead:

let a = vec![1, 2, 3];
let b = a.clone();
let c = a.clone();
Applicability: MachineApplicable(?)
Added in: 1.52.0

What it does

Checks for public impl or fn missing generalization over different hashers and implicitly defaulting to the default hashing algorithm (SipHash).

Why is this bad?

A HashMap or HashSet with a custom hasher cannot be used with such impls or functions.

Known problems

Suggestions for replacing constructors can contain false positives. Also, applying suggestions can require modification of other pieces of code, possibly including external crates.

Example

impl<K: Hash + Eq, V> Serialize for HashMap<K, V> { }

pub fn foo(map: &mut HashMap<i32, i32>) { }

could be rewritten as

impl<K: Hash + Eq, V, S: BuildHasher> Serialize for HashMap<K, V, S> { }

pub fn foo<S: BuildHasher>(map: &mut HashMap<i32, i32, S>) { }
Applicability: MaybeIncorrect(?)
Added in: pre 1.29.0

What it does

Checks for missing return statements at the end of a block.

Why restrict this?

Omitting the return keyword whenever possible is idiomatic Rust code, but:

  • Programmers coming from other languages might prefer the expressiveness of return.
  • It’s possible to miss the last returning statement because the only difference is a missing ;.
  • Especially in bigger code with multiple return paths, having a return keyword makes it easier to find the corresponding statements.

Example

fn foo(x: usize) -> usize {
    x
}

add return

fn foo(x: usize) -> usize {
    return x;
}
Applicability: MachineApplicable(?)
Added in: 1.33.0

What it does

Checks for implicit saturating addition.

Why is this bad?

The built-in function is more readable and may be faster.

Example

let mut u: u32 = 7000;

if u != u32::MAX {
    u += 1;
}

Use instead:

let mut u: u32 = 7000;

u = u.saturating_add(1);
Applicability: MachineApplicable(?)
Added in: 1.66.0

What it does

Checks for implicit saturating subtraction.

Why is this bad?

Simplicity and readability. Instead we can easily use a built-in function.

Example

let mut i: u32 = end - start;

if i != 0 {
    i -= 1;
}

Use instead:

let mut i: u32 = end - start;

i = i.saturating_sub(1);
Applicability: MachineApplicable(?)
Added in: 1.44.0

What it does

Looks for bounds in impl Trait in return position that are implied by other bounds. This can happen when a trait is specified that another trait already has as a supertrait (e.g. fn() -> impl Deref + DerefMut<Target = i32> has an unnecessary Deref bound, because Deref is a supertrait of DerefMut)

Why is this bad?

Specifying more bounds than necessary adds needless complexity for the reader.

Limitations

This lint does not check for implied bounds transitively. Meaning that it doesn’t check for implied bounds from supertraits of supertraits (e.g. trait A {} trait B: A {} trait C: B {}, then having an fn() -> impl A + C)

Example

fn f() -> impl Deref<Target = i32> + DerefMut<Target = i32> {
//             ^^^^^^^^^^^^^^^^^^^ unnecessary bound, already implied by the `DerefMut` trait bound
    Box::new(123)
}

Use instead:

fn f() -> impl DerefMut<Target = i32> {
    Box::new(123)
}
Applicability: MachineApplicable(?)
Added in: 1.74.0

What it does

Checks for double comparisons that can never succeed

Why is this bad?

The whole expression can be replaced by false, which is probably not the programmer’s intention

Example

if status_code <= 400 && status_code > 500 {}
Applicability: Unspecified(?)
Added in: 1.73.0

What it does

Looks for floating-point expressions that can be expressed using built-in methods to improve accuracy at the cost of performance.

Why is this bad?

Negatively impacts accuracy.

Example

let a = 3f32;
let _ = a.powf(1.0 / 3.0);
let _ = (1.0 + a).ln();
let _ = a.exp() - 1.0;

Use instead:

let a = 3f32;
let _ = a.cbrt();
let _ = a.ln_1p();
let _ = a.exp_m1();
Applicability: MachineApplicable(?)
Added in: 1.43.0

What it does

This lint checks that no function newer than the defined MSRV (minimum supported rust version) is used in the crate.

Why is this bad?

It would prevent the crate from actually being usable with the specified MSRV.

Example

// MSRV of 1.3.0
use std::thread::sleep;
use std::time::Duration;

// Sleep was defined in `1.4.0`.
sleep(Duration::new(1, 0));

To fix this problem, either increase your MSRV or use another item available in your current MSRV.

Applicability: Unspecified(?)
Added in: 1.78.0

What it does

Warns if an integral or floating-point constant is grouped inconsistently with underscores.

Why is this bad?

Readers may incorrectly interpret inconsistently grouped digits.

Example

618_64_9189_73_511

Use instead:

61_864_918_973_511
Applicability: MaybeIncorrect(?)
Added in: pre 1.29.0

What it does

Checks for struct constructors where all fields are shorthand and the order of the field init shorthand in the constructor is inconsistent with the order in the struct definition.

Why is this bad?

Since the order of fields in a constructor doesn’t affect the resulting instance, as the example below indicates,

#[derive(Debug, PartialEq, Eq)]
struct Foo {
    x: i32,
    y: i32,
}
let x = 1;
let y = 2;

// This assertion never fails:
assert_eq!(Foo { x, y }, Foo { y, x });

inconsistent order can be confusing and decreases readability and consistency.

Example

struct Foo {
    x: i32,
    y: i32,
}
let x = 1;
let y = 2;

Foo { y, x };

Use instead:

Foo { x, y };
Applicability: MachineApplicable(?)
Added in: 1.52.0

What it does

The lint checks for slice bindings in patterns that are only used to access individual slice values.

Why is this bad?

Accessing slice values using indices can lead to panics. Using refutable patterns can avoid these. Binding to individual values also improves the readability as they can be named.

Limitations

This lint currently only checks for immutable access inside if let patterns.

Example

let slice: Option<&[u32]> = Some(&[1, 2, 3]);

if let Some(slice) = slice {
    println!("{}", slice[0]);
}

Use instead:

let slice: Option<&[u32]> = Some(&[1, 2, 3]);

if let Some(&[first, ..]) = slice {
    println!("{}", first);
}

Configuration

  • max-suggested-slice-pattern-length: When Clippy suggests using a slice pattern, this is the maximum number of elements allowed in the slice pattern that is suggested. If more elements are necessary, the lint is suppressed. For example, [_, _, _, e, ..] is a slice pattern with 4 elements.

    (default: 3)

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MaybeIncorrect(?)
Added in: 1.59.0

What it does

Checks for usage of indexing or slicing. Arrays are special cases: this lint does report on arrays if we can tell that slicing operations are in bounds, and it does not lint on constant usize indexing on arrays because that is handled by rustc’s const_err lint.

Why restrict this?

To avoid implicit panics from indexing and slicing. There are “checked” alternatives which do not panic, and can be used with unwrap() to make an explicit panic when it is desired.

Limitations

This lint does not check for the usage of indexing or slicing on strings. These are covered by the more specific string_slice lint.

Example

// Vector
let x = vec![0; 5];

x[2];
&x[2..100];

// Array
let y = [0, 1, 2, 3];

&y[10..100];
&y[10..];

Use instead:

x.get(2);
x.get(2..100);

y.get(10);
y.get(10..100);

Configuration

  • suppress-restriction-lint-in-const: Whether to suppress a restriction lint in constant code. In some cases the restricted operation might be unavoidable, as the suggested counterparts are unavailable in constant code. This configuration will cause restriction lints to trigger even if no suggestion can be made.

    (default: false)

Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for bit masks in comparisons which can be removed without changing the outcome. The basic structure can be seen in the following table:

Comparison    Bit Op    Example       equals
> / <=        | / ^     x | 2 > 3     x > 3
< / >=        | / ^     x ^ 1 < 4     x < 4

Why is this bad?

Not as evil as bad_bit_mask, but still a bit misleading, because the bit mask is ineffective.

Known problems

False negatives: This lint will only match instances where we have figured out the math (which is for a power-of-two compared value). This means things like x | 1 >= 7 (which would be better written as x >= 6) will not be reported (but bit masks like this are fairly uncommon).

Example

if (x | 1 > 3) {  }

Use instead:

if (x >= 2) {  }
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks if both the .write(true) and .append(true) methods are called on the same OpenOptions.

Why is this bad?

.append(true) already enables write(true), making this one superfluous.

Example

let _ = OpenOptions::new()
           .write(true)
           .append(true)
           .create(true)
           .open("file.json");

Use instead:

let _ = OpenOptions::new()
           .append(true)
           .create(true)
           .open("file.json");
Applicability: MachineApplicable(?)
Added in: 1.76.0

What it does

Checks for usage of .to_string() on an &&T where T implements ToString directly (like &&str or &&String).

Why is this bad?

This bypasses the specialized implementation of ToString and instead goes through the more expensive string formatting facilities.

Example

// Generic implementation for `T: Display` is used (slow)
["foo", "bar"].iter().map(|s| s.to_string());

// OK, the specialized impl is used
["foo", "bar"].iter().map(|&s| s.to_string());
Applicability: MachineApplicable(?)
Added in: 1.40.0

What it does

Checks for matches being used to destructure a single-variant enum or tuple struct where a let will suffice.

Why is this bad?

Just readability – let doesn’t nest, whereas a match does.

Example

enum Wrapper {
    Data(i32),
}

let wrapper = Wrapper::Data(42);

let data = match wrapper {
    Wrapper::Data(i) => i,
};

The correct use would be:

enum Wrapper {
    Data(i32),
}

let wrapper = Wrapper::Data(42);
let Wrapper::Data(data) = wrapper;
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for iteration that is guaranteed to be infinite.

Why is this bad?

While there may be places where this is acceptable (e.g., in event streams), in most cases this is simply an error.

Example

use std::iter;

iter::repeat(1_u8).collect::<Vec<_>>();
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for infinite loops in a function where the return type is not ! and lints accordingly.

Why restrict this?

Making the return type ! serves as documentation that the function does not return. If the function is not intended to loop infinitely, then this lint may detect a bug.

Example

fn run_forever() {
    loop {
        // do something
    }
}

If infinite loops are as intended:

fn run_forever() -> ! {
    loop {
        // do something
    }
}

Otherwise add a break or return condition:

fn run_forever() {
    loop {
        // do something
        if condition {
            break;
        }
    }
}
Applicability: MaybeIncorrect(?)
Added in: 1.76.0

What it does

Checks for the definition of inherent methods with a signature of to_string(&self) -> String.

Why is this bad?

This method is also implicitly defined if a type implements the Display trait. As the functionality of Display is much more versatile, it should be preferred.

Example

pub struct A;

impl A {
    pub fn to_string(&self) -> String {
        "I am A".to_string()
    }
}

Use instead:

use std::fmt;

pub struct A;

impl fmt::Display for A {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "I am A")
    }
}
Applicability: Unspecified(?)
Added in: 1.38.0

What it does

Checks for the definition of inherent methods with a signature of to_string(&self) -> String and if the type implementing this method also implements the Display trait.

Why is this bad?

This method is also implicitly defined if a type implements the Display trait. The less versatile inherent method will then shadow the implementation introduced by Display.

Example

use std::fmt;

pub struct A;

impl A {
    pub fn to_string(&self) -> String {
        "I am A".to_string()
    }
}

impl fmt::Display for A {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "I am A, too")
    }
}

Use instead:

use std::fmt;

pub struct A;

impl fmt::Display for A {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "I am A")
    }
}
Applicability: Unspecified(?)
Added in: 1.38.0

What it does

Checks for tuple structs initialized with field syntax. It will however not lint if a base initializer is present. The lint will also ignore code in macros.

Why is this bad?

This may be confusing to the uninitiated and adds no benefit compared to tuple initializers.

Example

struct TupleStruct(u8, u16);

let _ = TupleStruct {
    0: 1,
    1: 23,
};

// should be written as
let base = TupleStruct(1, 23);

// This is OK however
let _ = TupleStruct { 0: 42, ..base };
Applicability: MachineApplicable(?)
Added in: 1.59.0

What it does

Checks for items annotated with #[inline(always)], unless the annotated function is empty or simply panics.

Why is this bad?

While there are valid uses of this annotation (and once you know when to use it, by all means allow this lint), it’s a common newbie-mistake to pepper one’s code with it.

As a rule of thumb, before slapping #[inline(always)] on a function, measure if that additional function call really affects your runtime profile sufficiently to make up for the increase in compile time.

Known problems

False positives, big time. This lint is meant to be deactivated by everyone doing serious performance work. This means having done the measurement.

Example

#[inline(always)]
fn not_quite_hot_code(..) { ... }
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for usage of AT&T x86 assembly syntax.

Why restrict this?

To enforce consistent use of Intel x86 assembly syntax.

Example

asm!("lea ({}), {}", in(reg) ptr, lateout(reg) _, options(att_syntax));

Use instead:

asm!("lea {}, [{}]", lateout(reg) _, in(reg) ptr);
Applicability: Unspecified(?)
Added in: 1.49.0

What it does

Checks for usage of Intel x86 assembly syntax.

Why restrict this?

To enforce consistent use of AT&T x86 assembly syntax.

Example

asm!("lea {}, [{}]", lateout(reg) _, in(reg) ptr);

Use instead:

asm!("lea ({}), {}", in(reg) ptr, lateout(reg) _, options(att_syntax));
Applicability: Unspecified(?)
Added in: 1.49.0

What it does

Checks for #[inline] on trait methods without bodies

Why is this bad?

Only implementations of trait methods may be inlined. The inline attribute is ignored for trait methods without bodies.

Example

trait Animal {
    #[inline]
    fn name(&self) -> &'static str;
}
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for usage of inspect().for_each().

Why is this bad?

It is the same as performing the computation inside inspect at the beginning of the closure in for_each.

Example

[1,2,3,4,5].iter()
.inspect(|&x| println!("inspect the number: {}", x))
.for_each(|&x| {
    assert!(x >= 0);
});

Can be written as

[1,2,3,4,5].iter()
.for_each(|&x| {
    println!("inspect the number: {}", x);
    assert!(x >= 0);
});
Applicability: Unspecified(?)
Added in: 1.51.0

What it does

Checks for usage of x >= y + 1 or x - 1 >= y (and <=) in a block

Why is this bad?

Readability – it is clearer to write x > y than x >= y + 1.

Example

if x >= y + 1 {}

Use instead:

if x > y {}
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for division of integers

Why restrict this?

Outside of some very specific algorithms, integer division is very often a mistake because it discards the remainder.

Example

let x = 3 / 2;
println!("{}", x);

Use instead:

let x = 3f32 / 2f32;
println!("{}", x);
Applicability: Unspecified(?)
Added in: 1.37.0

What it does

Checks for the usage of division (/) and remainder (%) operations when performed on any integer types using the default Div and Rem trait implementations.

Why restrict this?

In cryptographic contexts, division can result in timing side-channel vulnerabilities, and needs to be replaced with constant-time code instead (e.g. Barrett reduction).

Example

let my_div = 10 / 2;

Use instead:

let my_div = 10 >> 1;
Applicability: Unspecified(?)
Added in: 1.79.0

What it does

Checks for into_iter calls on references which should be replaced by iter or iter_mut.

Why is this bad?

Readability. Calling into_iter on a reference will not move out its content into the resulting iterator, which is confusing. It is better to just call iter or iter_mut directly.

Example

(&vec).into_iter();

Use instead:

(&vec).iter();
Applicability: MachineApplicable(?)
Added in: 1.32.0

What it does

This is the opposite of the iter_without_into_iter lint. It looks for IntoIterator for (&|&mut) Type implementations without an inherent iter or iter_mut method on the type or on any of the types in its Deref chain.

Why is this bad?

It’s not bad, but having them is idiomatic and allows the type to be used in iterator chains by just calling .iter(), instead of the more awkward <&Type>::into_iter or (&val).into_iter() syntax in case of ambiguity with another IntoIterator impl.

Limitations

This lint focuses on providing an idiomatic API. Therefore, it will only lint on types which are accessible outside of the crate. For internal types, these methods can be added on demand if they are actually needed. Otherwise, it would trigger the dead_code lint for the unused method.

Example

struct MySlice<'a>(&'a [u8]);
impl<'a> IntoIterator for &MySlice<'a> {
    type Item = &'a u8;
    type IntoIter = std::slice::Iter<'a, u8>;
    fn into_iter(self) -> Self::IntoIter {
        self.0.iter()
    }
}

Use instead:

struct MySlice<'a>(&'a [u8]);
impl<'a> MySlice<'a> {
    pub fn iter(&self) -> std::slice::Iter<'a, u8> {
        self.into_iter()
    }
}
impl<'a> IntoIterator for &MySlice<'a> {
    type Item = &'a u8;
    type IntoIter = std::slice::Iter<'a, u8>;
    fn into_iter(self) -> Self::IntoIter {
        self.0.iter()
    }
}
Applicability: Unspecified(?)
Added in: 1.75.0

What it does

This lint checks for invalid usages of ptr::null.

Why is this bad?

This causes undefined behavior.

Example

// Undefined behavior
unsafe { std::slice::from_raw_parts(ptr::null(), 0); }

Use instead:

unsafe { std::slice::from_raw_parts(NonNull::dangling().as_ptr(), 0); }
Applicability: MachineApplicable(?)
Added in: 1.53.0

What it does

Checks regex creation (with Regex::new, RegexBuilder::new, or RegexSet::new) for correct regex syntax.

Why is this bad?

This will lead to a runtime panic.

Example

Regex::new("(")
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for comparisons where the relation is always either true or false, but where one side has been upcast so that the comparison is necessary. Only integer types are checked.

Why is this bad?

An expression like let x : u8 = ...; (x as u32) > 300 will mistakenly imply that it is possible for x to be outside the range of u8.

Known problems

https://github.com/rust-lang/rust-clippy/issues/886

Example

let x: u8 = 1;
(x as u32) > 300;
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for comparisons between integers, followed by subtracting the greater value from the lower one.

Why is this bad?

This could result in an underflow and is most likely not what the user wants. If this was intended to be a saturated subtraction, consider using the saturating_sub method directly.

Example

let a = 12u32;
let b = 13u32;

let result = if a > b { b - a } else { 0 };

Use instead:

let a = 12u32;
let b = 13u32;

let result = a.saturating_sub(b);
Applicability: MaybeIncorrect(?)
Added in: 1.44.0

What it does

Checks for invisible Unicode characters in the code.

Why is this bad?

Having an invisible character in the code makes for all sorts of April fools, but otherwise is very much frowned upon.

Example

You don’t see it, but there may be a zero-width space or soft hyphen some­where in this text.

Past names

  • zero_width_space
Applicability: MachineApplicable(?)
Added in: 1.49.0

What it does

Finds usages of char::is_digit that can be replaced with is_ascii_digit or is_ascii_hexdigit.

Why is this bad?

is_digit(..) is slower and requires specifying the radix.

Example

let c: char = '6';
c.is_digit(10);
c.is_digit(16);

Use instead:

let c: char = '6';
c.is_ascii_digit();
c.is_ascii_hexdigit();
Applicability: MachineApplicable(?)
Added in: 1.62.0

What it does

Checks for items declared after some statement in a block.

Why is this bad?

Items live for the entire scope they are declared in. But statements are processed in order. This might cause confusion as it’s hard to figure out which item is meant in a statement.

Example

fn foo() {
    println!("cake");
}

fn main() {
    foo(); // prints "foo"
    fn foo() {
        println!("foo");
    }
    foo(); // prints "foo"
}

Use instead:

fn foo() {
    println!("cake");
}

fn main() {
    fn foo() {
        println!("foo");
    }
    foo(); // prints "foo"
    foo(); // prints "foo"
}
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Triggers if an item is declared after the testing module marked with #[cfg(test)].

Why is this bad?

Having items declared after the testing module is confusing and may lead to bad test coverage.

Example

#[cfg(test)]
mod tests {
    // [...]
}

fn my_function() {
    // [...]
}

Use instead:

fn my_function() {
    // [...]
}

#[cfg(test)]
mod tests {
    // [...]
}
Applicability: MachineApplicable(?)
Added in: 1.71.0

What it does

Checks for the use of .cloned().collect() on slice to create a Vec.

Why is this bad?

.to_vec() is clearer

Example

let s = [1, 2, 3, 4, 5];
let s2: Vec<isize> = s[..].iter().cloned().collect();

The better use would be:

let s = [1, 2, 3, 4, 5];
let s2: Vec<isize> = s.to_vec();
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for the use of .iter().count().

Why is this bad?

.len() is more efficient and more readable.

Example

let some_vec = vec![0, 1, 2, 3];

some_vec.iter().count();
&some_vec[..].iter().count();

Use instead:

let some_vec = vec![0, 1, 2, 3];

some_vec.len();
&some_vec[..].len();
Applicability: MachineApplicable(?)
Added in: 1.52.0

What it does

Checks for usage of .filter(Result::is_ok) that may be replaced with a .flatten() call. This lint will require additional changes to the follow-up calls as it affects the type.

Why is this bad?

This pattern is often followed by manual unwrapping of Result. The simplification results in more readable and succinct code without the need for manual unwrapping.

Example

vec![Ok::<i32, String>(1)].into_iter().filter(Result::is_ok);

Use instead:

vec![Ok::<i32, String>(1)].into_iter().flatten();
Applicability: HasPlaceholders(?)
Added in: 1.77.0

What it does

Checks for usage of .filter(Option::is_some) that may be replaced with a .flatten() call. This lint will require additional changes to the follow-up calls as it affects the type.

Why is this bad?

This pattern is often followed by manual unwrapping of the Option. The simplification results in more readable and succinct code without the need for manual unwrapping.

Example

vec![Some(1)].into_iter().filter(Option::is_some);

Use instead:

vec![Some(1)].into_iter().flatten();
Applicability: HasPlaceholders(?)
Added in: 1.77.0

What it does

Checks for iterating a map (HashMap or BTreeMap) and ignoring either the keys or values.

Why is this bad?

Readability. There are keys and values methods that can be used to express that we only need the keys or the values.

Example

let map: HashMap<u32, u32> = HashMap::new();
let values = map.iter().map(|(_, value)| value).collect::<Vec<_>>();

Use instead:

let map: HashMap<u32, u32> = HashMap::new();
let values = map.values().collect::<Vec<_>>();

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.66.0

What it does

Checks for loops on x.next().

Why is this bad?

next() returns either Some(value) if there was a value, or None otherwise. The insidious thing is that Option<_> implements IntoIterator, so that possibly one value will be iterated, leading to some hard to find bugs. No one will want to write such code except to win an Underhanded Rust Contest.

Example

for x in y.next() {
    ..
}
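Depending on the intent, one of the following sketches is likely what was meant:

// Handle only the next element, if any:
if let Some(x) = y.next() {
    // ..
}

// Or consume the whole iterator:
for x in y {
    // ..
}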
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for usage of iter().next() on a Slice or an Array

Why is this bad?

These can be shortened into .get()

Example

a[2..].iter().next();
b.iter().next();

should be written as:

a.get(2);
b.get(0);
Applicability: MachineApplicable(?)
Added in: 1.46.0

What it does

Detects methods named iter or iter_mut that do not have a return type that implements Iterator.

Why is this bad?

Methods named iter or iter_mut conventionally return an Iterator.

Example

// `String` does not implement `Iterator`
struct Data {}
impl Data {
    fn iter(&self) -> String {
        todo!()
    }
}

Use instead:

use std::str::Chars;
struct Data {}
impl Data {
   fn iter(&self) -> Chars<'static> {
       todo!()
   }
}
Applicability: Unspecified(?)
Added in: 1.57.0

What it does

Checks for usage of .iter().nth()/.iter_mut().nth() on standard library types that have equivalent .get()/.get_mut() methods.

Why is this bad?

.get() and .get_mut() are equivalent but more concise.

Example

let some_vec = vec![0, 1, 2, 3];
let bad_vec = some_vec.iter().nth(3);
let bad_slice = &some_vec[..].iter().nth(3);

The correct use would be:

let some_vec = vec![0, 1, 2, 3];
let bad_vec = some_vec.get(3);
let bad_slice = &some_vec[..].get(3);
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for the use of iter.nth(0).

Why is this bad?

iter.next() is equivalent to iter.nth(0), as they both consume the next element, but is more readable.

Example

let x = s.iter().nth(0);

Use instead:

let x = s.iter().next();
Applicability: MachineApplicable(?)
Added in: 1.42.0

What it does

Checks for calls to iter, iter_mut or into_iter on empty collections

Why is this bad?

It is simpler to use the empty function from the standard library:

Example

use std::{slice, option};
let a: slice::Iter<i32> = [].iter();
let f: option::IntoIter<i32> = None.into_iter();

Use instead:

use std::iter;
let a: iter::Empty<i32> = iter::empty();
let b: iter::Empty<i32> = iter::empty();

Known problems

The type of the resulting iterator might become incompatible with its usage

Applicability: MaybeIncorrect(?)
Added in: 1.65.0

What it does

Checks for calls to iter, iter_mut or into_iter on collections containing a single item

Why is this bad?

It is simpler to use the once function from the standard library:

Example

let a = [123].iter();
let b = Some(123).into_iter();

Use instead:

use std::iter;
let a = iter::once(&123);
let b = iter::once(123);

Known problems

The type of the resulting iterator might become incompatible with its usage

Applicability: MaybeIncorrect(?)
Added in: 1.65.0

What it does

Looks for iterator combinator calls such as .take(x) or .skip(x) where x is greater than the amount of items that an iterator will produce.

Why is this bad?

Taking or skipping more items than there are in an iterator either creates an iterator with all items from the original iterator or an iterator with no items at all. This is most likely not what the user intended to do.

Example

for _ in [1, 2, 3].iter().take(4) {}

Use instead:

for _ in [1, 2, 3].iter() {}
Applicability: Unspecified(?)
Added in: 1.74.0

What it does

This is a restriction lint which prevents the use of hash types (i.e., HashSet and HashMap) in for loops.

Why restrict this?

Because hash types are unordered, when iterated through such as in a for loop, the values are returned in an undefined order. As a result, on redundant systems this may cause inconsistencies and anomalies. In addition, the unknown order of the elements may reduce readability or introduce other undesired side effects.

Example

    let my_map = std::collections::HashMap::<i32, String>::new();
    for (key, value) in my_map { /* ... */ }

Use instead:

    let my_map = std::collections::HashMap::<i32, String>::new();
    let mut keys = my_map.keys().clone().collect::<Vec<_>>();
    keys.sort();
    for key in keys {
        let value = &my_map[key];
    }
Applicability: Unspecified(?)
Added in: 1.76.0

What it does

Checks for usage of _.cloned().<func>() where the call to .cloned() can be postponed.

Why is this bad?

It’s often inefficient to clone all elements of an iterator, when eventually, only some of them will be consumed.

Known problems

This lint removes the side effect of cloning items in the iterator. Code that relies on that side effect could fail.

Examples

vec.iter().cloned().take(10);
vec.iter().cloned().last();

Use instead:

vec.iter().take(10).cloned();
vec.iter().last().cloned();
Applicability: MachineApplicable(?)
Added in: 1.60.0

What it does

Checks for usage of .skip(x).next() on iterators.

Why is this bad?

.nth(x) is cleaner

Example

let some_vec = vec![0, 1, 2, 3];
let bad_vec = some_vec.iter().skip(3).next();
let bad_slice = &some_vec[..].iter().skip(3).next();

The correct use would be:

let some_vec = vec![0, 1, 2, 3];
let bad_vec = some_vec.iter().nth(3);
let bad_slice = &some_vec[..].iter().nth(3);
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for usage of .skip(0) on iterators.

Why is this bad?

This was likely intended to be .skip(1) to skip the first element, as .skip(0) does nothing. If not, the call should be removed.

Example

let v = vec![1, 2, 3];
let x = v.iter().skip(0).collect::<Vec<_>>();
let y = v.iter().collect::<Vec<_>>();
assert_eq!(x, y);
Applicability: MaybeIncorrect(?)
Added in: 1.73.0

What it does

Checks for usage of .drain(..) on Vec and VecDeque for iteration.

Why is this bad?

.into_iter() is simpler with better performance.

Example

let mut foo = vec![0, 1, 2, 3];
let bar: HashSet<usize> = foo.drain(..).collect();

Use instead:

let foo = vec![0, 1, 2, 3];
let bar: HashSet<usize> = foo.into_iter().collect();
Applicability: MaybeIncorrect(?)
Added in: 1.61.0

What it does

Looks for iter and iter_mut methods without an associated IntoIterator for (&|&mut) Type implementation.

Why is this bad?

It’s not bad, but having them is idiomatic and allows the type to be used in for loops directly (for val in &iter {}), without having to first call iter() or iter_mut().

Limitations

This lint focuses on providing an idiomatic API. Therefore, it will only lint on types which are accessible outside of the crate. For internal types, the IntoIterator trait can be implemented on demand if it is actually needed.

Example

struct MySlice<'a>(&'a [u8]);
impl<'a> MySlice<'a> {
    pub fn iter(&self) -> std::slice::Iter<'a, u8> {
        self.0.iter()
    }
}

Use instead:

struct MySlice<'a>(&'a [u8]);
impl<'a> MySlice<'a> {
    pub fn iter(&self) -> std::slice::Iter<'a, u8> {
        self.0.iter()
    }
}
impl<'a> IntoIterator for &MySlice<'a> {
    type Item = &'a u8;
    type IntoIter = std::slice::Iter<'a, u8>;
    fn into_iter(self) -> Self::IntoIter {
        self.iter()
    }
}
Applicability: Unspecified(?)
Added in: 1.75.0

What it does

Checks for calling .step_by(0) on iterators which panics.

Why is this bad?

This very much looks like an oversight. Use panic!() instead if you actually intend to panic.

Example

for x in (0..100).step_by(0) {
    //..
}
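A sketch of the two likely intents (the step value is illustrative):

// Step by a non-zero amount:
for x in (0..100).step_by(2) {
    //..
}

// Or panic explicitly if that is really the intent:
panic!("step must not be zero");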
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for calls to Path::join that start with a path separator (\\ or /).

Why is this bad?

If the argument to Path::join starts with a separator, it will overwrite the original path. If this is intentional, prefer using Path::new instead.

Note that the behavior is platform dependent: a leading \\ will be accepted on Unix systems as part of the file name.

See Path::join

Example

let path = Path::new("/bin");
let joined_path = path.join("/sh");
assert_eq!(joined_path, PathBuf::from("/sh"));

Use instead:

let path = Path::new("/bin");

// If this was unintentional, remove the leading separator
let joined_path = path.join("sh");
assert_eq!(joined_path, PathBuf::from("/bin/sh"));

// If this was intentional, create a new path instead
let new = Path::new("/sh");
assert_eq!(new, PathBuf::from("/sh"));
Applicability: Unspecified(?)
Added in: 1.76.0

What it does

Checks if you have variables whose name consists of just underscores and digits.

Why is this bad?

It’s hard to memorize what a variable means without a descriptive name.

Example

let _1 = 1;
let ___1 = 1;
let __1___2 = 11;
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for large const arrays that should be defined as static instead.

Why is this bad?

Performance: const variables are inlined upon use. Static items result in only one instance and have a fixed location in memory.

Example

pub const A: [u32; 1_000_000] = [0u32; 1_000_000];

Use instead:

pub static A: [u32; 1_000_000] = [0u32; 1_000_000];

Configuration

  • array-size-threshold: The maximum allowed size for arrays on the stack

    (default: 16384)

Applicability: MachineApplicable(?)
Added in: 1.44.0

What it does

Warns if the digits of an integral or floating-point constant are grouped into groups that are too large.

Why is this bad?

Negatively impacts readability.

Example

let x: u64 = 6186491_8973511;
Applicability: MaybeIncorrect(?)
Added in: pre 1.29.0

What it does

Checks for large size differences between variants on enums.

Why is this bad?

Enum size is bounded by the largest variant. Having one large variant can penalize the memory layout of that enum.

Known problems

This lint obviously cannot take the distribution of variants in your running program into account. It is possible that the smaller variants make up less than 1% of all instances, in which case the overhead is negligible and the boxing is counter-productive. Always measure the change this lint suggests.

For types that implement Copy, the suggestion to Box a variant’s data would require removing the trait impl. The types can of course still be Clone, but that is worse ergonomically. Depending on the use case it may be possible to store the large data in an auxiliary structure (e.g. Arena or ECS).

The lint will ignore the impact of generic types to the type layout by assuming every type parameter is zero-sized. Depending on your use case, this may lead to a false positive.

Example

enum Test {
    A(i32),
    B([i32; 8000]),
}

Use instead:

// Possibly better
enum Test2 {
    A(i32),
    B(Box<[i32; 8000]>),
}

Configuration

  • enum-variant-size-threshold: The maximum size of an enum’s variant to avoid box suggestion

    (default: 200)

Applicability: MaybeIncorrect(?)
Added in: pre 1.29.0

What it does

It checks for the size of a Future created by async fn or async {}.

Why is this bad?

Due to the current, less-than-ideal implementation of Coroutine, a large Future may cause stack overflows.

Example

async fn large_future(_x: [u8; 16 * 1024]) {}

pub async fn trigger() {
    large_future([0u8; 16 * 1024]).await;
}

Box::pin the big future instead.

async fn large_future(_x: [u8; 16 * 1024]) {}

pub async fn trigger() {
    Box::pin(large_future([0u8; 16 * 1024])).await;
}

Configuration

  • future-size-threshold: The maximum byte size a Future can have, before it triggers the clippy::large_futures lint

    (default: 16384)

Applicability: Unspecified(?)
Added in: 1.70.0

What it does

Checks for the inclusion of large files via include_bytes!() or include_str!().

Why restrict this?

Including large files can undesirably increase the size of the binary produced by the compiler. This lint may be used to catch mistakes where an unexpectedly large file is included, or temporarily to obtain a list of all large files.

Example

let included_str = include_str!("very_large_file.txt");
let included_bytes = include_bytes!("very_large_file.txt");

Use instead:

use std::fs;

// You can load the file at runtime
let string = fs::read_to_string("very_large_file.txt")?;
let bytes = fs::read("very_large_file.txt")?;

Configuration

  • max-include-file-size: The maximum size of a file included via include_bytes!() or include_str!(), in bytes

    (default: 1000000)

Applicability: Unspecified(?)
Added in: 1.62.0

What it does

Checks for local arrays that may be too large.

Why is this bad?

Large local arrays may cause stack overflow.

Example

let a = [0u32; 1_000_000];

Configuration

  • array-size-threshold: The maximum allowed size for arrays on the stack

    (default: 16384)

Applicability: Unspecified(?)
Added in: 1.41.0

What it does

Checks for functions that use a lot of stack space.

This often happens when constructing a large type, such as an array with a lot of elements, or constructing many smaller-but-still-large structs, or copying around a lot of large types.

This lint is a more general version of large_stack_arrays that is intended to look at functions as a whole instead of only individual array expressions inside of a function.

Why is this bad?

The stack region of memory is very limited in size (usually much smaller than the heap) and attempting to use too much will result in a stack overflow and crash the program. To avoid this, you should consider allocating large types on the heap instead (e.g. by boxing them).

Keep in mind that the code path to construction of large types does not even need to be reachable; it purely needs to exist inside of the function to contribute to the stack size. For example, this causes a stack overflow even though the branch is unreachable:

fn main() {
    if false {
        let x = [0u8; 10000000]; // 10 MB stack array
        black_box(&x);
    }
}

Known issues

False positives. The stack size that clippy sees is an estimate and can differ vastly from the actual stack usage after optimization passes have run (this is especially true in release mode). Modern compilers are very smart and are able to optimize away a lot of unnecessary stack allocations. In debug mode, however, it is usually more accurate.

This lint works by summing up the size of all variables that the user typed, variables that were implicitly introduced by the compiler for temporaries, function arguments and the return value, and comparing the total against a configurable (and high by default) threshold.

Example

This function creates four 500 KB arrays on the stack. Quite big but just small enough to not trigger large_stack_arrays. However, looking at the function as a whole, it’s clear that this uses a lot of stack space.

struct QuiteLargeType([u8; 500_000]);
fn foo() {
    // ... some function that uses a lot of stack space ...
    let _x1 = QuiteLargeType([0; 500_000]);
    let _x2 = QuiteLargeType([0; 500_000]);
    let _x3 = QuiteLargeType([0; 500_000]);
    let _x4 = QuiteLargeType([0; 500_000]);
}

Instead of doing this, allocate the arrays on the heap. This currently requires going through a Vec first and then converting it to a Box:

struct NotSoLargeType(Box<[u8]>);

fn foo() {
    let _x1 = NotSoLargeType(vec![0; 500_000].into_boxed_slice());
//                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^  Now heap allocated.
//                                                                The size of `NotSoLargeType` is 16 bytes.
//  ...
}

Configuration

  • stack-size-threshold: The maximum allowed stack size for functions in bytes

    (default: 512000)

Applicability: Unspecified(?)
Added in: 1.72.0

What it does

Checks for functions taking arguments by value, where the argument type is Copy and large enough to be worth considering passing by reference. Does not trigger if the function is exported (because the suggested change might induce API breakage), if the parameter is declared as mutable, or if the argument is self.

Why is this bad?

Arguments passed by value might result in an unnecessary shallow copy, taking up more space in the stack and requiring a call to memcpy, which can be expensive.

Example

#[derive(Clone, Copy)]
struct TooLarge([u8; 2048]);

fn foo(v: TooLarge) {}

Use instead:

fn foo(v: &TooLarge) {}

Configuration

  • avoid-breaking-exported-api: Suppress lints whenever the suggested change would cause breakage for other crates.

    (default: true)

  • pass-by-value-size-limit: The minimum size (in bytes) to consider a type for passing by reference instead of by value.

    (default: 256)

Applicability: MaybeIncorrect(?)
Added in: 1.49.0

What it does

Checks for usage of <integer>::max_value(), std::<integer>::MAX, std::<float>::EPSILON, etc.

Why is this bad?

All of these have been superseded by the associated constants on their respective types, such as i128::MAX. These legacy items may be deprecated in a future version of rust.

Example

let eps = std::f32::EPSILON;

Use instead:

let eps = f32::EPSILON;

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MaybeIncorrect(?)
Added in: 1.79.0

What it does

Checks for items that implement .len() but not .is_empty().

Why is this bad?

It is good practice to have both methods, because for some data structures, asking about the length will be a costly operation, whereas .is_empty() can usually answer in constant time. Also, it used to lead to false positives on the len_zero lint – currently that lint will ignore such entities.

Example

impl X {
    pub fn len(&self) -> usize {
        ..
    }
}
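
A sketch of what the lint expects, assuming an emptiness check based on len() is appropriate for the type:

impl X {
    pub fn len(&self) -> usize {
        ..
    }
    // Answering emptiness directly, usually in constant time
    pub fn is_empty(&self) -> bool {
        self.len() == 0
    }
}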
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for getting the length of something via .len() just to compare to zero, and suggests using .is_empty() where applicable.

Why is this bad?

Some structures can answer .is_empty() much faster than calculating their length. So it is good to get into the habit of using .is_empty(), and having it is cheap. Besides, it makes the intent clearer than a manual comparison in some contexts.

Example

if x.len() == 0 {
    ..
}
if y.len() != 0 {
    ..
}

instead use

if x.is_empty() {
    ..
}
if !y.is_empty() {
    ..
}
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for let-bindings, which are subsequently returned.

Why is this bad?

It is just extraneous code. Remove it to make your code more rusty.

Known problems

In the case of some temporaries, e.g. locks, eliding the variable binding could lead to deadlocks. See this issue. This could become relevant if the code is later changed to use the code that would have been bound without first assigning it to a let-binding.

Example

fn foo() -> String {
    let x = String::new();
    x
}

instead, use

fn foo() -> String {
    String::new()
}
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for let _ = <expr> where the resulting type of expr implements Future

Why is this bad?

Futures must be polled for work to be done. The original intention was most likely to await the future and ignore the resulting value.

Example

async fn foo() -> Result<(), ()> {
    Ok(())
}
let _ = foo();

Use instead:

async fn foo() -> Result<(), ()> {
    Ok(())
}
let _ = foo().await;
Applicability: Unspecified(?)
Added in: 1.67.0

What it does

Checks for let _ = sync_lock. This supports mutex and rwlock in parking_lot. For std locks see the rustc lint let_underscore_lock

Why is this bad?

This statement immediately drops the lock instead of extending its lifetime to the end of the scope, which is often not intended. To extend lock lifetime to the end of the scope, use an underscore-prefixed name instead (i.e. _lock). If you want to explicitly drop the lock, std::mem::drop conveys your intention better and is less error-prone.

Example

let _ = mutex.lock();

Use instead:

let _lock = mutex.lock();
Applicability: Unspecified(?)
Added in: 1.43.0

What it does

Checks for let _ = <expr> where expr is #[must_use]

Why restrict this?

To ensure that all #[must_use] types are used rather than ignored.

Example

fn f() -> Result<u32, u32> {
    Ok(0)
}

let _ = f();
// is_ok() is marked #[must_use]
let _ = f().is_ok();
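
A hedged sketch of one way to satisfy the lint: consume the value instead of discarding it.

// Handle the Result rather than throwing it away
if f().is_err() {
    eprintln!("f() failed");
}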
Applicability: Unspecified(?)
Added in: 1.42.0

What it does

Checks for let _ = <expr> without a type annotation, and suggests to either provide one, or remove the let keyword altogether.

Why restrict this?

The let _ = <expr> expression ignores the value of <expr>, but will continue to do so even if the type were to change, thus potentially introducing subtle bugs. By supplying a type annotation, one will be forced to re-visit the decision to ignore the value in such cases.

Known problems

The _ = <expr> is not properly supported by some tools (e.g. IntelliJ) and may seem odd to many developers. This lint also partially overlaps with the other let_underscore_* lints.

Example

fn foo() -> Result<u32, ()> {
    Ok(123)
}
let _ = foo();

Use instead:

fn foo() -> Result<u32, ()> {
    Ok(123)
}
// Either provide a type annotation:
let _: Result<u32, ()> = foo();
// …or drop the let keyword:
_ = foo();
Applicability: Unspecified(?)
Added in: 1.69.0

What it does

Checks for binding a unit value.

Why is this bad?

A unit value cannot usefully be used anywhere. So binding one is kind of pointless.

Example

let x = {
    1;
};
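
Depending on intent, either drop the binding or let the block produce a value; a minimal sketch of both options (not from the original docs):

// If no value was needed, drop the binding entirely:
{
    1;
}
// If a value was intended, remove the trailing semicolon so the block yields it:
let x = {
    1
};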
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Detects when a variable is declared with an explicit type of _.

Why is this bad?

It adds noise; : _ provides no clarity or utility.

Example

let my_number: _ = 1;

Use instead:

let my_number = 1;
Applicability: Unspecified(?)
Added in: 1.70.0

What it does

Checks for usage of lines.filter_map(Result::ok) or lines.flat_map(Result::ok) when lines has type std::io::Lines.

Why is this bad?

Lines instances might produce a never-ending stream of Err, in which case filter_map(Result::ok) will enter an infinite loop while waiting for an Ok variant. Calling next() once is sufficient to enter the infinite loop, even in the absence of explicit loops in the user code.

This situation can arise when working with user-provided paths. On some platforms, std::fs::File::open(path) might return Ok(fs) even when path is a directory, but any later attempt to read from fs will return an error.

Known problems

This lint suggests replacing filter_map() or flat_map() applied to a Lines instance in all cases. There are two cases where the suggestion might not be appropriate or necessary:

  • If the Lines instance can never produce any error, or if an error is produced only once just before terminating the iterator, using map_while() is not necessary but will not do any harm.
  • If the Lines instance can produce intermittent errors then recover and produce successful results, using map_while() would stop at the first error.

Example

let mut lines = BufReader::new(File::open("some-path")?).lines().filter_map(Result::ok);
// If "some-path" points to a directory, the next statement never terminates:
let first_line: Option<String> = lines.next();

Use instead:

let mut lines = BufReader::new(File::open("some-path")?).lines().map_while(Result::ok);
let first_line: Option<String> = lines.next();
Applicability: MaybeIncorrect(?)
Added in: 1.70.0

What it does

Checks for usage of any LinkedList, suggesting to use a Vec or a VecDeque (formerly called RingBuf).

Why is this bad?

Gankra says:

The TL;DR of LinkedList is that it’s built on a massive amount of pointers and indirection. It wastes memory, it has terrible cache locality, and is all-around slow. RingBuf, while “only” amortized for push/pop, should be faster in the general case for almost every possible workload, and isn’t even amortized at all if you can predict the capacity you need.

LinkedLists are only really good if you’re doing a lot of merging or splitting of lists. This is because they can just mangle some pointers instead of actually copying the data. Even if you’re doing a lot of insertion in the middle of the list, RingBuf can still be better because of how expensive it is to seek to the middle of a LinkedList.

Known problems

False positives – the instances where using a LinkedList makes sense are few and far between, but they can still happen.

Example

let x: LinkedList<usize> = LinkedList::new();

Configuration

  • avoid-breaking-exported-api: Suppress lints whenever the suggested change would cause breakage for other crates.

    (default: true)

Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for lint groups with the same priority as lints in the Cargo.toml [lints] table.

This lint will be removed once cargo#12918 is resolved.

Why is this bad?

The order of lints in the [lints] table is ignored; to have a lint override a group, the priority field needs to be used, otherwise the sort order is undefined.

Known problems

Does not check lints inherited using lints.workspace = true

Example

[lints.clippy]
pedantic = "warn"
similar_names = "allow"

Use instead:

[lints.clippy]
pedantic = { level = "warn", priority = -1 }
similar_names = "allow"
Applicability: Unspecified(?)
Added in: 1.78.0

What it does

Checks if string literals have formatting arguments outside of macros using them (like format!).

Why is this bad?

It will likely not generate the expected content.

Example

let x: Option<usize> = None;
let y = "hello";
x.expect("{y:?}");

Use instead:

let x: Option<usize> = None;
let y = "hello";
x.expect(&format!("{y:?}"));
Applicability: Unspecified(?)
Added in: 1.83.0

What it does

Checks for the usage of the to_le_bytes method and/or the function from_le_bytes.

Why restrict this?

To ensure use of big-endian or the target’s endianness rather than little-endian.

Example

let _x = 2i32.to_le_bytes();
let _y = 2i64.to_le_bytes();
Applicability: Unspecified(?)
Added in: 1.72.0

What it does

Checks for whole number float literals that cannot be represented as the underlying type without loss.

Why restrict this?

If the value was intended to be exact, it will not be. This may be especially surprising when the lost precision is to the left of the decimal point.

Example

let _: f32 = 16_777_217.0; // 16_777_216.0

Use instead:

let _: f32 = 16_777_216.0;
let _: f64 = 16_777_217.0;
Applicability: MachineApplicable(?)
Added in: 1.43.0

What it does

Looks for macros that expand metavariables in an unsafe block.

Why is this bad?

This hides an unsafe block and allows the user of the macro to write unsafe code without an explicit unsafe block at callsite, making it possible to perform unsafe operations in seemingly safe code.

The macro should be restructured so that these metavariables are referenced outside of unsafe blocks and that the usual unsafety checks apply to the macro argument.

This is usually done by binding it to a variable outside of the unsafe block and then using that variable inside of the block as shown in the example, or by referencing it a second time in a safe context, e.g. if false { $expr }.

Known limitations

Due to how macros are represented in the compiler at the time Clippy runs its lints, it’s not possible to look for metavariables in macro definitions directly.

Instead, this lint looks at expansions of macros. This leads to false negatives for macros that are never actually invoked.

By default, this lint is rather conservative and will only emit warnings on publicly-exported macros from the same crate, because oftentimes private internal macros are one-off macros where this lint would just be noise (e.g. macros that generate impl blocks). The default behavior should help with preventing a high number of such false positives, however it can be configured to also emit warnings in private macros if desired.

Example

/// Gets the first element of a slice
macro_rules! first {
    ($slice:expr) => {
        unsafe {
            let slice = $slice; // ⚠️ expansion inside of `unsafe {}`

            assert!(!slice.is_empty());
            // SAFETY: slice is checked to have at least one element
            slice.first().unwrap_unchecked()
        }
    }
}

assert_eq!(*first!(&[1i32]), 1);

// This will compile as a consequence (note the lack of `unsafe {}`)
assert_eq!(*first!(std::hint::unreachable_unchecked() as &[i32]), 1);

Use instead:

macro_rules! first {
    ($slice:expr) => {{
        let slice = $slice; // ✅ outside of `unsafe {}`
        unsafe {
            assert!(!slice.is_empty());
            // SAFETY: slice is checked to have at least one element
            slice.first().unwrap_unchecked()
        }
    }}
}

assert_eq!(*first!(&[1]), 1);

// This won't compile:
assert_eq!(*first!(std::hint::unreachable_unchecked() as &[i32]), 1);

Configuration

  • warn-unsafe-macro-metavars-in-private-macros: Whether to also emit warnings for unsafe blocks with metavariable expansions in private macros.

    (default: false)

Applicability: Unspecified(?)
Added in: 1.80.0

What it does

Checks for #[macro_use] use....

Why is this bad?

Since the Rust 2018 edition you can import macros directly, which is considered idiomatic.

Example

#[macro_use]
use some_macro;
Applicability: MaybeIncorrect(?)
Added in: 1.44.0

What it does

Checks for recursion using the entrypoint.

Why is this bad?

Apart from special setups (which we could detect following attributes like #![no_std]), recursing into main() seems like an unintuitive anti-pattern we should be able to detect.

Example

fn main() {
    main();
}
Applicability: Unspecified(?)
Added in: 1.38.0

What it does

Detects if-then-panic! that can be replaced with assert!.

Why is this bad?

assert! is simpler than if-then-panic!.

Example

let sad_people: Vec<&str> = vec![];
if !sad_people.is_empty() {
    panic!("there are sad people: {:?}", sad_people);
}

Use instead:

let sad_people: Vec<&str> = vec![];
assert!(sad_people.is_empty(), "there are sad people: {:?}", sad_people);
Applicability: MachineApplicable(?)
Added in: 1.57.0

What it does

It checks for manual implementations of async functions.

Why is this bad?

It’s more idiomatic to use the dedicated syntax.

Example

use std::future::Future;

fn foo() -> impl Future<Output = i32> { async { 42 } }

Use instead:

async fn foo() -> i32 { 42 }
Applicability: MachineApplicable(?)
Added in: 1.45.0

What it does

Checks for usage of std::mem::size_of::<T>() * 8 when T::BITS is available.

Why is this bad?

Can be written as the shorter T::BITS.

Example

std::mem::size_of::<usize>() * 8;

Use instead:

usize::BITS as usize;

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.60.0

What it does

Checks for the manual creation of C strings (a string with a NUL byte at the end), either through one of the CStr constructor functions, or more plainly by calling .as_ptr() on a (byte) string literal with a hardcoded \0 byte at the end.

Why is this bad?

This can be written more concisely using c"str" literals and is also less error-prone, because the compiler checks for interior NUL bytes and the terminating NUL byte is inserted automatically.

Example

fn needs_cstr(_: &CStr) {}

needs_cstr(CStr::from_bytes_with_nul(b"Hello\0").unwrap());
unsafe { libc::puts("World\0".as_ptr().cast()) }

Use instead:

fn needs_cstr(_: &CStr) {}

needs_cstr(c"Hello");
unsafe { libc::puts(c"World".as_ptr()) }

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.78.0

What it does

Identifies good opportunities for a clamp function from std or core, and suggests using it.

Why is this bad?

clamp is much shorter, easier to read, and doesn’t use any control flow.

Limitations

This lint will only trigger if max and min are known at compile time, and max is greater than min.

Known issue(s)

If the clamped variable is NaN this suggestion will cause the code to propagate NaN rather than returning either max or min.

clamp functions will panic if max < min, max.is_nan(), or min.is_nan(). Some may consider panicking in these situations to be desirable, but it also may introduce panicking where there wasn’t any before.

See also the discussion in the PR.

Examples

if input > max {
    max
} else if input < min {
    min
} else {
    input
}

input.max(min).min(max)

match input {
    x if x > max => max,
    x if x < min => min,
    x => x,
}

let mut x = input;
if x < min { x = min; }
if x > max { x = max; }

Use instead:

input.clamp(min, max)

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MaybeIncorrect(?)
Added in: 1.66.0

What it does

Checks for an expression like (x + (y - 1)) / y which is a common manual reimplementation of x.div_ceil(y).

Why is this bad?

It’s simpler, clearer and more readable.

Example

let x: i32 = 7;
let y: i32 = 4;
let div = (x + (y - 1)) / y;

Use instead:

#![feature(int_roundings)]
let x: i32 = 7;
let y: i32 = 4;
let div = x.div_ceil(y);
Applicability: MachineApplicable(?)
Added in: 1.83.0

What it does

Checks for usage of match which could be implemented using filter

Why is this bad?

Using the filter method is clearer and more concise.

Example

match Some(0) {
    Some(x) => if x % 2 == 0 {
                    Some(x)
               } else {
                    None
                },
    None => None,
};

Use instead:

Some(0).filter(|&x| x % 2 == 0);
Applicability: MachineApplicable(?)
Added in: 1.66.0

What it does

Checks for usage of _.filter(_).map(_) that can be written more simply as filter_map(_).

Why is this bad?

Redundant code in the filter and map operations is poor style and less performant.

Example

(0_i32..10)
    .filter(|n| n.checked_add(1).is_some())
    .map(|n| n.checked_add(1).unwrap());

Use instead:

(0_i32..10).filter_map(|n| n.checked_add(1));

Past names

  • filter_map
Applicability: MachineApplicable(?)
Added in: 1.51.0

What it does

Checks for manual implementations of Iterator::find

Why is this bad?

It doesn’t affect performance, but using find is shorter and easier to read.

Example

fn example(arr: Vec<i32>) -> Option<i32> {
    for el in arr {
        if el == 1 {
            return Some(el);
        }
    }
    None
}

Use instead:

fn example(arr: Vec<i32>) -> Option<i32> {
    arr.into_iter().find(|&el| el == 1)
}
Applicability: MachineApplicable(?)
Added in: 1.64.0

What it does

Checks for usage of _.find(_).map(_) that can be written more simply as find_map(_).

Why is this bad?

Redundant code in the find and map operations is poor style and less performant.

Example

(0_i32..10)
    .find(|n| n.checked_add(1).is_some())
    .map(|n| n.checked_add(1).unwrap());

Use instead:

(0_i32..10).find_map(|n| n.checked_add(1));

Past names

  • find_map
Applicability: MachineApplicable(?)
Added in: 1.51.0

What it does

Checks for unnecessary if let usage in a for loop where only the Some or Ok variant of the iterator element is used.

Why is this bad?

It is verbose and can be simplified by first calling the flatten method on the Iterator.

Example

let x = vec![Some(1), Some(2), Some(3)];
for n in x {
    if let Some(n) = n {
        println!("{}", n);
    }
}

Use instead:

let x = vec![Some(1), Some(2), Some(3)];
for n in x.into_iter().flatten() {
    println!("{}", n);
}
Applicability: MaybeIncorrect(?)
Added in: 1.52.0

What it does

Checks for cases where BuildHasher::hash_one can be used.

Why is this bad?

It is more concise to use the hash_one method.

Example

use std::hash::{BuildHasher, Hash, Hasher};
use std::collections::hash_map::RandomState;

let s = RandomState::new();
let value = vec![1, 2, 3];

let mut hasher = s.build_hasher();
value.hash(&mut hasher);
let hash = hasher.finish();

Use instead:

use std::hash::BuildHasher;
use std::collections::hash_map::RandomState;

let s = RandomState::new();
let value = vec![1, 2, 3];

let hash = s.hash_one(&value);

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.75.0

What it does

Checks for manual case-insensitive ASCII comparison.

Why is this bad?

The eq_ignore_ascii_case method is faster because it does not allocate memory for the new strings, and it is more readable.

Example

fn compare(a: &str, b: &str) -> bool {
    a.to_ascii_lowercase() == b.to_ascii_lowercase() || a.to_ascii_lowercase() == "abc"
}

Use instead:

fn compare(a: &str, b: &str) -> bool {
   a.eq_ignore_ascii_case(b) || a.eq_ignore_ascii_case("abc")
}
Applicability: MachineApplicable(?)
Added in: 1.82.0

What it does

Checks for uses of map which return the original item.

Why is this bad?

inspect is both clearer in intent and shorter.

Example

let x = Some(0).map(|x| { println!("{x}"); x });

Use instead:

let x = Some(0).inspect(|x| println!("{x}"));
Applicability: MachineApplicable(?)
Added in: 1.81.0

What it does

Lints subtraction between Instant::now() and another Instant.

Why is this bad?

It is easy to accidentally write prev_instant - Instant::now(), which will always be 0ns as Instant subtraction saturates.

prev_instant.elapsed() also more clearly signals intention.

Example

use std::time::Instant;
let prev_instant = Instant::now();
let duration = Instant::now() - prev_instant;

Use instead:

use std::time::Instant;
let prev_instant = Instant::now();
let duration = prev_instant.elapsed();
Applicability: MachineApplicable(?)
Added in: 1.65.0

What it does

Suggests using the dedicated built-in methods is_ascii_(lowercase|uppercase|digit|hexdigit) for checking whether a value falls in the corresponding ASCII range

Why is this bad?

Using the built-in functions is more readable and makes it clear that it’s not a specific subset of characters, but all ASCII (lowercase|uppercase|digit|hexdigit) characters.

Example

fn main() {
    assert!(matches!('x', 'a'..='z'));
    assert!(matches!(b'X', b'A'..=b'Z'));
    assert!(matches!('2', '0'..='9'));
    assert!(matches!('x', 'A'..='Z' | 'a'..='z'));
    assert!(matches!('C', '0'..='9' | 'a'..='f' | 'A'..='F'));

    ('0'..='9').contains(&'0');
    ('a'..='z').contains(&'a');
    ('A'..='Z').contains(&'A');
}

Use instead:

fn main() {
    assert!('x'.is_ascii_lowercase());
    assert!(b'X'.is_ascii_uppercase());
    assert!('2'.is_ascii_digit());
    assert!('x'.is_ascii_alphabetic());
    assert!('C'.is_ascii_hexdigit());

    '0'.is_ascii_digit();
    'a'.is_ascii_lowercase();
    'A'.is_ascii_uppercase();
}

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.67.0

What it does

Checks for manual is_finite reimplementations (i.e., x != <float>::INFINITY && x != <float>::NEG_INFINITY).

Why is this bad?

The method is_finite is shorter and more readable.

Example

if x != f32::INFINITY && x != f32::NEG_INFINITY {}
if x.abs() < f32::INFINITY {}

Use instead:

if x.is_finite() {}
if x.is_finite() {}
Applicability: MaybeIncorrect(?)
Added in: 1.73.0

What it does

Checks for manual is_infinite reimplementations (i.e., x == <float>::INFINITY || x == <float>::NEG_INFINITY).

Why is this bad?

The method is_infinite is shorter and more readable.

Example

if x == f32::INFINITY || x == f32::NEG_INFINITY {}

Use instead:

if x.is_infinite() {}
Applicability: MachineApplicable(?)
Added in: 1.73.0

What it does

Checks for expressions like x.count_ones() == 1 or x & (x - 1) == 0, with x an unsigned integer, which may be manual reimplementations of x.is_power_of_two().

Why is this bad?

Manual reimplementations of is_power_of_two increase code complexity for little benefit.

Example

let a: u32 = 4;
let result = a.count_ones() == 1;

Use instead:

let a: u32 = 4;
let result = a.is_power_of_two();
Applicability: MachineApplicable(?)
Added in: 1.83.0

What it does

Checks for usage of option.map(f).unwrap_or_default() and result.map(f).unwrap_or_default() where f is a function or closure that returns the bool type.

Why is this bad?

Readability. These can be written more concisely as option.is_some_and(f) and result.is_ok_and(f).

Example

option.map(|a| a > 10).unwrap_or_default();
result.map(|a| a > 10).unwrap_or_default();

Use instead:

option.is_some_and(|a| a > 10);
result.is_ok_and(|a| a > 10);
Applicability: MachineApplicable(?)
Added in: 1.77.0

What it does

Warns of cases where let...else could be used

Why is this bad?

let...else provides a standard construct for this pattern that people can easily recognize. It’s also more compact.

Example

let v = if let Some(v) = w { v } else { return };

Could be written:

let Some(v) = w else { return };

Configuration

  • matches-for-let-else: Whether the matches should be considered by the lint, and whether there should be filtering for common types.

    (default: "WellKnownTypes")

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: HasPlaceholders(?)
Added in: 1.67.0

What it does

Checks for references on std::path::MAIN_SEPARATOR.to_string() used to build a &str.

Why is this bad?

There exists a std::path::MAIN_SEPARATOR_STR which does not require an extra memory allocation.

Example

let s: &str = &std::path::MAIN_SEPARATOR.to_string();

Use instead:

let s: &str = std::path::MAIN_SEPARATOR_STR;
Applicability: MachineApplicable(?)
Added in: 1.70.0

What it does

Checks for usage of match which could be implemented using map

Why is this bad?

Using the map method is clearer and more concise.

Example

match Some(0) {
    Some(x) => Some(x + 1),
    None => None,
};

Use instead:

Some(0).map(|x| x + 1);
Applicability: MachineApplicable(?)
Added in: 1.52.0

What it does

Checks for for-loops that manually copy items between slices that could be optimized by having a memcpy.

Why is this bad?

It is not as fast as a memcpy.

Example

for i in 0..src.len() {
    dst[i + 64] = src[i];
}

Use instead:

dst[64..(src.len() + 64)].clone_from_slice(&src[..]);
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for .rev().next() on a DoubleEndedIterator

Why is this bad?

.next_back() is cleaner.

Example

foo.iter().rev().next();

Use instead:

foo.iter().next_back();
Applicability: MachineApplicable(?)
Added in: 1.71.0

What it does

Checks for manual implementations of the non-exhaustive pattern.

Why is this bad?

Using the #[non_exhaustive] attribute expresses the intent better and allows possible optimizations when applied to enums.

Example

struct S {
    pub a: i32,
    pub b: i32,
    _c: (),
}

enum E {
    A,
    B,
    #[doc(hidden)]
    _C,
}

struct T(pub i32, pub i32, ());

Use instead:

#[non_exhaustive]
struct S {
    pub a: i32,
    pub b: i32,
}

#[non_exhaustive]
enum E {
    A,
    B,
}

#[non_exhaustive]
struct T(pub i32, pub i32);

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MaybeIncorrect(?)
Added in: 1.45.0

What it does

Finds patterns that reimplement Option::ok_or.

Why is this bad?

Concise code helps focusing on behavior instead of boilerplate.

Examples

let foo: Option<i32> = None;
foo.map_or(Err("error"), |v| Ok(v));

Use instead:

let foo: Option<i32> = None;
foo.ok_or("error");
Applicability: MachineApplicable(?)
Added in: 1.49.0

What it does

Checks for manual char comparison in string patterns

Why is this bad?

This can be written more concisely using a char or an array of char. This is more readable and more optimized when comparing to only one char.

Example

"Hello World!".trim_end_matches(|c| c == '.' || c == ',' || c == '!' || c == '?');

Use instead:

"Hello World!".trim_end_matches(['.', ',', '!', '?']);

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.81.0

What it does

Checks for expressions like x >= 3 && x < 8 that could be more readably expressed as (3..8).contains(&x).

Why is this bad?

contains expresses the intent better and has fewer failure modes (such as fencepost errors or using || instead of &&).

Example

// given
let x = 6;

assert!(x >= 3 && x < 8);

Use instead:

assert!((3..8).contains(&x));

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.49.0

What it does

Looks for combined OR patterns that are all contained in a specific range, e.g. 6 | 4 | 5 | 9 | 7 | 8 can be rewritten as 4..=9.

Why is this bad?

Using an explicit range is more concise and easier to read.

Known issues

This lint intentionally does not handle numbers greater than i128::MAX for u128 literals in order to support negative numbers.

Example

let x = 6;
let foo = matches!(x, 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10);

Use instead:

let x = 6;
let foo = matches!(x, 1..=10);
Applicability: MachineApplicable(?)
Added in: 1.72.0

What it does

Checks for an expression like ((x % 4) + 4) % 4 which is a common manual reimplementation of x.rem_euclid(4).

Why is this bad?

It’s simpler and more readable.

Example

let x: i32 = 24;
let rem = ((x % 4) + 4) % 4;

Use instead:

let x: i32 = 24;
let rem = x.rem_euclid(4);

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.64.0

What it does

Checks for code to be replaced by .retain().

Why is this bad?

.retain() is simpler and avoids needless allocation.

Example

let mut vec = vec![0, 1, 2];
vec = vec.iter().filter(|&x| x % 2 == 0).copied().collect();
vec = vec.into_iter().filter(|x| x % 2 == 0).collect();

Use instead:

let mut vec = vec![0, 1, 2];
vec.retain(|x| x % 2 == 0);
vec.retain(|x| x % 2 == 0);

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.64.0

What it does

It detects manual bit rotations that could be rewritten using standard functions rotate_left or rotate_right.

Why is this bad?

Calling the function better conveys the intent.

Known issues

Currently, the lint only catches shifts by constant amount.

Example

let x = 12345678_u32;
let _ = (x >> 8) | (x << 24);

Use instead:

let x = 12345678_u32;
let _ = x.rotate_right(8);
Applicability: MachineApplicable(?)
Added in: 1.81.0

What it does

Checks for .checked_add/sub(x).unwrap_or(MAX/MIN).

Why is this bad?

These can be written simply with saturating_add/sub methods.

Example

let add = x.checked_add(y).unwrap_or(u32::MAX);
let sub = x.checked_sub(y).unwrap_or(u32::MIN);

can be written using dedicated methods for saturating addition/subtraction as:

let add = x.saturating_add(y);
let sub = x.saturating_sub(y);
Applicability: MachineApplicable(?)
Added in: 1.39.0

What it does

When a is &[T], detect a.len() * size_of::<T>() and suggest size_of_val(a) instead.

Why is this better?

  • Shorter to write
  • Removes the need for the human and the compiler to worry about overflow in the multiplication
  • Potentially faster at runtime as rust emits special no-wrapping flags when it calculates the byte length
  • Less turbofishing

Example

let newlen = data.len() * std::mem::size_of::<i32>();

Use instead:

let newlen = std::mem::size_of_val(data);
Applicability: MachineApplicable(?)
Added in: 1.70.0

What it does

Checks for usage of str::splitn(2, _)

Why is this bad?

split_once is both clearer in intent and slightly more efficient.

Example

let s = "key=value=add";
let (key, value) = s.splitn(2, '=').next_tuple()?;
let value = s.splitn(2, '=').nth(1)?;

let mut parts = s.splitn(2, '=');
let key = parts.next()?;
let value = parts.next()?;

Use instead:

let s = "key=value=add";
let (key, value) = s.split_once('=')?;
let value = s.split_once('=')?.1;

let (key, value) = s.split_once('=')?;

Limitations

The multiple statement variant currently only detects iter.next()?/iter.next().unwrap() in two separate let statements that immediately follow the splitn()

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.57.0

What it does

Checks for manual implementations of str::repeat

Why is this bad?

These are both harder to read, as well as less performant.

Example

let x: String = std::iter::repeat('x').take(10).collect();

Use instead:

let x: String = "x".repeat(10);

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.54.0

What it does

Checks for usage of "" to create a String, such as "".to_string(), "".to_owned(), String::from("") and others.

Why is this bad?

Different ways of creating an empty string makes your code less standardized, which can be confusing.

Example

let a = "".to_string();
let b: String = "".into();

Use instead:

let a = String::new();
let b = String::new();
Applicability: MachineApplicable(?)
Added in: 1.65.0

What it does

Suggests using strip_{prefix,suffix} over str::{starts,ends}_with and slicing using the pattern’s length.

Why is this bad?

Using str::strip_{prefix,suffix} is safer and may have better performance as there is no slicing which may panic and the compiler does not need to insert this panic code. It is also sometimes more readable as it removes the need for duplicating or storing the pattern used by str::{starts,ends}_with and in the slicing.

Example

let s = "hello, world!";
if s.starts_with("hello, ") {
    assert_eq!(s["hello, ".len()..].to_uppercase(), "WORLD!");
}

Use instead:

let s = "hello, world!";
if let Some(end) = s.strip_prefix("hello, ") {
    assert_eq!(end.to_uppercase(), "WORLD!");
}

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: HasPlaceholders(?)
Added in: 1.48.0

What it does

Checks for manual swapping.

Note that the lint will not be emitted in const blocks, as the suggestion would not be applicable.

Why is this bad?

The std::mem::swap function exposes the intent better without deinitializing or copying either variable.

Example

let mut a = 42;
let mut b = 1337;

let t = b;
b = a;
a = t;

Use std::mem::swap():

let mut a = 1;
let mut b = 2;
std::mem::swap(&mut a, &mut b);
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for usage of Iterator::fold with a type that implements Try.

Why is this bad?

The code should use try_fold instead, which short-circuits on failure, thus opening the door for additional optimizations not possible with fold as rustc can guarantee the function is never called on None, Err, etc., alleviating otherwise necessary checks. It’s also slightly more idiomatic.

Known issues

This lint doesn’t take into account whether a function does something on the failure case, i.e., whether short-circuiting will affect behavior. Refactoring to try_fold is not desirable in those cases.

Example

vec![1, 2, 3].iter().fold(Some(0i32), |sum, i| sum?.checked_add(*i));

Use instead:

vec![1, 2, 3].iter().try_fold(0i32, |sum, i| sum.checked_add(*i));

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: HasPlaceholders(?)
Added in: 1.72.0

What it does

Finds patterns that reimplement Option::unwrap_or or Result::unwrap_or.

Why is this bad?

Concise code helps focusing on behavior instead of boilerplate.

Example

let foo: Option<i32> = None;
match foo {
    Some(v) => v,
    None => 1,
};

Use instead:

let foo: Option<i32> = None;
foo.unwrap_or(1);
Applicability: MachineApplicable(?)
Added in: 1.49.0

What it does

Checks if a match or if let expression can be simplified using .unwrap_or_default().

Why is this bad?

It can be done in one call with .unwrap_or_default().

Example

let x: Option<String> = Some(String::new());
let y: String = match x {
    Some(v) => v,
    None => String::new(),
};

let x: Option<Vec<String>> = Some(Vec::new());
let y: Vec<String> = if let Some(v) = x {
    v
} else {
    Vec::new()
};

Use instead:

let x: Option<String> = Some(String::new());
let y: String = x.unwrap_or_default();

let x: Option<Vec<String>> = Some(Vec::new());
let y: Vec<String> = x.unwrap_or_default();
Applicability: MachineApplicable(?)
Added in: 1.79.0

What it does

Looks for loops that check for emptiness of a Vec in the condition and pop an element in the body as a separate operation.

Why is this bad?

Such loops can be written in a more idiomatic way by using a while-let loop and directly pattern matching on the return value of Vec::pop().

Example

let mut numbers = vec![1, 2, 3, 4, 5];
while !numbers.is_empty() {
    let number = numbers.pop().unwrap();
    // use `number`
}

Use instead:

let mut numbers = vec![1, 2, 3, 4, 5];
while let Some(number) = numbers.pop() {
    // use `number`
}
Applicability: MachineApplicable(?)
Added in: 1.71.0

What it does

Checks for too many variables whose name consists of a single character.

Why is this bad?

It’s hard to memorize what a variable means without a descriptive name.

Example

let (a, b, c, d, e, f, g) = (...);

Configuration

  • single-char-binding-names-threshold: The maximum number of single char bindings a scope may have

    (default: 4)

Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for usage of .map(…), followed by .all(identity) or .any(identity).

Why is this bad?

The .all(…) or .any(…) methods can be called directly in place of .map(…).

Example

let e1 = v.iter().map(|s| s.is_empty()).all(|a| a);
let e2 = v.iter().map(|s| s.is_empty()).any(std::convert::identity);

Use instead:

let e1 = v.iter().all(|s| s.is_empty());
let e2 = v.iter().any(|s| s.is_empty());
Applicability: MachineApplicable(?)
Added in: 1.84.0

What it does

Checks for usage of map(|x| x.clone()) or dereferencing closures for Copy types, on Iterator or Option, and suggests cloned() or copied() instead

Why is this bad?

Readability, this can be written more concisely

Example

let x = vec![42, 43];
let y = x.iter();
let z = y.map(|i| *i);

The correct use would be:

let x = vec![42, 43];
let y = x.iter();
let z = y.cloned();

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for usage of _.map(_).collect::<Result<(), _>>().

Why is this bad?

Using try_for_each instead is more readable and idiomatic.

Example

(0..3).map(|t| Err(t)).collect::<Result<(), _>>();

Use instead:

(0..3).try_for_each(|t| Err(t));
Applicability: MachineApplicable(?)
Added in: 1.49.0

What it does

Checks for usage of contains_key + insert on HashMap or BTreeMap.

Why is this bad?

Using entry is more efficient.

Known problems

The suggestion may have type inference errors in some cases. e.g.

let mut map = std::collections::HashMap::new();
let _ = if !map.contains_key(&0) {
    map.insert(0, 0)
} else {
    None
};

Example

if !map.contains_key(&k) {
    map.insert(k, v);
}

Use instead:

map.entry(k).or_insert(v);
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for instances of map_err(|_| Some::Enum)

Why restrict this?

This map_err throws away the original error rather than allowing the enum to contain and report the cause of the error.

Example

Before:

use std::fmt;

#[derive(Debug)]
enum Error {
    Indivisible,
    Remainder(u8),
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Error::Indivisible => write!(f, "could not divide input by three"),
            Error::Remainder(remainder) => write!(
                f,
                "input is not divisible by three, remainder = {}",
                remainder
            ),
        }
    }
}

impl std::error::Error for Error {}

fn divisible_by_3(input: &str) -> Result<(), Error> {
    input
        .parse::<i32>()
        .map_err(|_| Error::Indivisible)
        .map(|v| v % 3)
        .and_then(|remainder| {
            if remainder == 0 {
                Ok(())
            } else {
                Err(Error::Remainder(remainder as u8))
            }
        })
}

After:

use std::{fmt, num::ParseIntError};

#[derive(Debug)]
enum Error {
    Indivisible(ParseIntError),
    Remainder(u8),
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Error::Indivisible(_) => write!(f, "could not divide input by three"),
            Error::Remainder(remainder) => write!(
                f,
                "input is not divisible by three, remainder = {}",
                remainder
            ),
        }
    }
}

impl std::error::Error for Error {
    fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
        match self {
            Error::Indivisible(source) => Some(source),
            _ => None,
        }
    }
}

fn divisible_by_3(input: &str) -> Result<(), Error> {
    input
        .parse::<i32>()
        .map_err(Error::Indivisible)
        .map(|v| v % 3)
        .and_then(|remainder| {
            if remainder == 0 {
                Ok(())
            } else {
                Err(Error::Remainder(remainder as u8))
            }
        })
}
Applicability: Unspecified(?)
Added in: 1.48.0

What it does

Checks for usage of _.map(_).flatten(_) on Iterator and Option

Why is this bad?

Readability, this can be written more concisely as _.flat_map(_) for Iterator or _.and_then(_) for Option

Example

let vec = vec![vec![1]];
let opt = Some(5);

vec.iter().map(|x| x.iter()).flatten();
opt.map(|x| Some(x * 2)).flatten();

Use instead:

vec.iter().flat_map(|x| x.iter());
opt.and_then(|x| Some(x * 2));
Applicability: MachineApplicable(?)
Added in: 1.31.0

What it does

Checks for instances of map(f) where f is the identity function.

Why is this bad?

It can be written more concisely without the call to map.

Example

let x = [1, 2, 3];
let y: Vec<_> = x.iter().map(|x| x).map(|x| 2*x).collect();

Use instead:

let x = [1, 2, 3];
let y: Vec<_> = x.iter().map(|x| 2*x).collect();
Applicability: MachineApplicable(?)
Added in: 1.47.0

What it does

Checks for usage of option.map(_).unwrap_or(_) or option.map(_).unwrap_or_else(_) or result.map(_).unwrap_or_else(_).

Why is this bad?

Readability, these can be written more concisely (resp.) as option.map_or(_, _), option.map_or_else(_, _) and result.map_or_else(_, _).

Known problems

The order of the arguments is not in execution order

Examples

option.map(|a| a + 1).unwrap_or(0);
option.map(|a| a > 10).unwrap_or(false);
result.map(|a| a + 1).unwrap_or_else(some_function);

Use instead:

option.map_or(0, |a| a + 1);
option.is_some_and(|a| a > 10);
result.map_or_else(some_function, |a| a + 1);

Past names

  • option_map_unwrap_or
  • option_map_unwrap_or_else
  • result_map_unwrap_or_else

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.45.0

What it does

Checks for Iterator::map over ranges without using the parameter which could be more clearly expressed using std::iter::repeat(...).take(...) or std::iter::repeat_n.

Why is this bad?

It expresses the intent more clearly to take the correct number of times from a generating function than to apply a closure to each number in a range only to discard them.

Example

let random_numbers : Vec<_> = (0..10).map(|_| { 3 + 1 }).collect();

Use instead:

let f : Vec<_> = std::iter::repeat( 3 + 1 ).take(10).collect();

Known Issues

This lint may suggest replacing a Map<Range> with a Take<RepeatWith>. The former implements some traits that the latter does not, such as DoubleEndedIterator.

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MaybeIncorrect(?)
Added in: 1.84.0

What it does

Checks for match which is used to add a reference to an Option value.

Why is this bad?

Using as_ref() or as_mut() instead is shorter.

Example

let x: Option<()> = None;

let r: Option<&()> = match x {
    None => None,
    Some(ref v) => Some(v),
};

Use instead:

let x: Option<()> = None;

let r: Option<&()> = x.as_ref();
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for matches where the match expression is a bool. It suggests replacing the expression with an if...else block.

Why is this bad?

It makes the code less readable.

Example

let condition: bool = true;
match condition {
    true => foo(),
    false => bar(),
}

Use if/else instead:

let condition: bool = true;
if condition {
    foo();
} else {
    bar();
}
Applicability: HasPlaceholders(?)
Added in: pre 1.29.0

What it does

Checks for match or if let expressions producing a bool that could be written using matches!

Why is this bad?

Readability and needless complexity.

Known problems

This lint falsely triggers if there are arms with cfg attributes that remove an arm evaluating to false.

Example

let x = Some(5);

let a = match x {
    Some(0) => true,
    _ => false,
};

let a = if let Some(0) = x {
    true
} else {
    false
};

Use instead:

let x = Some(5);
let a = matches!(x, Some(0));

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MaybeIncorrect(?)
Added in: 1.47.0

What it does

Checks for match vec[idx] or match vec[n..m].

Why is this bad?

This can panic at runtime.

Example

let arr = vec![0, 1, 2, 3];
let idx = 1;

match arr[idx] {
    0 => println!("{}", 0),
    1 => println!("{}", 3),
    _ => {},
}

Use instead:

let arr = vec![0, 1, 2, 3];
let idx = 1;

match arr.get(idx) {
    Some(0) => println!("{}", 0),
    Some(1) => println!("{}", 3),
    _ => {},
}
Applicability: MaybeIncorrect(?)
Added in: 1.45.0

What it does

Checks for overlapping match arms.

Why is this bad?

It is likely to be an error and if not, makes the code less obvious.

Example

let x = 5;
match x {
    1..=10 => println!("1 ... 10"),
    5..=15 => println!("5 ... 15"),
    _ => (),
}
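
A possible rewrite with non-overlapping ranges (a sketch, assuming the second arm was meant to start where the first one ends):

let x = 5;
match x {
    1..=4 => println!("1 ... 4"),
    5..=15 => println!("5 ... 15"),
    _ => (),
}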
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for matches where all arms match a reference, suggesting to remove the reference and deref the matched expression instead. It also checks for if let &foo = bar blocks.

Why is this bad?

It just makes the code less readable. That reference destructuring adds nothing to the code.

Example

match x {
    &A(ref y) => foo(y),
    &B => bar(),
    _ => frob(&x),
}

Use instead:

match *x {
    A(ref y) => foo(y),
    B => bar(),
    _ => frob(x),
}
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for unnecessary ok() in while let.

Why is this bad?

Calling ok() in while let is unnecessary; instead, match on Ok(pat).

Example

while let Some(value) = iter.next().ok() {
    vec.push(value)
}

if let Some(value) = iter.next().ok() {
    vec.push(value)
}

Use instead:

while let Ok(value) = iter.next() {
    vec.push(value)
}

if let Ok(value) = iter.next() {
       vec.push(value)
}

Past names

  • if_let_some_result
Applicability: MachineApplicable(?)
Added in: 1.57.0

What it does

Checks for match with identical arm bodies.

Note: Does not lint on wildcards if the non_exhaustive_omitted_patterns_lint feature is enabled and disallowed.

Why is this bad?

This is probably a copy & paste error. If arm bodies are the same on purpose, you can factor them using |.

Known problems

False positive possible with order dependent match (see issue #860).

Example

match foo {
    Bar => bar(),
    Quz => quz(),
    Baz => bar(), // <= oops
}

This should probably be

match foo {
    Bar => bar(),
    Quz => quz(),
    Baz => baz(), // <= fixed
}

or if the original code was not a typo:

match foo {
    Bar | Baz => bar(), // <= shows the intent better
    Quz => quz(),
}
Applicability: MaybeIncorrect(?)
Added in: pre 1.29.0

What it does

Checks for useless match that binds to only one value.

Why is this bad?

Readability and needless complexity.

Known problems

Suggested replacements may be incorrect when the match is actually binding a temporary value, leading to a ‘dropped while borrowed’ error.

Example

match (a, b) {
    (c, d) => {
        // useless match
    }
}

Use instead:

let (c, d) = (a, b);
Applicability: MachineApplicable(?)
Added in: 1.43.0

What it does

Checks for match expressions modifying the case of a string with non-compliant arms

Why is this bad?

The arm is unreachable, which is likely a mistake

Example

match &*text.to_ascii_lowercase() {
    "foo" => {},
    "Bar" => {},
    _ => {},
}

Use instead:

match &*text.to_ascii_lowercase() {
    "foo" => {},
    "bar" => {},
    _ => {},
}
Applicability: MachineApplicable(?)
Added in: 1.58.0

What it does

Checks for arm which matches all errors with Err(_) and take drastic actions like panic!.

Why is this bad?

It is generally a bad practice, similar to catching all exceptions in Java with catch(Exception).

Example

let x: Result<i32, &str> = Ok(3);
match x {
    Ok(_) => println!("ok"),
    Err(_) => panic!("err"),
}
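
A gentler alternative (sketch, assuming the error should be reported instead of crashing the program):

let x: Result<i32, &str> = Ok(3);
match x {
    Ok(_) => println!("ok"),
    Err(e) => eprintln!("error: {e}"),
}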
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for wildcard enum matches for a single variant.

Why is this bad?

New enum variants added by library updates can be missed.

Known problems

Suggested replacements may not use the correct path to the enum if it’s not present in the current scope.

Example

match x {
    Foo::A => {},
    Foo::B => {},
    _ => {},
}

Use instead:

match x {
    Foo::A => {},
    Foo::B => {},
    Foo::C => {},
}
Applicability: MaybeIncorrect(?)
Added in: 1.45.0

What it does

Checks for iteration that may be infinite.

Why is this bad?

While there may be places where this is acceptable (e.g., in event streams), in most cases this is simply an error.

Known problems

The code may have a condition to stop iteration, but this lint is not clever enough to analyze it.

Example

let infinite_iter = 0..;
[0..].iter().zip(infinite_iter.take_while(|x| *x > 5));
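
An explicit bound makes termination obvious; a minimal sketch (not from the original docs):

let infinite_iter = 0..;
// `take(n)` guarantees at most `n` items are produced
let bounded: Vec<_> = infinite_iter.take(10).collect();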
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for usage of std::mem::forget(t) where t is Drop or has a field that implements Drop.

Why restrict this?

std::mem::forget(t) prevents t from running its destructor, possibly causing leaks. It is not possible to detect all means of creating leaks, but it may be desirable to prohibit the simple ones.

Example

mem::forget(Rc::new(55))
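
If leaking is genuinely intended, it can be expressed more explicitly; a hedged sketch of common alternatives:

// Explicitly leak a heap allocation and get a `'static` reference back
let leaked: &'static i32 = Box::leak(Box::new(55));
// Or make skipping the destructor visible in the type
let kept_alive = std::mem::ManuallyDrop::new(std::rc::Rc::new(55));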
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for mem::replace() on an Option with None.

Why is this bad?

Option already has the method take() for taking its current value (Some(..) or None) and replacing it with None.

Example

use std::mem;

let mut an_option = Some(0);
let replaced = mem::replace(&mut an_option, None);

Is better expressed with:

let mut an_option = Some(0);
let taken = an_option.take();
Applicability: MachineApplicable(?)
Added in: 1.31.0

What it does

Checks for std::mem::replace on a value of type T with T::default().

Why is this bad?

std::mem module already has the method take to take the current value and replace it with the default value of that type.

Example

let mut text = String::from("foo");
let replaced = std::mem::replace(&mut text, String::default());

Is better expressed with:

let mut text = String::from("foo");
let taken = std::mem::take(&mut text);

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.42.0

What it does

Checks for mem::replace(&mut _, mem::uninitialized()) and mem::replace(&mut _, mem::zeroed()).

Why is this bad?

This will lead to undefined behavior even if the value is overwritten later, because the uninitialized value may be observed in the case of a panic.

Example

use std::mem;

#[allow(deprecated, invalid_value)]
fn myfunc (v: &mut Vec<i32>) {
    let taken_v = unsafe { mem::replace(v, mem::uninitialized()) };
    let new_v = may_panic(taken_v); // undefined behavior on panic
    mem::forget(mem::replace(v, new_v));
}

The take_mut crate offers a sound solution, at the cost of either lazily creating a replacement value or aborting on panic, to ensure that the uninitialized value cannot be observed.
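
For types that implement Default (such as Vec), std::mem::take is a sound alternative that needs no unsafe code; a minimal sketch reusing the may_panic function from the example above:

use std::mem;

fn myfunc(v: &mut Vec<i32>) {
    // `take` replaces `*v` with an empty Vec, so no uninitialized value can ever be observed
    let taken_v = mem::take(v);
    let new_v = may_panic(taken_v); // a panic here leaves `v` as a valid empty Vec
    *v = new_v;
}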

Applicability: MachineApplicable(?)
Added in: 1.39.0

What it does

Checks for identifiers which consist of a single character (or fewer than the configured threshold).

Note: This lint can be very noisy when enabled; it may be desirable to only enable it temporarily.

Why restrict this?

To improve readability by requiring that every variable has a name more specific than a single letter can be.

Example

for m in movies {
    let title = m.t;
}

Use instead:

for movie in movies {
    let title = movie.title;
}

Configuration

  • allowed-idents-below-min-chars: Allowed names below the minimum allowed characters. The value ".." can be used as part of the list to indicate, that the configured values should be appended to the default configuration of Clippy. By default, any configuration will replace the default value.

    (default: ["i", "j", "x", "y", "z", "w", "n"])

  • min-ident-chars-threshold: Minimum chars an ident can have, anything below or equal to this will be linted.

    (default: 1)

Applicability: Unspecified(?)
Added in: 1.72.0

What it does

Checks for expressions where std::cmp::min and max are used to clamp values, but switched so that the result is constant.

Why is this bad?

This is in all probability not the intended outcome. At the least it hurts readability of the code.

Example

min(0, max(100, x))

// or

x.max(100).min(0)

It will always be equal to 0. Probably the author meant to clamp the value between 0 and 100, but has erroneously swapped min and max.
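
A corrected version that actually clamps between 0 and 100 (sketch):

// The bounds must be ordered so the result can still vary with `x`
x.max(0).min(100)

// or, on Rust 1.50 and later, more explicitly:
x.clamp(0, 100)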

Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Nothing. This lint has been deprecated

Deprecation reason

Split into clippy::cast_ptr_alignment and clippy::transmute_ptr_to_ptr.

Applicability: Unspecified(?)
Deprecated in: pre 1.29.0

What it does

Checks for type parameters which are positioned inconsistently between a type definition and impl block. Specifically, a parameter in an impl block which has the same name as a parameter in the type def, but is in a different place.

Why is this bad?

Type parameters are determined by their position rather than name. Naming type parameters inconsistently may cause you to refer to the wrong type parameter.

Limitations

This lint only applies to impl blocks with simple generic params, e.g. A. If there is anything more complicated, such as a tuple, it will be ignored.

Example

struct Foo<A, B> {
    x: A,
    y: B,
}
// inside the impl, B refers to Foo::A
impl<B, A> Foo<B, A> {}

Use instead:

struct Foo<A, B> {
    x: A,
    y: B,
}
impl<A, B> Foo<A, B> {}
Applicability: Unspecified(?)
Added in: 1.63.0

What it does

Checks for getter methods that return a field that doesn’t correspond to the name of the method, when there is a field whose name matches that of the method.

Why is this bad?

It is most likely that such a method is a bug caused by a typo or by copy-pasting.

Example

struct A {
    a: String,
    b: String,
}

impl A {
    fn a(&self) -> &str {
        &self.b
    }
}

Use instead:

struct A {
    a: String,
    b: String,
}

impl A {
    fn a(&self) -> &str {
        &self.a
    }
}
Applicability: MaybeIncorrect(?)
Added in: 1.67.0

What it does

Checks for a op= a op b or a op= b op a patterns.

Why is this bad?

Most likely these are bugs where one meant to write a op= b.

Known problems

Clippy cannot know for sure whether a op= a op b should have been a = a op a op b or a = a op b (equivalently, a op= b). Therefore, it suggests both. If a op= a op b really is the correct behavior, it should be written as a = a op a op b, as that is less confusing.

Example

let mut a = 5;
let b = 2;
// ...
a += a + b;
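As noted above, the lint suggests both possible intents; a sketch of the two corrected forms:

// if `b` was meant to be added once:
a += b;
// if doubling `a` really was intended, spell it out:
a = a + a + b;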
Applicability: MaybeIncorrect(?)
Added in: pre 1.29.0

What it does

Checks assertions without a custom panic message.

Why restrict this?

Without a good custom message, it’d be hard to understand what went wrong when the assertion fails. A good custom message should be more about why the failure of the assertion is problematic and not what failed, because the assertion itself already conveys that.

Although the same reasoning applies to testing functions, this lint ignores them as they would be too noisy. Also, in most cases understanding the test failure would be easier compared to understanding a complex invariant distributed around the codebase.

Known problems

This lint cannot check the quality of the custom panic messages. Hence, you can suppress this lint simply by adding placeholder messages like “assertion failed”. However, we recommend coming up with good messages that provide useful information instead of placeholder messages that don’t provide any extra information.

Example

fn call(service: Service) {
    assert!(service.ready);
}

Use instead:

fn call(service: Service) {
    assert!(service.ready, "`service.poll_ready()` must be called first to ensure that service is ready to receive requests");
}
Applicability: Unspecified(?)
Added in: 1.70.0

What it does

Checks for repeated slice indexing without asserting beforehand that the length is greater than the largest index used to index into the slice.

Why restrict this?

In the general case where the compiler does not have a lot of information about the length of a slice, indexing it repeatedly will generate a bounds check for every single index.

Asserting that the length of the slice is at least as large as the largest value to index beforehand gives the compiler enough information to elide the bounds checks, effectively reducing the number of bounds checks from however many times the slice was indexed to just one (the assert).

Drawbacks

False positives. It is, in general, very difficult to predict how well the optimizer will be able to elide bounds checks and it very much depends on the surrounding code. For example, indexing into the slice yielded by the slice::chunks_exact iterator will likely have all of the bounds checks elided even without an assert if the chunk_size is a constant.

Asserts are not tracked across function calls. Asserting the length of a slice in a different function likely gives the optimizer enough information about the length of a slice, but this lint will not detect that.

Example

fn sum(v: &[u8]) -> u8 {
    // 4 bounds checks
    v[0] + v[1] + v[2] + v[3]
}

Use instead:

fn sum(v: &[u8]) -> u8 {
    assert!(v.len() > 3);
    // no bounds checks
    v[0] + v[1] + v[2] + v[3]
}
Applicability: MachineApplicable(?)
Added in: 1.74.0

What it does

Suggests the use of const in functions and methods where possible.

Why is this bad?

Not having the function const prevents callers of the function from being const as well.

Known problems

Const functions are currently still being worked on, with some features only being available on nightly. This lint does not consider all edge cases currently and the suggestions may be incorrect if you are using this lint on stable.

Also, the lint only runs one pass over the code. Consider these two non-const functions:

fn a() -> i32 {
    0
}
fn b() -> i32 {
    a()
}

When running Clippy, the lint will at first only suggest making a const, because at that point b can’t be const, as it calls the still non-const function a. Making a const and running Clippy again will then also suggest making b const.
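A sketch of the end state after applying the suggestion twice, first to a and then to b:

const fn a() -> i32 {
    0
}
const fn b() -> i32 {
    a()
}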

If you are marking a public function with const, removing it again will break API compatibility.

Example

fn new() -> Self {
    Self { random_number: 42 }
}

Could be a const fn:

const fn new() -> Self {
    Self { random_number: 42 }
}

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: 1.34.0

What it does

Suggests using const in the thread_local! macro where possible.

Why is this bad?

The thread_local! macro wraps static declarations and makes them thread-local. It supports using a const keyword that may be used for declarations that can be evaluated as a constant expression. This can enable a more efficient thread local implementation that can avoid lazy initialization. For types that do not need to be dropped, this can enable an even more efficient implementation that does not need to track any additional state.

https://doc.rust-lang.org/std/macro.thread_local.html

Example

thread_local! {
    static BUF: String = String::new();
}

Use instead:

thread_local! {
    static BUF: String = const { String::new() };
}

Past names

  • thread_local_initializer_can_be_made_const
Applicability: MachineApplicable(?)
Added in: 1.77.0

What it does

Warns if there is missing documentation for any private documentable item.

Why restrict this?

Documentation is good. rustc has an allowed-by-default MISSING_DOCS lint for public members, but it has no way to enforce documentation of private items. This lint fixes that.
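A minimal sketch of what the lint flags (illustrative names; the lint documentation itself provides no example):

/// Returns the widget count.
fn documented() -> usize { 0 }

fn undocumented() -> usize { 0 } // linted: missing documentation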

Configuration

  • missing-docs-in-crate-items: Whether to only check for missing documentation in items visible within the current crate. For example, pub(crate) items.

    (default: false)

Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for imports that do not rename the item as specified in the enforced-import-renames config option.

Note: Even though this lint is warn-by-default, it will only trigger if import renames are defined in the clippy.toml file.

Why is this bad?

Consistency is important; if a project has defined import renames, then they should be followed. More practically, some item names are too vague outside of their defining scope, in which case this can enforce a more meaningful naming.

Example

An example clippy.toml configuration:

enforced-import-renames = [
    { path = "serde_json::Value", rename = "JsonValue" },
]
use serde_json::Value;

Use instead:

use serde_json::Value as JsonValue;

Configuration

  • enforced-import-renames: The list of imports to always rename, a fully qualified path followed by the rename.

    (default: [])

Applicability: MachineApplicable(?)
Added in: 1.55.0

What it does

Checks the doc comments of publicly visible functions that return a Result type and warns if there is no # Errors section.

Why is this bad?

Documenting the type of errors that can be returned from a function can help callers write code to handle the errors appropriately.

Examples

Since the following function returns a Result it has an # Errors section in its doc comment:

/// # Errors
///
/// Will return `Err` if `filename` does not exist or the user does not have
/// permission to read it.
pub fn read(filename: String) -> io::Result<String> {
    unimplemented!();
}

Configuration

  • check-private-items: Whether to also run the listed lints on private items.

    (default: false)

Applicability: Unspecified(?)
Added in: 1.41.0

What it does

Checks for manual core::fmt::Debug implementations that do not use all fields.

Why is this bad?

A common mistake is to forget to update manual Debug implementations when adding a new field to a struct or a new variant to an enum.

At the same time, it also acts as a style lint to suggest using core::fmt::DebugStruct::finish_non_exhaustive for the times when the user intentionally wants to leave out certain fields (e.g. to hide implementation details).

Known problems

This lint works based on the DebugStruct helper types provided by the Formatter, so it won’t detect Debug impls that use the write! macro. Oftentimes there is more logic to a Debug impl that uses the write! macro, so the lint errs on the conservative side and does not lint those cases, in an attempt to prevent false positives.

This lint also does not look through function calls, so calling a function does not consider fields used inside of that function as used by the Debug impl.

Lastly, it also ignores tuple structs as their DebugTuple formatter does not have a finish_non_exhaustive method, as well as enums because their exhaustiveness is already checked by the compiler when matching on the enum, making it much less likely to accidentally forget to update the Debug impl when adding a new variant.

Example

use std::fmt;
struct Foo {
    data: String,
    // implementation detail
    hidden_data: i32
}
impl fmt::Debug for Foo {
    fn fmt(&self, formatter: &mut fmt::Formatter<'_>) -> fmt::Result {
        formatter
            .debug_struct("Foo")
            .field("data", &self.data)
            .finish()
    }
}

Use instead:

use std::fmt;
struct Foo {
    data: String,
    // implementation detail
    hidden_data: i32
}
impl fmt::Debug for Foo {
    fn fmt(&self, formatter: &mut fmt::Formatter<'_>) -> fmt::Result {
        formatter
            .debug_struct("Foo")
            .field("data", &self.data)
            .finish_non_exhaustive()
    }
}
Applicability: Unspecified(?)
Added in: 1.70.0

What it does

It lints if an exported function, method, trait method with default impl, or trait method impl is not #[inline].

Why restrict this?

If a function is not marked #[inline], is not a “small” candidate for automatic inlining, and LTO is not in use, then it is not possible for the function to be inlined into the code of any crate other than the one in which it is defined. Depending on the role of the function and the relationship of the crates, this could significantly reduce performance.

Certain types of crates might intend for most of the methods in their public API to be able to be inlined across crates even when LTO is disabled. This lint allows those crates to require all exported methods to be #[inline] by default, and then opt out for specific methods where this might not make sense.

Example

pub fn foo() {} // missing #[inline]
fn ok() {} // ok
#[inline] pub fn bar() {} // ok
#[inline(always)] pub fn baz() {} // ok

pub trait Bar {
  fn bar(); // ok
  fn def_bar() {} // missing #[inline]
}

struct Baz;
impl Baz {
   fn private() {} // ok
}

impl Bar for Baz {
  fn bar() {} // ok - Baz is not exported
}

pub struct PubBaz;
impl PubBaz {
   fn private() {} // ok
   pub fn not_private() {} // missing #[inline]
}

impl Bar for PubBaz {
   fn bar() {} // missing #[inline]
   fn def_bar() {} // missing #[inline]
}
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks the doc comments of publicly visible functions that may panic and warns if there is no # Panics section.

Why is this bad?

Documenting the scenarios in which panicking occurs can help callers who do not want to panic to avoid those situations.

Examples

Since the following function may panic it has a # Panics section in its doc comment:

/// # Panics
///
/// Will panic if y is 0
pub fn divide_by(x: i32, y: i32) -> i32 {
    if y == 0 {
        panic!("Cannot divide by 0")
    } else {
        x / y
    }
}

Configuration

  • check-private-items: Whether to also run the listed lints on private items.

    (default: false)

Applicability: Unspecified(?)
Added in: 1.51.0

What it does

Checks for the doc comments of publicly visible unsafe functions and warns if there is no # Safety section.

Why is this bad?

Unsafe functions should document their safety preconditions, so that users can be sure they are using them safely.

Examples

/// This function should really be documented
pub unsafe fn start_apocalypse(u: &mut Universe) {
    unimplemented!();
}

At least write a line about safety:

/// # Safety
///
/// This function should not be called before the horsemen are ready.
pub unsafe fn start_apocalypse(u: &mut Universe) {
    unimplemented!();
}

Configuration

  • check-private-items: Whether to also run the listed lints on private items.

    (default: false)

Applicability: Unspecified(?)
Added in: 1.39.0

What it does

Checks for empty spin loops

Why is this bad?

The loop body should have something like thread::park() or at least std::hint::spin_loop() to avoid needlessly burning cycles and to conserve energy. Perhaps even better, use an actual lock, if possible.

Known problems

This lint doesn’t currently trigger on while let or loop { match .. { .. } } loops, which would be considered idiomatic in combination with e.g. AtomicBool::compare_exchange_weak.

Example

use core::sync::atomic::{AtomicBool, Ordering};
let b = AtomicBool::new(true);
// give a ref to `b` to another thread, wait for it to become false
while b.load(Ordering::Acquire) {};

Use instead:

while b.load(Ordering::Acquire) {
    std::hint::spin_loop()
}
Applicability: MachineApplicable(?)
Added in: 1.61.0

What it does

Checks if a provided method is used implicitly by a trait implementation.

Why restrict this?

To ensure that a certain implementation implements every method; for example, a wrapper type where every method should delegate to the corresponding method of the inner type’s implementation.

This lint should typically be enabled on a specific trait impl item rather than globally.

Example

trait Trait {
    fn required();

    fn provided() {}
}

#[warn(clippy::missing_trait_methods)]
impl Trait for Type {
    fn required() { /* ... */ }
}

Use instead:

trait Trait {
    fn required();

    fn provided() {}
}

#[warn(clippy::missing_trait_methods)]
impl Trait for Type {
    fn required() { /* ... */ }

    fn provided() { /* ... */ }
}
Applicability: Unspecified(?)
Added in: 1.66.0

What it does

Checks if transmute calls have all generics specified.

Why is this bad?

If not, one or more unexpected types could be used during transmute(), potentially leading to Undefined Behavior or other problems.

This is particularly dangerous in case a seemingly innocent/unrelated change causes type inference to result in a different type. For example, if transmute() is the tail expression of an if-branch, and the else-branch type changes, the compiler may silently infer a different type to be returned by transmute(). That is because the compiler is free to change the inference of a type as long as that inference is technically correct, regardless of the programmer’s unknown expectation.

Both type parameters of any transmute() call, the input and the output type, should be given explicitly: setting the input type explicitly avoids confusion about what the argument’s type actually is, and setting the output type explicitly prevents type inference from producing a technically correct yet unexpected type.

Example

// Avoid "naked" calls to `transmute()`!
let x: i32 = std::mem::transmute([1u16, 2u16]);

// `first_answers` is intended to transmute a slice of bool to a slice of u8.
// But the programmer forgot to index the first element of the outer slice,
// so we are actually transmuting from "pointers to slices" instead of
// transmuting from "a slice of bool", causing a nonsensical result.
let the_answers: &[&[bool]] = &[&[true, false, true]];
let first_answers: &[u8] = std::mem::transmute(the_answers);

Use instead:

let x = std::mem::transmute::<[u16; 2], i32>([1u16, 2u16]);

// The explicit type parameters on `transmute()` make the intention clear,
// and cause a type-error if the actual types don't match our expectation.
let the_answers: &[&[bool]] = &[&[true, false, true]];
let first_answers: &[u8] = std::mem::transmute::<&[bool], &[u8]>(the_answers[0]);
Applicability: MaybeIncorrect(?)
Added in: 1.79.0

What it does

Warns for mistyped suffixes in literals

Why is this bad?

This is most probably a typo

Known problems

  • Does not match on integers too large to fit in the corresponding unsigned type
  • Does not match on _127 since that is a valid grouping for decimal and octal numbers

Example

`2_32` => `2_i32`
`250_8` => `250_u8`
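In code, the typo and the intended literal look like this (a minimal sketch):

let x = 2_32;    // the integer 232; almost certainly meant `2_i32`
let y = 250_8;   // the integer 2508; almost certainly meant `250_u8`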
Applicability: MaybeIncorrect(?)
Added in: 1.30.0

What it does

Checks for items that have the same kind of attributes with mixed styles (inner/outer).

Why is this bad?

Having both styles of the same attribute makes the code more complicated to read.

Known problems

This lint currently has false negatives when the same attributes are mixed but have different path symbols, for example:

#[custom_attribute]
pub fn foo() {
    #![my_crate::custom_attribute]
}

Example

#[cfg(linux)]
pub fn foo() {
    #![cfg(windows)]
}

Use instead:

#[cfg(linux)]
#[cfg(windows)]
pub fn foo() {
}
Applicability: Unspecified(?)
Added in: 1.78.0

What it does

Warns on hexadecimal literals with mixed-case letter digits.

Why is this bad?

It looks confusing.

Example

0x1a9BAcD

Use instead:

0x1A9BACD
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for a read and a write to the same variable where whether the read occurs before or after the write depends on the evaluation order of sub-expressions.

Why restrict this?

While the evaluation order of sub-expressions is fully specified in Rust, it still may be confusing to read an expression where the evaluation order affects its behavior.

Known problems

Code which intentionally depends on the evaluation order, or which is correct for any evaluation order, is still linted.

Example

let mut x = 0;

let a = {
    x = 1;
    1
} + x;
// Unclear whether a is 1 or 2.

Use instead:

let tmp = {
    x = 1;
    1
};
let a = tmp + x;

Past names

  • eval_order_dependence
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks that module layout uses only self named module files; bans mod.rs files.

Why restrict this?

Having multiple module layout styles in a project can be confusing.

Example

src/
  stuff/
    stuff_files.rs
    mod.rs
  lib.rs

Use instead:

src/
  stuff/
    stuff_files.rs
  stuff.rs
  lib.rs
Applicability: Unspecified(?)
Added in: 1.57.0

What it does

Checks for modules that have the same name as their parent module

Why is this bad?

A typical beginner mistake is to have mod foo; and again mod foo { .. } in foo.rs. The expectation is that items inside the inner mod foo { .. } are then available through foo::x, but they are only available through foo::foo::x. If this is done on purpose, it would be better to choose a more representative module name.

Example

// lib.rs
mod foo;
// foo.rs
mod foo {
    ...
}
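A sketch of the usual fix, dropping the inner module so items become reachable directly as foo::x (the item is illustrative):

// lib.rs
mod foo;
// foo.rs
pub fn x() {}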

Configuration

  • allow-private-module-inception: Whether to allow module inception if it’s not public.

    (default: false)

Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Detects public item names that are prefixed or suffixed by the containing public module’s name.

Why is this bad?

It requires the user to type the module name twice in each usage, especially if they choose to import the module rather than its contents.

Lack of such repetition is also the style used in the Rust standard library; e.g. io::Error and fmt::Error rather than io::IoError and fmt::FmtError; and array::from_ref rather than array::array_from_ref.

Known issues

Glob re-exports are ignored; e.g. this will not warn even though it should:

pub mod foo {
    mod iteration {
        pub struct FooIter {}
    }
    pub use iteration::*; // creates the path `foo::FooIter`
}

Example

mod cake {
    struct BlackForestCake;
}

Use instead:

mod cake {
    struct BlackForest;
}

Past names

  • stutter

Configuration

  • allowed-prefixes: List of prefixes to allow when determining whether an item’s name ends with the module’s name. If the rest of an item’s name is an allowed prefix (e.g. item ToFoo or to_foo in module foo), then don’t emit a warning.

Example

allowed-prefixes = [ "to", "from" ]

Noteworthy

  • By default, the following prefixes are allowed: to, as, into, from, try_into and try_from

  • PascalCase variant is included automatically for each snake_case variant (e.g. if try_into is included, TryInto will also be included)

  • Use ".." as part of the list to indicate that the configured values should be appended to the default configuration of Clippy. By default, any configuration will replace the default value

    (default: ["to", "as", "into", "from", "try_into", "try_from"])

Applicability: Unspecified(?)
Added in: 1.33.0

What it does

Checks for modulo arithmetic.

Why restrict this?

The result of the modulo (%) operation might differ between languages when negative numbers are involved. If you interoperate with other languages, it might be beneficial to double-check all places that use modulo arithmetic.

For example, in Rust 17 % -3 = 2, but in Python 17 % -3 = -1.

Example

let x = -17 % 3;
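A sketch illustrating the difference, and i32::rem_euclid as one way to obtain the least non-negative remainder regardless of the operands’ signs:

let a = -17 % 3;                  // -2: Rust truncates toward zero
let b = (-17_i32).rem_euclid(3);  // 1: the least non-negative remainder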

Configuration

  • allow-comparison-to-zero: Don’t lint when comparing the result of a modulo operation to zero.

    (default: true)

Applicability: Unspecified(?)
Added in: 1.42.0

What it does

Checks for getting the remainder of integer division by one or minus one.

Why is this bad?

The result for a divisor of one can only ever be zero; for minus one it can cause a panic or overflow (if the left operand is the minimal value of the respective integer type), or the result is zero. No one will write such code deliberately, unless trying to win an Underhanded Rust Contest. Even for that contest, it’s probably a bad idea. Use something more underhanded.

Example

let a = x % 1;
let a = x % -1;
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for nested assignments.

Why is this bad?

While this is in most cases already a type mismatch, the result of an assignment being () can throw off people coming from languages like Python or C, where such assignments return a copy of the assigned value.

Example

a = b = 42;

Use instead:

b = 42;
a = b;
Applicability: Unspecified(?)
Added in: 1.65.0

What it does

Checks if a generic parameter is bounded both in the bound predicate at its declaration and in the where clause.

Why is this bad?

It can be confusing for developers when seeing bounds for a generic in multiple places.

Example

fn ty<F: std::fmt::Debug>(a: F)
where
    F: Sized,
{}

Use instead:

fn ty<F>(a: F)
where
    F: Sized + std::fmt::Debug,
{}
Applicability: Unspecified(?)
Added in: 1.78.0

What it does

Checks to see if multiple versions of a crate are being used.

Why is this bad?

This bloats the size of targets, and can lead to confusing error messages when structs or traits are used interchangeably between different versions of a crate.

Known problems

Because this can be caused purely by the dependencies themselves, it’s not always possible to fix this issue. In those cases, you can allow that specific crate using the allowed-duplicate-crates configuration option.

Example

[dependencies]
ctrlc = "=3.1.0"
ansi_term = "=0.11.0"
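If the duplicate cannot be avoided, a clippy.toml entry along these lines (the crate name is purely illustrative) allows it:

allowed-duplicate-crates = ["ansi_term"]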

Configuration

  • allowed-duplicate-crates: A list of crate names to allow duplicates of

    (default: [])

Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for multiple inherent implementations of a struct

Why restrict this?

Splitting the implementation of a type makes the code harder to navigate.

Example

struct X;
impl X {
    fn one() {}
}
impl X {
    fn other() {}
}

Could be written:

struct X;
impl X {
    fn one() {}
    fn other() {}
}
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for unsafe blocks that contain more than one unsafe operation.

Why restrict this?

Combined with undocumented_unsafe_blocks, this lint ensures that each unsafe operation must be independently justified. Combined with unused_unsafe, this lint also ensures elimination of unnecessary unsafe blocks through refactoring.

Example

/// Reads a `char` from the given pointer.
///
/// # Safety
///
/// `ptr` must point to four consecutive, initialized bytes which
/// form a valid `char` when interpreted in the native byte order.
fn read_char(ptr: *const u8) -> char {
    // SAFETY: The caller has guaranteed that the value pointed
    // to by `bytes` is a valid `char`.
    unsafe { char::from_u32_unchecked(*ptr.cast::<u32>()) }
}

Use instead:

/// Reads a `char` from the given pointer.
///
/// # Safety
///
/// - `ptr` must be 4-byte aligned, point to four consecutive
///   initialized bytes, and be valid for reads of 4 bytes.
/// - The bytes pointed to by `ptr` must represent a valid
///   `char` when interpreted in the native byte order.
fn read_char(ptr: *const u8) -> char {
    // SAFETY: `ptr` is 4-byte aligned, points to four consecutive
    // initialized bytes, and is valid for reads of 4 bytes.
    let int_value = unsafe { *ptr.cast::<u32>() };

    // SAFETY: The caller has guaranteed that the four bytes
    // pointed to by `bytes` represent a valid `char`.
    unsafe { char::from_u32_unchecked(int_value) }
}
Applicability: Unspecified(?)
Added in: 1.69.0

What it does

Checks for public functions that have no #[must_use] attribute, but return something not already marked must-use, have no mutable arg and mutate no statics.

Why is this bad?

Not bad at all, this lint just shows places where you could add the attribute.

Known problems

The lint only checks the arguments for mutable types without looking if they are actually changed. On the other hand, it also ignores a broad range of potentially interesting side effects, because we cannot decide whether the programmer intends the function to be called for the side effect or the result. Expect many false positives. At least we don’t lint if the result type is unit or already #[must_use].

Examples

// this could be annotated with `#[must_use]`.
pub fn id<T>(t: T) -> T { t }
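A sketch of the same function with the attribute added:

#[must_use]
pub fn id<T>(t: T) -> T { t }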
Applicability: MachineApplicable(?)
Added in: 1.40.0

What it does

Checks for a #[must_use] attribute on unit-returning functions and methods.

Why is this bad?

Unit values are useless. The attribute is likely a remnant of a refactoring that removed the return type.

Examples

#[must_use]
fn useless() { }
Applicability: MachineApplicable(?)
Added in: 1.40.0

What it does

This lint checks for functions that take immutable references and return mutable ones. This will not trigger if no unsafe code exists as there are multiple safe functions which will do this transformation

To be on the conservative side, if there’s at least one mutable reference with the output lifetime, this lint will not trigger.

Why is this bad?

Creating a mutable reference which can be repeatedly derived from an immutable reference is unsound, as it allows creating multiple live mutable references to the same object.

This error actually led to an interim Rust release, 1.15.1.

Known problems

This pattern is used by memory allocators to allow allocating multiple objects while returning mutable references to each one. So long as different mutable references are returned each time such a function may be safe.

Example

fn foo(&Foo) -> &mut Bar { .. }
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for instances of mut mut references.

Why is this bad?

Multiple muts don’t add anything meaningful to the source. This is either a copy’n’paste error, or it shows a fundamental misunderstanding of references.

Example

let x = &mut &mut y;
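A sketch of what was most likely intended, a single mutable borrow:

let x = &mut y;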
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for &mut Mutex::lock calls

Why is this bad?

Mutex::lock is less efficient than calling Mutex::get_mut. In addition, get_mut gives a static guarantee that the mutex isn’t locked, instead of just a runtime guarantee.

Example

use std::sync::{Arc, Mutex};

let mut value_rc = Arc::new(Mutex::new(42_u8));
let value_mutex = Arc::get_mut(&mut value_rc).unwrap();

let mut value = value_mutex.lock().unwrap();
*value += 1;

Use instead:

use std::sync::{Arc, Mutex};

let mut value_rc = Arc::new(Mutex::new(42_u8));
let value_mutex = Arc::get_mut(&mut value_rc).unwrap();

let value = value_mutex.get_mut().unwrap();
*value += 1;
Applicability: MaybeIncorrect(?)
Added in: 1.49.0

What it does

Checks for loops with a range bound that is a mutable variable.

Why is this bad?

One might think that modifying the mutable variable changes the loop bounds. It doesn’t.

Known problems

False positive when mutation is followed by a break, but the break is not immediately after the mutation:

let mut x = 5;
for _ in 0..x {
    x += 1; // x is a range bound that is mutated
    ..; // some other expression
    break; // leaves the loop, so mutation is not an issue
}

False positive on nested loops (#6072)

Example

let mut foo = 42;
for i in 0..foo {
    foo -= 1;
    println!("{i}"); // prints numbers from 0 to 41, not 0 to 21
}
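If the bound really is meant to shrink as foo changes, a while loop re-evaluates the condition on every iteration (a sketch, not taken from the lint’s documentation):

let mut foo = 42;
let mut i = 0;
while i < foo {
    foo -= 1;
    println!("{i}");
    i += 1;
}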
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for sets/maps with mutable key types.

Why is this bad?

All of HashMap, HashSet, BTreeMap and BTreeSet rely on either the hash or the order of keys being unchanging, so having key types with interior mutability is a bad idea.

Known problems

False Positives

It’s correct to use a struct that contains interior mutability as a key when its implementation of Hash or Ord doesn’t access any of the interior mutable types. However, this lint is unable to recognize this, so it will often cause false positives in these cases.

False Negatives

This lint does not follow raw pointers (*const T or *mut T) as Hash and Ord apply only to the address of the contained value. This can cause false negatives for custom collections that use raw pointers internally.

Example

use std::cmp::{PartialEq, Eq};
use std::collections::HashSet;
use std::hash::{Hash, Hasher};
use std::sync::atomic::AtomicUsize;

struct Bad(AtomicUsize);
impl PartialEq for Bad {
    fn eq(&self, rhs: &Self) -> bool {
         ..
    }
}

impl Eq for Bad {}

impl Hash for Bad {
    fn hash<H: Hasher>(&self, h: &mut H) {
        ..
    }
}

fn main() {
    let _: HashSet<Bad> = HashSet::new();
}

Configuration

  • ignore-interior-mutability: A list of paths to types that should be treated as if they do not contain interior mutability

    (default: ["bytes::Bytes"])

Applicability: Unspecified(?)
Added in: 1.42.0

What it does

Checks for usage of Mutex<X> where an atomic will do.

Why restrict this?

Using a mutex just to make access to a plain bool or reference sequential is shooting flies with cannons. std::sync::atomic::AtomicBool and std::sync::atomic::AtomicPtr are leaner and faster.

On the other hand, it is generally easier to verify the correctness of code that uses a Mutex. An atomic does not behave the same as an equivalent mutex. See this issue’s commentary for more details.

Known problems

This lint cannot detect if the mutex is actually used for waiting before a critical section.

Example

let x = Mutex::new(&y);

Use instead:

let x = AtomicBool::new(y);
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for usage of Mutex<X> where X is an integral type.

Why is this bad?

Using a mutex just to make access to a plain integer sequential is shooting flies with cannons. std::sync::atomic::AtomicUsize is leaner and faster.

Known problems

This lint cannot detect if the mutex is actually used for waiting before a critical section.

Example

let x = Mutex::new(0usize);

Use instead:

let x = AtomicUsize::new(0usize);
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for naive byte counts

Why is this bad?

The bytecount crate has methods to count your bytes faster, especially for large slices.

Known problems

If you have predominantly small slices, the bytecount::count(..) method may actually be slower. However, if you can ensure that less than 2³²-1 matches arise, the naive_count_32(..) can be faster in those cases.

Example

let count = vec.iter().filter(|x| **x == 0u8).count();

Use instead:

let count = bytecount::count(&vec, 0u8);
Applicability: MaybeIncorrect(?)
Added in: pre 1.29.0

What it does

The lint checks for self in fn parameters that specify the Self-type explicitly

Why is this bad?

It increases the amount of code and decreases its readability.

Example

enum ValType {
    I32,
    I64,
    F32,
    F64,
}

impl ValType {
    pub fn bytes(self: Self) -> usize {
        match self {
            Self::I32 | Self::F32 => 4,
            Self::I64 | Self::F64 => 8,
        }
    }
}

Could be rewritten as

enum ValType {
    I32,
    I64,
    F32,
    F64,
}

impl ValType {
    pub fn bytes(self) -> usize {
        match self {
            Self::I32 | Self::F32 => 4,
            Self::I64 | Self::F64 => 8,
        }
    }
}
Applicability: MachineApplicable(?)
Added in: 1.47.0

What it does

It detects useless calls to str::as_bytes() before calling len() or is_empty().

Why is this bad?

The len() and is_empty() methods are also directly available on strings, and they return identical results. In particular, len() on a string returns the number of bytes.

Example

let len = "some string".as_bytes().len();
let b = "some string".as_bytes().is_empty();

Use instead:

let len = "some string".len();
let b = "some string".is_empty();
Applicability: MachineApplicable(?)
Added in: 1.84.0

What it does

Checks for usage of bitwise and/or operators between booleans, where performance may be improved by using a lazy and.

Why is this bad?

The bitwise operators do not support short-circuiting, so it may hinder code performance. Additionally, boolean logic “masked” as bitwise logic is not caught by lints like unnecessary_fold

Known problems

This lint evaluates only when the right side is determined to have no side effects. At this time, that determination is quite conservative.

Example

let (x,y) = (true, false);
if x & !y {} // where both x and y are booleans

Use instead:

let (x,y) = (true, false);
if x && !y {}
Applicability: MachineApplicable(?)
Added in: 1.54.0

What it does

Checks for expressions of the form if c { true } else { false } (or vice versa) and suggests using the condition directly.

Why is this bad?

Redundant code.

Known problems

Maybe false positives: Sometimes, the two branches are painstakingly documented (which we, of course, do not detect), so they may have some value. Even then, the documentation can be rewritten to match the shorter code.

Example

if x {
    false
} else {
    true
}

Use instead:

!x
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for expressions of the form if c { x = true } else { x = false } (or vice versa) and suggests assigning the variable directly from the condition.

Why is this bad?

Redundant code.

Example

if must_keep(x, y) {
    skip = false;
} else {
    skip = true;
}

Use instead:

skip = !must_keep(x, y);
Applicability: MachineApplicable(?)
Added in: 1.71.0

What it does

Checks for address of operations (&) that are going to be dereferenced immediately by the compiler.

Why is this bad?

Suggests that the receiver of the expression borrows the expression.

Known problems

The lint cannot tell when the implementation of a trait for &T and T do different things. Removing a borrow in such a case can change the semantics of the code.

Example

fn fun(_a: &i32) {}

let x: &i32 = &&&&&&5;
fun(&x);

Use instead:

let x: &i32 = &5;
fun(x);

Past names

  • ref_in_deref

Configuration

  • msrv: The minimum rust version that the project supports. Defaults to the rust-version field in Cargo.toml

    (default: current version)

Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for bindings that needlessly destructure a reference and borrow the inner value with &ref.

Why is this bad?

This pattern has no effect in almost all cases.

Example

let mut v = Vec::<String>::new();
v.iter_mut().filter(|&ref a| a.is_empty());

if let &[ref first, ref second] = v.as_slice() {}

Use instead:

let mut v = Vec::<String>::new();
v.iter_mut().filter(|a| a.is_empty());

if let [first, second] = v.as_slice() {}
Applicability: MachineApplicable(?)
Added in: pre 1.29.0

What it does

Checks for borrow operations (&) that are used as a generic argument to a function when the borrowed value could be used.

Why is this bad?

Suggests that the receiver of the expression borrows the expression.

Known problems

The lint cannot tell when the implementation of a trait for &T and T do different things. Removing a borrow in such a case can change the semantics of the code.

Example

fn f(_: impl AsRef<str>) {}

let x = "foo";
f(&x);

Use instead:

fn f(_: impl AsRef<str>) {}

let x = "foo";
f(x);
Applicability: MachineApplicable(?)
Added in: 1.74.0

What it does

Checks if an iterator is used to check if a string is ascii.

Why is this bad?

The str type already implements the is_ascii method.

Example

"foo".chars().all(|c| c.is_ascii());

Use instead:

"foo".is_ascii();
Applicability: MachineApplicable(?)
Added in: 1.81.0

What it does

Checks for functions collecting an iterator when collect is not needed.

Why is this bad?

collect causes the allocation of a new data structure, when this allocation may not be needed.

Example

let len = iterator.collect::<Vec<_>>().len();

Use instead:

let len = iterator.count();
Applicability: MachineApplicable(?)
Added in: 1.30.0

What it does

The lint checks for if statements inside loops that contain a continue statement in either their main block or their else block, where omitting the else block, possibly with some rearrangement of code, would make the code easier to understand.

Why is this bad?

Having explicit else blocks for if statements containing continue in their THEN branch adds unnecessary branching and nesting to the code. Having an else block containing just continue can also be better written by grouping the statements following the whole if statement within the THEN block and omitting the else block completely.

Example

while condition() {
    update_condition();
    if x {
        // ...
    } else {
        continue;
    }
    println!("Hello, world");
}

Could be rewritten as

while condition() {
    update_condition();
    if x {
        // ...
        println!("Hello, world");
    }
}

As another example, the following code

loop {
    if waiting() {
        continue;
    } else {
        // Do something useful
    }
    # break;
}

Could be rewritten as

loop {
    if waiting() {
        continue;
    }
    // Do something useful
    # break;
}
Applicability: Unspecified(?)
Added in: pre 1.29.0

What it does

Checks for fn main() { .. } in doctests

Why is this bad?

The test can be shorter (and likely more readable) if the fn main() is left implicit.

Examples

/// An example of a doctest with a `main()` function
///
/// # Examples
///
/// ```
/// fn main() {
///     // this need not be in an `fn`
/// }
/// ```
fn needless_main() {
    unimplemented!();
}
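A sketch of the same doctest with fn main() left implicit:

/// An example of a doctest without an explicit `main()` function
///
/// # Examples
///
/// ```
/// // this need not be in an `fn`
/// ```
fn needless_main() {
    unimplemented!();
}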
Applicability: Unspecified(?)
Added in: 1.40.0

What it does

Checks for empty else branches.

Why is this bad?

An empty else branch does nothing and can be removed.

Example

if check() {
    println!("Check successful!");
} else {
}

Use instead:

if check() {
    println!("Check successful!");
}
Applicability: MachineApplicable(?)
Added in: 1.72.0

What it does

Checks for usage of for_each that would be more simply written as a for loop.

Why is this bad?

for_each may be used after applying iterator transformers like filter for better readability and performance. It may also be used to fit a simple operation on one line. But when none of these apply, a simple for loop is more idiomatic.

Example

let v = vec![0, 1, 2];
v.iter().for_each(|elem| {
    println!("{elem}");
})

Use instead:

let v = vec![0, 1, 2];
for elem in &v {
    println!("{elem}");
}

Known Problems

When doing things such as:

let v = vec![0, 1, 2];
v.iter().for_each(|elem| unsafe {
    libc::printf(c"%d\n".as_ptr(), elem);
});

This lint will not trigger.

Applicability: MachineApplicable(?)
Added in: 1.53.0

What it does

Checks for empty if branches with no else branch.

Why is this bad?

It can be entirely omitted, and often the condition too.

Known issues

This will usually only suggest to remove the if statement, not the condition. Other lints such as no_effect will take care of removing the condition if it’s unnecessary.

Example

if really_expensive_condition(&i) {}
if really_expensive_condition_with_side_effects(&mut i) {}

Use instead:

// <omitted>
really_expensive_condition_with_side_effects(&mut i);
Applicability: MachineApplicable(?)
Added in: 1.72.0